Applying Good AI Practice Across the Drug Development Lifecycle

Introduction

AI is increasingly used throughout drug and biological product development. To realize its potential while protecting patient safety, data integrity, and regulatory confidence, clear expectations are essential. This need for clarity underpins recent regulatory initiatives on AI use in this field.

To provide a clear framework for responsible AI use in drug development, the FDA’s Center for Drug Evaluation and Research (CDER), the Center for Biologics Evaluation and Research (CBER), and the European Medicines Agency (EMA) have outlined Guiding Principles of Good AI Practice in Drug Development. These principles support industry and developers in using AI to inform regulatory decisions and protect public health.

These principles reflect current regulatory thinking and good practice.


Human-Centric by Design

AI in drug development should augment, not replace, human decision-making. Human oversight is essential, especially for regulatory, scientific, or clinical judgments. Roles and responsibilities should be clearly defined so that qualified experts can understand AI outputs and intervene when needed.
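
To make this concrete, the minimal sketch below routes low-confidence model outputs to a qualified reviewer instead of acting on them automatically. The threshold and routing labels are illustrative assumptions, not values drawn from the guidance.

    from dataclasses import dataclass

    REVIEW_THRESHOLD = 0.90  # hypothetical cutoff; would be set during risk assessment

    @dataclass
    class Prediction:
        subject_id: str
        label: str
        confidence: float

    def route(prediction: Prediction) -> str:
        """Send low-confidence outputs to a qualified expert instead of auto-accepting."""
        if prediction.confidence < REVIEW_THRESHOLD:
            return "human_review"  # expert examines the output and may override it
        return "auto_accept"       # still logged so the decision remains auditable

    print(route(Prediction("S-001", "responder", 0.97)))  # auto_accept
    print(route(Prediction("S-002", "responder", 0.62)))  # human_review

In practice, the review threshold would come out of the risk assessment itself, and even auto-accepted outputs would remain logged for audit.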

Risk-Based Approach

The risks posed by AI systems vary widely with their application. A risk-based approach aligns validation and oversight with the potential impact on patient safety, product quality, and regulatory decisions. Higher-risk uses warrant stricter controls.
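
As a rough illustration, oversight could be tiered by potential impact. The tiers and controls below are hypothetical examples of how a team might encode such a policy, not a regulatory taxonomy.

    def risk_tier(affects_patient_safety: bool, informs_regulatory_decision: bool) -> str:
        """Illustrative tiering: controls tighten as potential impact grows."""
        if affects_patient_safety:
            return "high: full validation, independent review, continuous monitoring"
        if informs_regulatory_decision:
            return "medium: documented validation and periodic performance review"
        return "low: standard software quality controls"

    print(risk_tier(affects_patient_safety=True, informs_regulatory_decision=True))
    print(risk_tier(affects_patient_safety=False, informs_regulatory_decision=False))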

Adherence to Standards

AI development should align with established scientific, technical, and regulatory standards. Recognized standards promote consistency, reliability, and transparency, and they aid communication with regulators.

Clear Context of Use

Each AI system should have a clear context of use, meaning an explicit description of what the system does, how it is intended to be used, and its limitations. This clarity ensures the AI is used appropriately and stays within its validated range.
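
One lightweight way to make the context of use explicit is to keep a structured record alongside the model. The field names and example values below are an assumed minimal set, not a mandated schema.

    from dataclasses import dataclass, field

    @dataclass
    class ContextOfUse:
        """Hypothetical context-of-use record; field names are illustrative."""
        question_addressed: str    # what the model output informs
        intended_use: str          # where in the workflow it is applied
        validated_population: str  # the data and population it was validated on
        limitations: list = field(default_factory=list)

    cou = ContextOfUse(
        question_addressed="Estimate risk of a hepatotoxicity signal in Phase 2",
        intended_use="Flag subjects for additional liver-function monitoring",
        validated_population="Adults aged 18-65 enrolled in two prior trials",
        limitations=["Not validated for pediatric subjects", "Not a diagnostic tool"],
    )
    print(cou)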

Multidisciplinary Expertise

AI development requires expertise across clinical, regulatory, statistical, data science, engineering, and quality disciplines. Multidisciplinary input ensures systems are scientifically sound and meet regulatory expectations.

Data Governance and Documentation

AI relies on strong data governance, which refers to the frameworks and procedures ensuring data is managed properly. The data used to train, test, and validate AI models must be relevant, reliable, and well-documented, supporting traceability, reproducibility, and regulatory review.
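
A practical starting point, sketched below, is to fingerprint every dataset used for training, testing, and validation so the exact data version behind any result can be traced. The record layout here is an assumption for illustration.

    import datetime
    import hashlib

    def dataset_record(path: str, role: str, source: str) -> dict:
        """Hash a dataset file so the exact version behind a result stays traceable."""
        with open(path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        return {
            "path": path,
            "role": role,      # "train", "test", or "validation"
            "source": source,  # provenance, e.g. which system exported the data
            "sha256": digest,  # ties downstream results to one exact data version
            "recorded_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        }

    # Example (assumes the file exists):
    # record = dataset_record("trial_x_labs.csv", "train", "EDC export, 2024-03-01")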

Model Design and Development Practices

Good model design and development practices help ensure AI systems work as intended. This includes selecting an appropriate model, applying effective training procedures, and building in safeguards against bias or unintended effects. Development should follow systematic, controlled, and well-documented steps.
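
For example, one minimal step toward controlled, well-documented development is to pin every source of randomness and persist the exact configuration next to the trained artifact. The configuration fields below are hypothetical.

    import json
    import random

    # Pin every source of randomness so a training run can be repeated exactly.
    CONFIG = {
        "model_type": "logistic_regression",  # assumed model choice for illustration
        "seed": 42,
        "train_fraction": 0.8,
        "bias_safeguard": "class weights balanced against outcome imbalance",
    }

    random.seed(CONFIG["seed"])
    # A real pipeline would also seed numpy and the ML framework the same way.

    # Persist the exact configuration alongside the trained artifact for review.
    with open("model_config.json", "w") as f:
        json.dump(CONFIG, f, indent=2)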

Risk-Based Performance Assessment

AI model performance should be evaluated using metrics that are appropriate to the context of use and level of risk. Performance assessment should demonstrate that the model is reliable, robust, and suitable for its intended purpose. Where relevant, evaluation should consider variability across populations, datasets, or use conditions.
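
The toy sketch below illustrates one such check: computing sensitivity and specificity per subgroup rather than only overall, so uneven performance surfaces early. The subgroups and labels are invented purely for illustration.

    def sensitivity_specificity(y_true, y_pred):
        """Basic classification metrics; real assessments would add more as needed."""
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        return tp / (tp + fn), tn / (tn + fp)

    # Toy labels split by a hypothetical subgroup (e.g., an age band).
    groups = {
        "age 18-40": ([1, 0, 1, 0], [1, 0, 1, 1]),
        "age 41-65": ([1, 1, 0, 0], [1, 0, 0, 0]),
    }
    for name, (y_true, y_pred) in groups.items():
        sens, spec = sensitivity_specificity(y_true, y_pred)
        print(f"{name}: sensitivity={sens:.2f}, specificity={spec:.2f}")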

Life Cycle Management

AI systems should be managed across their entire lifecycle, from development and deployment through monitoring, maintenance, and potential updates. Ongoing oversight helps ensure continued performance and relevance, particularly when data, processes, or environments change over time.
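
As a simple example of ongoing oversight, the sketch below flags when incoming input data drifts away from the distribution the model was validated on, using a basic z-score check; production monitoring would use more rigorous statistical tests.

    import statistics

    def drift_alert(reference, incoming, z_limit=3.0):
        """Flag when an incoming batch mean drifts far from the reference data.
        A simple z-test on the mean; real monitoring would go further."""
        mu = statistics.mean(reference)
        sem = statistics.stdev(reference) / len(incoming) ** 0.5
        return abs(statistics.mean(incoming) - mu) / sem > z_limit

    baseline = [0.48, 0.52, 0.50, 0.49, 0.51, 0.50]  # inputs seen at validation
    new_batch = [0.61, 0.63, 0.60, 0.62]             # inputs seen in production
    if drift_alert(baseline, new_batch):
        print("Input drift detected: review the model before continued use")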

Clear, Essential Information

AI systems should provide clear and essential information to users, reviewers, and regulators. This includes transparency around the model’s purpose, limitations, inputs, and outputs. Clear communication supports appropriate use and builds confidence in AI-supported decisions.
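
In its simplest form this can be a short, plain-language summary bundled with the system, as in the illustrative sketch below; the fields and wording are assumptions, not a required format.

    # Hypothetical plain-language summary bundled with an AI system.
    model_info = {
        "purpose": "Flags subjects who may need extra liver-function monitoring",
        "inputs": "Routine lab values and basic demographics",
        "outputs": "A 0-1 risk score with a short reviewer-facing rationale",
        "limitations": "Not validated outside adults 18-65; not a diagnostic",
    }
    for field_name, value in model_info.items():
        print(f"{field_name.capitalize()}: {value}")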

Conclusion

The guiding principles of good AI practice reflect a shared regulatory commitment to enabling innovation while upholding rigorous safety and quality standards. By adopting these principles, drug developers can responsibly integrate AI while supporting regulatory excellence and patient protection.

Adhering to these principles can help ensure AI advances drug development safely, responsibly, and with public health as a priority.