Artificial intelligence (AI) is no longer a futuristic concept in drug development—it’s already embedded across the lifecycle of modern therapeutics. From molecule design and trial monitoring to pharmacovigilance and manufacturing, AI is transforming how we discover, develop, and deliver medicines. Yet the regulatory landscape hasn’t kept pace.

In their 2025 policy review, Singh, Zhou, and Auclair introduce a much-needed solution: the AI-Enabled Ecosystem for Therapeutics (AI2ET) framework (Singh et al., 2025). Designed to address regulatory fragmentation and uncertainty, AI2ET provides a structured, risk-based model for overseeing AI across all phases of therapeutic development—not just at the product level.


The Challenge: Fragmented Oversight

Regulation of AI in drug development is currently inconsistent, both within and across countries. While the FDA has made strides in developing frameworks for AI in medical devices, oversight of AI in drug R&D, manufacturing, and real-world data analysis remains patchy (FDA, 2021).

Further complicating matters, over 70 countries have developed national AI policies—yet few explicitly address human therapeutics, and almost none align internationally (Singh et al., 2025). This fragmentation leads to delayed adoption, uneven access to innovation, and confusion for both sponsors and regulators.


Enter AI2ET: A Systemic Framework

The AI2ET framework proposes a paradigm shift: instead of regulating AI as isolated tools or endpoints, it views AI as part of an interconnected ecosystem spanning:

  • Systems: AI-enabled infrastructure like digital twins, NLP tools, and regulatory automation.
  • Processes: Automated workflows for data analysis, risk detection, and document management.
  • Platforms: Scalable environments that host and operate AI applications.
  • Products: The final outputs, such as drugs, biologics, or vaccines, whose design or testing was influenced by AI.

This approach recognizes that AI is no longer confined to discrete tasks—it’s woven into the very structure of how medicines are developed, evaluated, and monitored (Singh et al., 2025).


Smarter Oversight: A Risk-Based Decision Tree

A core component of AI2ET is its risk-based regulatory flowchart. This tool guides decision-making when AI tools fall outside of existing precedent or guidance. It prompts regulators to assess:

  1. Whether the AI application is covered by current frameworks,
  2. Whether precedent from similar AI contexts (e.g., in devices) can be adapted, and
  3. The risk level of the AI-enabled component (low, medium, high, or unacceptable).

This process reflects the FDA’s existing Good Machine Learning Practice (GMLP) principles and aligns with other tiered oversight models (FDA, 2021). It brings consistency to how AI tools are evaluated across different use cases—even when they’re novel or evolving.
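The three-step triage described above can be read as a simple decision function. The sketch below is purely illustrative: the `Risk` tiers come from the framework's risk levels, but the outcome labels and function names are our own shorthand, not language from AI2ET or any regulator.

```python
from enum import Enum

class Risk(Enum):
    """Risk tiers named in the AI2ET flowchart."""
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

def triage(covered_by_framework: bool, adaptable_precedent: bool, risk: Risk) -> str:
    """Illustrative walk through the three AI2ET decision steps."""
    # Step 1: is the AI application already covered by current frameworks?
    if covered_by_framework:
        return "apply existing framework"
    # Step 2: can precedent from a similar AI context (e.g., devices) be adapted?
    if adaptable_precedent:
        return "adapt existing precedent"
    # Step 3: otherwise, route by the assessed risk tier.
    if risk is Risk.UNACCEPTABLE:
        return "do not proceed"
    return f"case-by-case review at {risk.value} risk tier"
```

For example, `triage(False, False, Risk.MEDIUM)` would route a novel, medium-risk tool to case-by-case review, which is where the framework's call for expert judgment and transparency applies.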


Building Toward Global Harmonization

AI2ET doesn’t stop at risk assessment. It lays out six clear policy recommendations to drive international alignment:

  1. Standardize definitions of AI specific to healthcare and therapeutics.
  2. Modernize the “Context of Use” (CoU) model, which often proves too static for adaptive systems.
  3. Expand oversight to include discovery-phase AI tools and regulatory operations.
  4. Leverage lessons from medical device AI regulation, particularly on model validation and lifecycle monitoring.
  5. Invest in regulatory capacity-building in under-resourced regions to ensure equitable access to AI innovation.
  6. Use decision trees when guidance is lacking, relying on expert judgment, transparency, and public trust.

By embedding these recommendations into national and international frameworks, the AI2ET model offers a structured pathway to harmonize regulation in a rapidly evolving field (Singh et al., 2025).


A Familiar Foundation

The strength of AI2ET lies in its adaptability. It builds on successful precedents like ICH M7, which requires dual-method validation for AI-based QSAR toxicology models (ICH, 2023). It also mirrors the layered oversight seen in medical device regulation—adjusting scrutiny based on the AI tool’s function, potential impact, and maturity.

AI2ET isn’t proposing a radical departure from regulatory science. It’s simply extending existing best practices to a more integrated, AI-driven world.


Final Thoughts

Frameworks like AI2ET offer more than policy guidance—they reflect the urgency of ensuring regulatory science keeps up with therapeutic innovation. As one GMDP Academy participant put it during the CMD Module 3 student presentations:

“There is a lack of clear guidance currently on the use of AI, and regulators will need to address this.”
— Gabriel Mircus, Senior Director, Medical Affairs, Global Adult Pneumococcal Conjugate Vaccines, Pfizer (USA)

Through modules including Digital Technology in Medicines Development and Regulatory Affairs, Drug Safety and Pharmacovigilance, the GMDP Academy prepares professionals to bridge these regulatory gaps with the tools, vocabulary, and vision to lead global change in medicines development.


References

FDA. (2021, October). Good machine learning practice for medical device development: Guiding principles. U.S. Food & Drug Administration. https://www.fda.gov/media/153486/download

International Council for Harmonisation (ICH). (2023). ICH guideline M7(R2) on assessment and control of DNA reactive (mutagenic) impurities in pharmaceuticals to limit potential carcinogenic risk. https://www.ich.org/page/m7

Singh, R., Zhou, K., & Auclair, J. R. (2025). Reimagining drug regulation in the age of AI: A framework for the AI-enabled Ecosystem for Therapeutics. Frontiers in Medicine, 12, 1679611. https://doi.org/10.3389/fmed.2025.1679611

Disclaimers

  • The material in these reviews is from various public open-access sources, meant for educational and informational purposes only
  • Any personal opinions expressed are those of only the author(s) and are not intended to represent the position of any organization(s)
  • No official support by any organization(s) has been provided or should be inferred