The European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) have issued high-level principles for staff in the European medicines regulatory network (EMRN) on the use of large language models (LLMs), a type of generative AI. LLMs can help regulators by streamlining document review, automating data mining, and assisting with administrative tasks. However, challenges such as result variability, inaccuracies (hallucinations), and data security risks must be addressed.

The guiding principles aim to educate staff on the effective use of LLMs while mitigating risks. They cover safe data input, cross-checking outputs, and understanding whom to consult when issues arise. Continuous learning and familiarization with LLMs are crucial for responsible use. Additionally, agencies are encouraged to set governance rules, define use cases, provide training, and monitor risks to support staff in adopting these tools effectively.

Read more in the EMA's announcement [1].

References

  1. European Medicines Agency (EMA). (2024, September 5). Harnessing AI in medicines regulation: use of large language models (LLMs). https://www.ema.europa.eu/en/news/harnessing-ai-medicines-regulation-use-large-language-models-llms

Disclaimers

  • The material in these reviews is drawn from various public, open-access sources and is meant for educational and informational purposes only
  • Any personal opinions expressed are those of only the author(s) and are not intended to represent the position of any organization(s)
  • No official support by any organization(s) has been provided or should be inferred