FDA Explores AI Integration to Speed Up Drug Approval Process

The Food and Drug Administration (FDA) has been exploring the use of AI to speed up the drug approval process, and recent negotiations with OpenAI mark a breakthrough in applying artificial intelligence to accelerate reviews, as reported by Wired.
FDA Commissioner Marty Makary also spotlighted the agency's first use of AI-powered scientific review for a product, noting the promise of AI to transform and accelerate the approval process for new drugs, particularly for diseases such as diabetes and cancer.
Although the specifics of OpenAI's involvement remain unclear, sources indicate that the collaboration could center on a project called cderGPT, aimed at improving the Center for Drug Evaluation and Research's processes.
AI's Possibilities in Medication Approval and Regulatory Processes
AI's role in drug approval could help compress part of the notoriously lengthy drug development timeline. Currently, the FDA's review process for drug applications takes about a year.
However, there are existing mechanisms to expedite this, such as the Fast Track and Breakthrough Therapy designations, which are available for drugs addressing serious conditions or unmet medical needs.
Beyond accelerating the review process, AI could also play a critical role in other phases of drug development. Before final approval, AI might automate tasks such as verifying the completeness of drug applications, ensuring that applicants receive timely feedback.
According to Rafael Rosengarten, CEO of Genialis, while AI holds promise in automating certain review tasks, it is crucial to establish policies on the type of data used for training AI models and acceptable model performance standards.
Concerns About AI Reliability in Drug Reviews
Despite AI's potential advantages, doubts remain about its reliability. A former FDA staff member indicated that AI models such as ChatGPT can generate credible-sounding but false information, calling their reliability in regulatory environments into question. This concern underscores the need for rigorous testing and verification before AI is fully integrated into the review process.
According to Wired, the FDA is already experimenting with AI for internal use. In late 2023, the agency advertised a fellowship for research into large language models (LLMs) to enhance precision medicine, drug development, and regulatory science.
Moreover, OpenAI's recent announcement of ChatGPT Gov, a government-compliant version of its AI tool, aligns with the FDA's efforts to integrate AI into a regulatory context, with plans for the chatbot to handle sensitive government data once it receives FedRAMP accreditation.