FDA staffers who spoke with Stat News, meanwhile, called the tool “rushed” and said its capabilities had been overstated by officials, including Makary and those at the Department of Government Efficiency (DOGE), which was headed by controversial billionaire Elon Musk. In its current form, the staffers said, it should only be used for administrative tasks, not scientific ones.
“Makary and DOGE think AI can replace staff and cut review times, but it decidedly cannot,” one employee said. The staffer also said that the FDA has failed to set up guardrails for the tool’s use. “I’m not sure in their rush to get it out that anyone is thinking through policy and use,” the FDA employee said.
According to Stat, Elsa is based on Anthropic’s Claude LLM and is being developed by the consulting firm Deloitte. Since 2020, Deloitte has been paid $13.8 million to develop the original database of FDA documents from which Elsa’s training data is derived. In April, the firm was awarded a $14.7 million contract to scale the tech across the agency. The FDA said that Elsa was built within a high-security GovCloud environment and offers a “secure platform for FDA employees to access internal documents while ensuring all information remains within the agency.”
Previously, each center within the FDA was working on its own AI pilot. However, after cost-cutting in May, the AI pilot originally developed by the FDA’s Center for Drug Evaluation and Research, called CDER-GPT, was selected to be scaled up to an FDA-wide version and rebranded as Elsa.
FDA staffers in the Center for Devices and Radiological Health told NBC News that their AI pilot, CDRH-GPT, is buggy, isn’t connected to the internet or the FDA’s internal systems, and has problems uploading documents and accepting user-submitted questions.