The FDA is approving an average of 69 medical AI devices per year, but about half of those approvals lack clinical validation data, according to an analysis published in Nature Medicine. Uses of artificial intelligence in medical settings are growing rapidly; before 2016, the FDA approved an average of only two AI devices per year. The study authors wrote, “Devices that lack adequate clinical validation pose risks for patient care. A new validation standard is proposed to evaluate FDA authorization as an indication of clinical effectiveness in medical AI.”

Researchers Sammy Chouffani El Fassi and Gail E. Henderson led an analysis of more than 500 AI medical devices and found that about half lacked the clinical data needed to prove the devices are effective for human use. The authors stated, “Although AI device manufacturers boast of the credibility of their technology with FDA authorization, clearance does not mean that the devices have been properly evaluated for clinical effectiveness using real patient data. With these findings, we hope to encourage the FDA and industry to boost the credibility of device authorization by conducting clinical validation studies on these technologies and making the results of such studies publicly available.”

Of the devices analyzed, 144 were “retrospectively validated,” meaning the AI was tested on previously collected image data. A stronger scientific approach is prospective validation, in which the AI is tested on real-time patient data; 148 of the devices were approved with prospective validation. Only 22 of the devices were approved on the basis of randomized controlled trials, the gold standard.

The study’s authors said there is a need for the FDA to clearly define and distinguish between the three types of validation. Chouffani El Fassi added, “We shared our findings with directors at the FDA who oversee medical device regulation, and we expect our work will inform their regulatory decision-making. We also hope that our publication will inspire researchers and universities globally to conduct clinical validation studies on medical AI to improve the safety and effectiveness of these technologies. We’re looking forward to the positive impact this project will have on patient care at a large scale.”

The AI medical device market is projected to grow from $15 billion in 2023 to $22 billion in 2024, and estimates suggest it will reach $97 billion by 2028. An executive order from October 2023 lays out concerns and hopes for the development of AI technology across a wide variety of industries.

In total, 226 of the 521 FDA authorizations lacked validation data from real patients. Some of them relied on artificially created images rather than data from real patients.

That executive order states, “Artificial Intelligence must be safe and secure.  Meeting this goal requires robust, reliable, repeatable, and standardized evaluations of AI systems, as well as policies, institutions, and, as appropriate, other mechanisms to test, understand, and mitigate risks from these systems before they are put to use.”

In response to that executive order, the FDA Center for Drug Evaluation and Research (CDER) is forming a new council that will oversee all AI-related technologies within the FDA. This will replace the current AI-related steering committee and additional AI-related working groups.

The council is co-led by three individuals within the FDA. Tala Fakhouri, the associate director for data science and AI policy, previously worked for the CDC and ICF International, a consulting firm. Qi Liu, the associate director for innovation and partnership in the Office of Clinical Pharmacology, formerly worked for Merck. Sri Mantha, the director of the Office of Strategic Programs, has previously worked for multiple pharmaceutical companies, including Pfizer and AstraZeneca.

A 2023 research paper outlines several drawbacks of using AI in the medical field, including ethical, societal, and privacy concerns. AI devices in healthcare rely on algorithmic models that use significant amounts of private patient data. Researchers have warned that this poses a privacy risk because the data can potentially be hacked.

Last year, the World Health Organization (WHO) warned that the rapid adoption of AI technology for medical purposes can cause harm to patients. The WHO said, “Precipitous adoption of untested systems could lead to errors by healthcare workers, cause harm to patients, erode trust in AI and thereby undermine (or delay) the potential long-term benefits and uses of such technologies around the world.”

FDA Commissioner Robert Califf warned last year about the healthcare industry being “swept up quickly by something that we hardly understand.” In March, Dr. Califf spoke at the Coalition for Health AI and raised similar concerns: “My concern is that our health systems do not have the infrastructure and tools to make the most important determinations about whether an AI application is ‘effective’ for health outcomes. The ability of algorithms to provide accurate assessments will drift if left untended, often in unpredictable, and sometimes dangerous ways.”

Califf added that it is the responsibility of the manufacturer to ensure products are safe with oversight from the FDA. He said it is unclear how this system can work effectively for AI models that change and adapt over time.

“Not surprisingly, I’m hearing that the ‘effectiveness’ metric being used by health systems to make decisions about incorporating an AI implementation is a financial metric,” Califf said. “Will the algorithm improve the bottom line of the part of the health system making the purchase? I worry that the main use of AI algorithms will be decisions that optimize the bottom line rather than optimizing the longevity and well-being of patients. This is counter to the mission of the FDA, where effectiveness means an improvement in a health outcome.”

The HighWire has been reporting on ongoing concerns about AI, including a conversation between Jefferey Jaxen and host Del Bigtree about whether AI is an existential threat. In an earlier episode, the Jaxen Report included a warning from the late Professor Stephen Hawking that AI could destroy mankind. In June, Jaxen discussed AI technology that utilizes human brain tissue.

Steven Middendorp

Steven Middendorp is an investigative journalist, musician, and teacher. He has been a freelance writer and journalist for over 20 years. More recently, he has focused on issues dealing with corruption and negligence in the judicial system. He is a homesteading hobby farmer who encourages people to grow their own food, eat locally, and care for the land that provides sustenance to the community.
