“The Food and Drug Administration is responsible for protecting the public health by ensuring the safety, efficacy, and security of human and veterinary drugs, biological products, and medical devices; and by ensuring the safety of our nation’s food supply, cosmetics, and products that emit radiation.
[…]
FDA is responsible for advancing the public health by helping to speed innovations that make medical products more effective, safer, and more affordable and by helping the public get the accurate, science-based information they need to use medical products and foods to maintain and improve their health.”
FDA warning letters are published for the public to read. As of December 2022, the word “algorithm” and related terms appear shockingly rarely in them.
FDA warning letters are very serious. From the FDA website: “when FDA finds that a manufacturer has significantly violated FDA regulations, FDA notifies the manufacturer. This notification is often in the form of a Warning Letter. The Warning Letter identifies the violation, such as poor manufacturing practices, problems with claims for what a product can do, or incorrect directions for use.”
Failing to follow FDA regulations can result in orders to cease operations, fines, and criminal investigations.
Company: Global Medical Technology SL
Letter issue date: 2019 December 12
Issuing office: Center for Devices and Radiological Health
Subject: CGMP/QSR/Medical Devices/Adulterated
Key quotes:
The device in question is the “Platinum GMT IPL System Cloud,” an intense pulsed light (IPL) hair removal device.
No records were provided to demonstrate that the system software has been validated. For example, the software includes an algorithm that automatically selects the treatment energy level parameters based on the patients’ characteristics (e.g., (b)(4)). However, no records were provided to demonstrate that this software function has been validated to ensure that correct parameters are set when delivering treatment to the patients.
Highlights: What does an unvalidated algorithm mean? The answer is still being worked out for increasingly complex algorithms, but I am confident that a validated algorithm would follow the general expectations for validated GMP software regarding user limitations: clearly set user permissions, individual user accountability, and permanently recorded data outputs. For a machine learning or artificial intelligence algorithm that is not easy to recreate manually, the algorithm’s training data would also need to be documented and archived so that abnormal outputs could be thoroughly investigated. My interpretation of this letter is that there was no human check on the algorithm’s output before the treatment energy level parameters were applied to a patient, which means this warning letter focuses more on the user-permissions side of the software than on the algorithm’s training data; a minimal sketch of such a check follows.
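As a thought experiment, here is a minimal Python sketch of a human-in-the-loop gate with a permanent audit trail, in the spirit of the expectations described above. Every name, field, and value is a hypothetical illustration; none of it comes from the warning letter or any real IPL device.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
import json

@dataclass(frozen=True)
class TreatmentRecommendation:
    patient_id: str
    energy_level_j: float  # algorithm-selected energy parameter (hypothetical)
    model_version: str     # which algorithm version produced this output

def require_operator_approval(rec: TreatmentRecommendation,
                              operator_id: str,
                              approved: bool,
                              audit_log_path: str = "audit.log") -> bool:
    """Gate the algorithm's output behind an explicit human sign-off and
    write a permanent, attributable record of the decision."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "operator_id": operator_id,        # individual user accountability
        "patient_id": rec.patient_id,
        "recommended_energy_j": rec.energy_level_j,
        "model_version": rec.model_version,
        "approved": approved,
    }
    with open(audit_log_path, "a") as f:   # append-only: permanently recorded output
        f.write(json.dumps(entry) + "\n")
    return approved

# Usage: treatment proceeds only if a named operator approves the parameters.
rec = TreatmentRecommendation("patient-001", 12.5, "v2.3.1")
if require_operator_approval(rec, operator_id="op-42", approved=True):
    pass  # deliver treatment with the approved parameters
```

The design point is simple: the algorithm can recommend parameters, but nothing reaches the patient without a named operator’s sign-off, and every decision is appended to a permanent record.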
Company: Galt Pharmaceuticals LLC
Letter issue date: 2019 September 13
Issuing office: Office of Prescription Drug Promotion
Subject: False & Misleading Claims/Misbranded
Key quotes:
An advertisement for the sleep medication Doral was discussed extensively in the FDA warning letter. The letter quotes the following claims from the advertisement:
- “Concerned about Abuse potential of sleep medications?”
- “Doral’s relative likelihood of abuse is considerably lower than some of the widely used sleep aids (i.e. Zolpidem & Temazepam)”
- “Doral was ranked even lower than OTC product Diphenhydramine for relative abuse potential”
- A figure comparing the “Relative Likelihood of Abuse” of 19 drugs, with Quazepam shown as having a score lower than 16 of the drugs depicted
- “Doral’s abuse potential is 1/2 of Zolpidem & 1/3 of Temazepam”

The FDA’s analysis: These claims and presentation are misleading because they minimize the risks of abuse and dependence associated with Doral and suggest that this C-IV scheduled drug is superior in safety to other prescription and over-the-counter (OTC) products. The cited reference by Griffiths et al., provides an algorithm that purportedly differentiates the likelihood of abuse and relative toxicity among 19 compounds, including Doral (quazepam). However, as FDA pointed out in 2014, “the ‘algorithm’ lacks actual abuse data in human subjects and has not been validated.” While we acknowledge the figure includes the following statement, “Please see complete prescribing information for detailed information on each product. The above chart is not intended for efficacy comparison. The authors algorithm, while comprehensive, does lack prospective abuse data in human subjects and had not been validated in subsequent research,” this statement does not mitigate the overwhelming impression that Doral is superior in safety to other prescription and OTC products.
Highlights: This letter’s algorithm critique focuses on misbranding: the advertisement misrepresented the output of an algorithm that had not been validated. Notably, Galt Pharmaceuticals LLC did not create the algorithm; it relied on one that appears to have been published in a research paper rather than developed and controlled in-house. Additionally, the FDA pointed out that the algorithm cited in the advertising lacked actual abuse data from human subjects. I cannot overstate how crucial it is to document an algorithm’s input data, since that data is instrumental to understanding the algorithm’s output; a minimal sketch of one way to do so appears below.
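As an illustration, here is a short Python sketch that fingerprints a set of input data files into a manifest, so that any later abnormal output can be traced back to the exact data that produced it. The file names and manifest fields are my own assumptions, not an FDA requirement or anything from the letter.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def archive_input_data(data_files: list[str],
                       manifest_path: str = "input_data_manifest.json") -> dict:
    """Record an immutable fingerprint of every input file so abnormal
    algorithm outputs can later be traced to the exact input data."""
    manifest = {
        "created": datetime.now(timezone.utc).isoformat(),
        "files": [],
    }
    for name in data_files:
        digest = hashlib.sha256(Path(name).read_bytes()).hexdigest()
        manifest["files"].append({"path": name, "sha256": digest})
    Path(manifest_path).write_text(json.dumps(manifest, indent=2))
    return manifest

# Usage (hypothetical file names):
# archive_input_data(["abuse_study_subjects.csv", "toxicity_scores.csv"])
```

With a manifest like this on file, “where did the algorithm’s numbers come from?” becomes an answerable question rather than a shrug.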
Company: RadLogics, Inc.
Letter issue date: 2018 May 04
Issuing office: Center for Devices and Radiological Health
Subject: CGMP/QSR/Medical Devices/PMA/Adulterated/Misbranded
Key quotes:
Language on your firm’s website indicates that the AlphaPoint software provides CADe functionality, utilizing machine learning algorithms to automatically detect and mark abnormalities on medical images, including lung nodules, pneumothorax, and pleural effusion. Your firm’s website states that the AlphaPoint software is capable of functioning as a “Virtual Resident” because it automatically performs an initial review of radiologic images and generates a report listing and characterizing the abnormalities detected in the images, relieving the physician of the task of reviewing the images to identify abnormalities.
The statements cited above provide evidence that you have made a major change or modification to the intended use of your device (see 21 CFR 807.81(a)(3)(ii)) that requires a new premarket submission. […] Your firm has provided no evidence to FDA supporting the safety and efficacy of AlphaPoint’s CADe capabilities, and FDA is currently unaware of any literature that could support RADLogics’ claims regarding AlphaPoint’s CADe functionality. The lack of evidence demonstrating the safety and efficacy of AlphaPoint’s automatic detection and characterization capabilities raises public health concerns. Specifically, the risks for the device as advertised are low sensitivity and specificity (i.e., the device has unknown false positive and false negative rates). […] Your firm’s promotional materials increase these public health concerns further because they market AlphaPoint as a physician replacement that can make an initial review of radiology images and provide automatic detection and characterization of abnormalities, which leaves the physician to only review the software’s findings and provide a diagnosis. These new risks are serious, and are not effectively mitigated because the AlphaPoint software was not cleared for computer-assisted detection intended use.
Highlights: This letter stems from RadLogics making such a significant change to its product’s intended use that the device required a new round of FDA premarket review.
Additionally, the company never published the device’s Type I and Type II error rates, so its sensitivity and specificity are unknown. The FDA explained the risk of each error type for this product: for a false negative, lung nodules may go undetected or untreated, which may increase the risk of late diagnosis of lung cancer; for a false positive, the patient may be referred for an unnecessary biopsy and/or other surgical procedures. The sketch below shows how these rates relate to sensitivity and specificity.
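To make the terms concrete, here is a short Python sketch relating Type I and Type II error rates to sensitivity and specificity from a confusion matrix. The counts are invented purely for illustration; the FDA’s complaint is precisely that RadLogics never disclosed real ones.

```python
def error_rates(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Sensitivity/specificity and the Type I / Type II error rates
    that were never demonstrated for the device."""
    return {
        "sensitivity": tp / (tp + fn),    # share of true abnormalities caught
        "specificity": tn / (tn + fp),    # share of normal images correctly cleared
        "type_i_error": fp / (fp + tn),   # false positive rate -> unnecessary biopsies
        "type_ii_error": fn / (fn + tp),  # false negative rate -> missed lung nodules
    }

# Invented counts for illustration only:
print(error_rates(tp=90, fp=15, tn=85, fn=10))
# {'sensitivity': 0.9, 'specificity': 0.85, 'type_i_error': 0.15, 'type_ii_error': 0.1}
```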
I predict algorithm efficacy and content will generally go unchallenged unless a civil suit is brought against an algorithm’s creators. And with no clear Standard Operating Procedures or compliance guidelines for how to build a resilient algorithm, I don’t expect the status quo to change. Algorithms used in the development of food and drug products would be subject to FDA oversight. The interesting gray area is algorithms that are not related to food and drug products, yet may have an impact on the public’s health. For example, AI-driven mental health care is predicted to become a $37 billion industry by 2026, according to the Brookings Institution, and social media algorithms have measurable impacts on mental health.
The FDA has released an Artificial Intelligence/Machine Learning action plan, which errs on the side of being vague, probably to avoid hamstringing future innovations with rules that could not anticipate them. But the scarcity of warning letters regarding algorithms indicates that algorithms are not being investigated. Not every algorithm would fail an inspection, but you would expect more failures if algorithms were being thoroughly scrutinized in this new field. Having only one warning letter for an unvalidated device algorithm is a giant sign that the FDA does not have its eyes on this upcoming regulatory problem.
In the long run, algorithm documentability and explainability will only become more relevant. Oversight may eventually need to be its own branch outside the FDA, but for now the FDA is a good fit as a functioning and organized agency. This issue transcends political lines; the health and safety of the US public is a nonpartisan issue.