A recent article published in NPJ Digital Medicine found that large language models (LLMs) used in healthcare propagate racist medical advice that has largely been debunked. This is not a surprise to me, but it is an alarming finding that reminds us that the data we use to train our AI-driven devices and healthcare decision programs are horribly biased. Industry and regulators must ensure that the data being used to train these devices are diverse, equitable, and representative to prevent potential patient harm.
One of the greatest risks to patients from the use of AI in medical devices, and in healthcare in general, is the presence of bias in the models used to train them. Because we use our histories and historical data to train our AI models, we must address the inequity, racism, disparities, and disenfranchisement present in those data. Industry must take up this urgent issue and reach agreement on how bias is defined and how it should be addressed. Regulators must hold industry accountable for ensuring equity in the models being used to train AI. We cannot afford to create greater public health disparities as we rush toward a future of AI in healthcare.
A recent Northeastern University panel discussion about the history and future of governance in artificial intelligence got me thinking about harmonization of medical device regulatory policy and wondering why it is so hard to achieve. As medical device technology advances at breakneck speed to bring AI-driven medical devices to patients and practitioners, we are seeing a rapid increase in policy implementation and legislation intended to govern the use of AI in medical devices - but without much harmonization. A similar lack of harmonization in device classification, quality systems, and RWD/RWE governance has created barriers and costs for industry. With IMDRF being only a voluntary collaborative organization, what can be done to facilitate global harmonization of medical device regulation?
At the October NORD conference, Jeff Shuren suggested that AI-driven medical devices could be designed and evaluated using an FDA AI Assurance Lab. He suggested that this lab could house quality data on which AI algorithms could be trained. But is this idea too good to be true? Where will the data come from? What therapeutic areas will it cover? And will industry play nice in FDA’s AI Assurance Lab sandbox? Given the need for rational, consistent, and transparent AI regulatory policy, this could be a great idea. But the practical application of such a plan remains unclear.
Regulation of artificial intelligence/machine learning (AI/ML) medical devices is heating up worldwide. While FDA has issued only one guidance document and an action plan, the EU is preparing to enact significant legislation that will govern the regulation of AI, including AI medical devices, in the EU. The EU AI Act is scheduled to go into effect in 2024. Medical device sponsors with AI/ML devices must be ready to meet the requirements of the AI Act along with the EU MDR.