A recent article published in NPJ Digital Medicine found that large language models (LLMs) used in healthcare propagate racist medical advice that has largely been debunked. This is not a surprise to me, but it is an alarming finding that reminds us that the data we use to train our AI-driven devices and healthcare decision programs are horribly biased. Industry and regulators must ensure that the data used to train these devices are diverse, equitable, and representative to prevent potential patient harm.
Dozens of papers, workshops, and white papers have been published in the last few years addressing the myriad challenges and opportunities associated with the use of real-world evidence (RWE) as evidence of medical product safety and effectiveness. This quarter, I will be teaching the first course addressing real-world evidence in biomedical research for Northeastern University's online graduate regulatory affairs program. The wealth of information available on all facets of real-world evidence use is nearly overwhelming. Yet it remains unclear whether we can harness the volumes of available data in an ethical and effective manner.
Planet formation provides an unlikely metaphor for rigorously assessing the patient experience. When individual patient stories are objectively assembled using scientific methods, they gain weight and the ability to influence decision-makers.