One of the greatest risks to patients from the use of AI in medical devices, and in healthcare generally, is bias in the data used to train the underlying models. Because we train AI models on our histories and historical data, we must address the inequity, racism, disparities, and disenfranchisement embedded in those data. Industry must take up this urgent issue and reach agreement on how bias is defined and how to address it. Regulators, in turn, must hold industry accountable for ensuring equity in the data and models behind AI systems. We cannot afford to deepen public health disparities as we rush toward a future of AI in healthcare.
While attending a session about crowd-sourcing cures for rare diseases, I was struck by the need for credible scientific information accessible to patients. The conversation also raised questions for me about how we can get patients to trust science again, and how we can generate credible information from anecdotal patient experience. There has to be a better way for patients to get the information they need to thrive.