One of the greatest risks to patients from the use of AI in medical devices, and in healthcare generally, is bias in the data used to train the models. Because we train AI models on our histories and historical data, we must confront the inequity, racism, disparities, and disenfranchisement embedded in those data. Industry must take up this urgent issue and reach agreement on how bias is defined and how it should be addressed. Regulators must hold industry accountable for ensuring equity in the data used to train AI models. We cannot afford to create greater public health disparities as we rush toward a future of AI in healthcare.