The study of medicine predates modern technology by centuries. Historically, most medical treatments have required decades of research and extensive studies before being approved and put into practice. Traditionally, physicians alone were charged with making treatment decisions for patients. The healthcare industry has since pivoted to evidence-based care planning, where treatment decisions draw on the best available evidence gathered through systematic reviews.
Should we trust Data Science tools like Artificial Intelligence (AI) and Machine Learning (ML) to make decisions related to our health?
In the first installment of this series, Algorithmic Bias: The Dark Side of Artificial Intelligence, we explored the detrimental effects of algorithmic bias and the consequences for companies that fail to practice responsible AI. Big Data applications in the healthcare and insurance industries have been found to dramatically amplify bias, creating significant disparities for oppressed and marginalized groups. Researchers are playing catch-up to find solutions that alleviate these disparities.
A study published in Science found that a healthcare risk-prediction algorithm, used on more than 200 million people in the U.S., was biased because it relied on a faulty proxy for medical need: past healthcare spending. The algorithm was deployed to help hospitals determine risk levels for prioritizing patient care and necessary treatment plans. The study reported that African American patients tended to receive lower risk scores even at the same level of illness, in part because their spending was concentrated in emergency visits for complications of conditions such as diabetes or hypertension.
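The mechanism behind that failure can be sketched with a toy example. The Science study's algorithm used prior healthcare costs as a stand-in for medical need, so a patient who faces barriers to accessing care spends less and scores as lower risk, even when equally sick. The patients and numbers below are entirely hypothetical; this is a minimal illustration of the proxy problem, not the actual algorithm:

```python
# Hypothetical illustration (synthetic data): when a risk algorithm ranks
# patients by past healthcare *cost* as a proxy for medical *need*,
# groups that face barriers to care spend less at the same level of
# illness and therefore receive lower risk scores.

def risk_score_by_cost(past_cost):
    """Toy risk score: rank patients purely by prior spending."""
    return past_cost

# Two hypothetical patients with identical underlying need (e.g. both
# have poorly controlled diabetes), but unequal access to care:
patient_a = {"need": 8, "past_cost": 9000}  # regular specialist visits
patient_b = {"need": 8, "past_cost": 4000}  # care deferred until ER visits

score_a = risk_score_by_cost(patient_a["past_cost"])
score_b = risk_score_by_cost(patient_b["past_cost"])

# Equal need, yet the cost proxy ranks patient B as lower risk, so B is
# less likely to be flagged for extra care management.
print(score_a > score_b)  # True
```

The lesson generalizes: a label that is cheap to measure (spending) can silently diverge from the outcome that actually matters (health), and the model faithfully learns the divergence.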
Another study, conducted by Emory University's Healthcare Innovations and Translational Informatics Lab, revealed that a deep learning model used in radiologic imaging, created to speed up the detection of bone fractures and lung issues like pneumonia, could accurately predict the race of patients from the images alone.
“In radiology, when we are looking at x-rays and MRIs to determine the presence or absence of disease or injury, a patient’s race is not relevant to that task. We call that being race agnostic: we don’t know, and don’t need to know someone’s race to detect a cancerous tumor in a CT or a bone fracture in an x-ray,” stated Judy W. Gichoya, MD, assistant professor and director of Emory’s Lab.
Bias in healthcare data management doesn’t stop at race. These examples only scratch the surface of how badly AI can go wrong in healthcare data analysis. The accuracy and relevance of datasets, how they are analyzed, and the full range of possible outcomes all need to be studied before subjecting the public to algorithm-based decision-making in healthcare planning and treatment.
Health Data Poverty
More concerted effort and thorough research need to be on the agendas of health organizations working with AI. A 2021 study in The Lancet Digital Health defined health data poverty as "the inability for individuals, groups, or populations to benefit from a discovery or innovation due to a scarcity of data that are adequately representative."
“Health data poverty is a threat to global health that could prevent the benefits of data-driven digital health technologies from being more widely realized and might even lead to them causing harm. The time to act is now to avoid creating a digital health divide that exacerbates existing healthcare inequalities and to ensure that no one is left behind in the digital era.”
A study in the Journal of Medical Internet Research identified two catalysts of growing data disparities in health care:
- Data Absenteeism: a lack of representation from underprivileged groups.
- Data Chauvinism: faith in the size of data without consideration for quality and context.
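Data absenteeism is, in principle, auditable. A minimal sketch, using entirely hypothetical group names and numbers, is to compare each group's share of a training dataset against its share of the patient population the model will serve:

```python
# Minimal sketch (hypothetical numbers): audit for "data absenteeism"
# by comparing each group's share of a training dataset against its
# share of the population the model will serve.

population_share = {"group_a": 0.60, "group_b": 0.30, "group_c": 0.10}
dataset_share    = {"group_a": 0.75, "group_b": 0.20, "group_c": 0.05}

def underrepresented(population, dataset, tolerance=0.05):
    """Flag groups whose dataset share falls more than `tolerance`
    below their population share."""
    return [g for g in population
            if population[g] - dataset.get(g, 0.0) > tolerance]

flagged = underrepresented(population_share, dataset_share)
print(flagged)  # ['group_b']: underrepresented beyond the tolerance
```

A check like this only surfaces who is missing; it says nothing about data quality or context, which is precisely the gap data chauvinism exploits.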
Responsible AI in Healthcare Data Management
Being a responsible data steward in healthcare requires a higher level of attention to dataset quality to prevent discrimination and bias. The burden of change rests on health organizations to “go beyond the current fad” and coordinate extensive, effective strategic efforts that realistically address data-based health disparities.
Health organizations seeking to advocate for the responsible use of AI need a multi-disciplinary approach that includes:
- Making data poverty a priority.
- Communicating transparently with citizens.
- Acknowledging and working to account for the digital divide that affects marginalized groups.
- Implementing best practices for gathering the data that informs health care treatment.
- Working with representative datasets that support equitable provision of treatment through digital health care.
- Developing internal teams for data analytics and processing reviews and audits.
Fighting bias takes a team effort as well as a well-researched portfolio of technical tools. Instead of seeking to replace humans with computers, it is better to build an environment where they share responsibility for responsible AI in health care management.