How Artificial Intelligence Will First Find Its Way Into Mental Health


Artificial intelligence (AI) startup Woebot Health made the news recently for disastrously flawed bot responses to text messages sent to it that mimicked a mental health crisis. Woebot, which raised $90 million in a Series B round, responded that it is not intended for use during crises. Company leadership woefully […]

The post How Artificial Intelligence Will First Find Its Way Into Mental Health appeared first on DATAVERSITY.


Author: Bruce Bassi

How Reducing Bias in AI Models Boosts Success


Artificial intelligence (AI) has the potential to revolutionize industries and improve decision-making processes, but it is not without challenges. One challenge is addressing bias in AI models to ensure fair, equitable, and satisfactory outcomes. AI bias can arise from various sources, including training data, algorithm design, and human influence during […]

The post How Reducing Bias in AI Models Boosts Success appeared first on DATAVERSITY.


Author: Mohan Krishna Mangamuri

The Crisis AI Has Created in Healthcare Data Management 

Through the lens of time, the study of medicine dwarfs the age of modern technology by centuries. Historically, most medical treatments have required decades of research and extensive studies before being approved and implemented into practice. Traditionally, physicians alone have been charged with making treatment decisions for patients. The healthcare industry has since pivoted to evidence-based care planning, where patient treatment decisions are derived from the information available through systematic reviews.

Should we be trusting Data Science tools like Artificial Intelligence (AI) and Machine Learning (ML) to make decisions related to our health?  

In the first installment of this series, Algorithmic Bias: The Dark Side of Artificial Intelligence, we explored the detrimental effects of algorithmic bias and the consequences for companies that fail to practice responsible AI. Applications for Big Data processing in the healthcare and insurance industries have been found to amplify bias dramatically, creating significant disparities for oppressed and marginalized groups. Researchers are playing catch-up to find solutions that alleviate these disparities.

A study published in Science found that a healthcare risk-prediction algorithm, used on over 200 million people in the U.S., was biased because it depended on a faulty metric to determine need. The algorithm was deployed to help hospitals determine risk levels for prioritizing patient care and treatment plans. The study reported that African American patients tended to receive lower risk scores even though they were also more likely to pay for emergency visits for diabetes or hypertension complications.
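The faulty metric in that study was healthcare spending used as a stand-in for health need: patients who generate lower costs look "healthier" to the model even when they are equally sick. The Python sketch below is purely illustrative and is not the study's algorithm; it simulates a hypothetical population with an assumed access gap to show how a spending proxy under-scores one group.

# Illustrative sketch only -- not the algorithm from the Science study.
# Assumption: two groups with identical illness burden, but group B
# generates lower healthcare spending (e.g., due to unequal access to care).
# A model that scores "risk" from spending then under-scores group B.
import random
import statistics

random.seed(42)

def simulate_patient(group):
    """Return a toy patient record; all numbers here are made up."""
    illness = random.gauss(5.0, 1.5)        # true health need, same for both groups
    access = 1.0 if group == "A" else 0.6   # assumed access gap for group B
    spending = max(0.0, illness * access + random.gauss(0, 0.5))
    return {"group": group, "illness": illness, "spending": spending}

population = [simulate_patient(g) for g in ("A", "B") for _ in range(5000)]

def mean_by_group(records, key):
    return {g: statistics.mean(r[key] for r in records if r["group"] == g)
            for g in ("A", "B")}

print("Mean illness burden:", mean_by_group(population, "illness"))
print("Mean spending proxy:", mean_by_group(population, "spending"))
# Both groups are equally sick, but group B's spending-based score is lower,
# so a cost-trained model would deprioritize group B for extra care.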

Another study, conducted by Emory University’s Healthcare Innovations and Translational Informatics Lab, revealed that a deep learning model used in radiologic imaging, created to speed up the detection of bone fractures and lung issues such as pneumonia, could predict the race of patients with surprising accuracy.

 “In radiology, when we are looking at x-rays and MRIs to determine the presence or absence of disease or injury, a patient’s race is not relevant to that task. We call that being race agnostic: we don’t know, and don’t need to know someone’s race to detect a cancerous tumor in a CT or a bone fracture in an x-ray,” stated Judy W. Gichoya, MD, assistant professor and director of Emory’s Lab. 

Bias in healthcare data management doesn’t stop at race. These examples only scratch the surface of the potential for AI to go very wrong in healthcare data analysis. The accuracy and relevance of datasets, how they are analyzed, and the full range of possible outcomes all need to be studied before the public is subjected to algorithm-based decision-making in healthcare planning and treatment.

Health Data Poverty 

More concerted effort and thorough research need to be on the agendas of health organizations working with AI. A 2021 study in The Lancet Digital Health defined health data poverty as “the inability for individuals, groups, or populations to benefit from a discovery or innovation due to a scarcity of data that are adequately representative.”

“Health data poverty is a threat to global health that could prevent the benefits of data-driven digital health technologies from being more widely realized and might even lead to them causing harm. The time to act is now to avoid creating a digital health divide that exacerbates existing healthcare inequalities and to ensure that no one is left behind in the digital era.”  

A study in the Journal of Medical Internet Research identified catalysts of the growing data disparities in health care:

  • Data Absenteeism: a lack of representation from underprivileged groups. 
  • Data Chauvinism: faith in the size of data without consideration of quality and context.

Responsible AI in Healthcare Data Management 

Being a responsible data steward in health care requires a higher level of attention to dataset quality to prevent discrimination and bias. The burden of change rests on health organizations to “go beyond the current fad” and coordinate and facilitate extensive, effective strategic efforts that realistically address data-based health disparities.

Health organizations seeking to advocate for the responsible use of AI need a multi-disciplinary approach that includes:

  • Prioritizing efforts to address data poverty.
  • Communicating with citizens transparently. 
  • Acknowledging and working to account for the digital divide that affects disadvantaged groups.
  • Implementing best practices for gathering data that informs health care treatment. 
  • Working with representative datasets that support equitable provision of treatment using digital health care.
  • Developing internal teams to review and audit data analytics and processing (a minimal example of such an audit check follows this list).
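
As a concrete, deliberately simplified illustration of the last point, an internal audit team might start with a group-wise check of a model's decisions. The field names ("group", "flagged_for_care") and the 0.8 rule of thumb below are assumptions for this sketch, not a healthcare-specific standard.

# A minimal sketch of the kind of check an internal audit team might run.
# All field names and the 0.8 threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(records, group_key="group", decision_key="flagged_for_care"):
    """Share of each group that the model selected for extra care."""
    totals, selected = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r[group_key]] += 1
        selected[r[group_key]] += int(r[decision_key])
    return {g: selected[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Lowest group selection rate divided by the highest; values well below
    1.0 flag a disparity worth investigating (0.8 is a common rule of thumb,
    not a healthcare-specific requirement)."""
    return min(rates.values()) / max(rates.values())

if __name__ == "__main__":
    audit_sample = [
        {"group": "A", "flagged_for_care": True},
        {"group": "A", "flagged_for_care": True},
        {"group": "A", "flagged_for_care": False},
        {"group": "B", "flagged_for_care": True},
        {"group": "B", "flagged_for_care": False},
        {"group": "B", "flagged_for_care": False},
    ]
    rates = selection_rates(audit_sample)
    print("Selection rates:", rates)
    print("Disparate impact ratio:", round(disparate_impact_ratio(rates), 2))

A low ratio does not prove discrimination on its own, but it tells the audit team where to look more closely at the underlying data and model.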

Fighting bias takes a team effort as well as a well-researched portfolio of technical tools. Instead of seeking to replace humans with computers, it is better to foster an environment where they can share responsibility. Use these resources to learn more about responsible AI in health care management.

The post The Crisis AI Has Created in Healthcare Data Management appeared first on Actian.


Author: Saquondria Burris