Improving Healthcare Equity Through Artificial Intelligence

The rapid growth of artificial intelligence (AI) and, in particular, machine learning (ML) has proven invaluable in many areas of patient care – from detecting hard-to-find malignant tumors in Barrett's esophagus, to cardiac monitoring through wearables, to predicting the severity of COVID-19 in critically ill ICU patients. With COVID-19 accelerating adoption, constrained health systems and biomedical organizations are exploring ways AI can improve operational, clinical, and research networks to provide better healthcare access, treatments, and outcomes. Here, we discuss the growth of healthcare AI and what leadership and executives need to consider to improve equity in the health system.

Within the healthcare industry, the adoption of AI and ML has skyrocketed in recent years, with a rapid increase in the number of use cases and applications. There is therefore a growing need for AI and ML supporters to direct funding toward all areas of patient and clinical care, including efforts to improve healthcare equity.

With inequity so pervasive, the unfortunate reality is that our present healthcare ecosystem and its infrastructure (health systems, medical research, delivery models, therapies, caregivers, access, outcomes, information systems) – and, most importantly, the data – inadvertently carry and propagate some form of this inherent bias and skew.

How removing bias in AI and ML systems will improve healthcare equity

By improving the completeness of data (making it more inclusive and diverse) and designing AI and ML algorithms that are free from bias, we can more effectively shape how care is delivered, how it is accessed, and how outcomes are measured.

In addition, AI and ML have the potential to drive personalized care decisions based on each individual's characteristics (social, clinical, economic), provided the training data sets capture these variations and the algorithms are built to account fairly for treatment decisions that depend on them. As we develop AI and ML models for the myriad of healthcare applications, we have an opportunity to avoid introducing additional bias and to address both the epistemic and normative concerns present in our systems and decision-making. Given that many of the representative and training data sets used by AI and ML models in healthcare already carry some inherent bias, their design and application – especially in unsupervised and deep learning models – risk a degree of apophenia (finding patterns that are not really there) and misapplication.
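To make the idea of auditing a model for group-level bias concrete, here is a minimal illustrative sketch (our own example, not a method described in this article) of one widely used fairness check, the demographic parity difference: the gap in positive-decision rates between two patient groups. The group labels and outcomes below are synthetic assumptions.

```python
def demographic_parity_difference(outcomes, groups):
    """Absolute difference in positive-outcome rates between groups A and B.

    outcomes: list of 0/1 model decisions (e.g. 1 = recommended for care)
    groups:   parallel list of group labels ("A" or "B")
    """
    rate = {}
    for g in ("A", "B"):
        # Collect the decisions made for members of group g.
        selected = [o for o, gr in zip(outcomes, groups) if gr == g]
        rate[g] = sum(selected) / len(selected)
    return abs(rate["A"] - rate["B"])

# Toy data: group A is recommended for care 3/4 of the time, group B only 1/4.
outcomes = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(outcomes, groups))  # 0.5
```

A value near zero suggests the model treats the two groups similarly on this one metric; a large gap, as here, is a signal to investigate the training data and model design. Demographic parity is only one of several fairness criteria, and the right choice depends on the clinical context.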
