Healthcare Algorithms and Racial Bias

An algorithm designed to predict health care costs as a proxy for health needs critically underestimates the needs of Black patients, with life-threatening consequences.

Reviewed by Becky Mer

Introduction

This article addresses the growing public concern regarding the automation of racial discrimination through digital tools and technology. Throughout the paper, the author, Dr. Ruha Benjamin, focuses her discussion on a notable publication by Obermeyer et al. entitled “Dissecting racial bias in an algorithm used to manage the health of populations.”

Unlike most researchers, who lack access to proprietary algorithms, Obermeyer et al. completed one of the first studies to examine the training data, algorithm, contextual data, and outputs of one of the largest commercial tools used by the health insurance industry. The tool helps insurers identify patients who need increased attention before their conditions become more severe and costly. Because the tool was designed to use predicted cost as a proxy for patients’ needs, and because providers allocate significantly fewer resources to Black patients’ care, Obermeyer et al. found that Black patients with the same risk scores as white patients tend to be much sicker. By measuring this racial disparity and building new predictors, the researchers concluded that, as long as the tool effectively predicts costs, its results will be racially biased, even without any explicit use of race as an input.
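To make the mechanism concrete, the minimal Python sketch below illustrates why a cost-predicting score reproduces a spending disparity as a risk-score disparity. It is a toy illustration only, not the commercial tool or the study’s actual model, and the 0.7 spending factor and all other numbers are hypothetical.

    # Toy illustration: if providers spend less per unit of illness on Black
    # patients, a model that predicts cost accurately will assign them lower
    # risk scores at the same level of sickness. All numbers are hypothetical.

    def predicted_cost(illness, spending_factor):
        """Stand-in for a cost-based risk score: expected spending scales
        with illness burden times a group-level spending factor."""
        return illness * spending_factor

    # Hypothetical spending per unit of illness, reflecting the kind of
    # disparity Obermeyer et al. measured (not their actual figures).
    SPENDING = {"white": 1.0, "Black": 0.7}

    # Two patients with identical underlying illness burden.
    illness = 10.0
    scores = {group: predicted_cost(illness, factor)
              for group, factor in SPENDING.items()}
    print(scores)  # {'white': 10.0, 'Black': 7.0}: same sickness, lower score

    # Equivalently, at the SAME risk score, the Black patient must be sicker
    # to reach it, so the tool under-identifies Black patients who need care.
    target_score = 10.0
    illness_at_score = {group: target_score / factor
                        for group, factor in SPENDING.items()}
    print(illness_at_score)  # {'white': 10.0, 'Black': ~14.3}

The point of the sketch is that the bias requires no racial input at all: it follows directly from choosing cost as the label when spending itself is unequal.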

Dr. Benjamin discusses the broader implications of the study through a range of historical, hypothetical, and modern-day cases. She underscores how algorithmic and other labels, such as health care costs, may initially appear race-neutral but ultimately play critical and harmful roles in the lives of millions of Black patients in the United States. Dr. Benjamin’s analysis is situated within her larger body of research on race and the social dimensions of science, technology, and medicine. At Princeton University, she is an Associate Professor of African American Studies and founder of the Ida B. Wells Just Data Lab. She serves on the Executive Committee of the Program in Global Health and Health Policy and the Center for Digital Humanities, and is a Faculty Associate of the Center for Information Technology Policy, the Program in History of Science, the Center for Health and Wellbeing, the Program in Gender and Sexuality Studies, and the Department of Sociology.

Methods and Findings

Employing examples from health care, housing, and social media, Dr. Benjamin demonstrates how historical systems of racial discrimination are inextricably linked to modern-day, seemingly colorblind automated systems. She presents, among others, the following paired cases:

  1. Imagine if Henrietta Lacks, an African American mother of five, had been “digitally triaged” at Johns Hopkins Hospital in 1951 after arriving with severe abdominal symptoms. The hospital’s cutting-edge automated tool would have assessed her risk based on the predicted cost of her care (far less than what is typically spent on white patients, despite Black patients’ actual health needs), leading providers to underestimate her level of need and discharge her, with ultimately fatal consequences.
  2. Consider, in reality, Ms. Lacks’ admission to, and experience in, the Negro wing of Johns Hopkins Hospital during a period in American history when overt racial discrimination was legally sanctioned.

Though they result in much the same catastrophic health outcomes, these cases highlight how the legacy of Jim Crow policies continues to feed its modern automated equivalent, termed in this paper the “New Jim Code.” Historical, racially biased human decisions shape both algorithmic design and algorithmic inputs, such as data from segregated health care facilities, unequal insurance systems, and racist medical training. Yet the power of these automated tools can reach far beyond the scale of individual behavior, as they are capable of perpetuating unjust discrimination at a far greater scale. Given this context, top-down reform efforts alone, whether shifts in federal law or institutional policy, will not diminish discrimination.

Conclusions

Dr. Benjamin concludes that labels matter significantly, both in the design and in the analysis of algorithms. Rather than employing tropes that Black patients “cost less” or that Black patients’ poor care results from their “non-compliance” or “lack of trust,” researchers, hospital staff, and analysts must adopt a more socially conscious analysis. The issue, put simply, is that Black patients are valued less, that structural and interpersonal racism persist in the American health care system, and that the medical industry, not Black patients, is accountable for the lack of trustworthiness. Although Obermeyer et al. describe some of this context, their descriptions are insufficient to reveal the very social processes that make their work so important.

Concern over algorithmic bias, although critical, must not outweigh attention to the context of racial discrimination. Indeed, this context is what made the promise of neutral technology so alluring in the first place. Automated tools like the one studied by Obermeyer et al. might work similarly for all patients if companies, institutions, and individuals provided the same level of care to Black patients, such that their care would not “cost less” than the care provided to non-Black patients. Overall, beyond the automated tools considered in this particular study, Dr. Benjamin recommends moving away from individual risk assessment tools and instead adopting tools that evaluate the risks produced by institutions and organizations. Through the development of such tools, the public can uncover agents and patterns of discrimination and ultimately hold institutions accountable for providing high-quality care to all patients.
