Editor’s Note (12/21/21): This article is being showcased in a special collection about equity in health care that was made possible by the support of Takeda Pharmaceuticals. The article was published independently and without sponsorship.

We don't think of everyday devices as biased against race or gender, but they can be. Electrical engineer and computer scientist Achuta Kadambi is familiar with the problem both professionally and personally. “Being fairly dark-skinned myself,” Kadambi says, he sometimes cannot activate no-touch soap dispensers and faucets that detect light bouncing off skin. At one airport, he recalls, “I had to ask a lighter-skinned traveler to trigger a faucet for me.”

Medical devices, too, can be biased—an issue that has gained attention during the COVID pandemic, along with many other inequities that affect health. In a recent article in Science, Kadambi, an assistant professor at the University of California, Los Angeles, Samueli School of Engineering, describes three ways that racial and gender bias can permeate medical devices and suggests a number of solutions. Fairness, he argues, should be a criterion for evaluating new technology, along with effectiveness.

The first problem, Kadambi says, is physical bias, which is inherent in the mechanics of the device. Then there is computational bias, which lies in the software or in the data sets used to develop the gadget. Finally, there is interpretation bias, which resides not in the machine but in its user. It occurs when clinicians apply unequal, race-based standards to the readouts from medical devices and tests—an alarmingly common practice. “Bias is multidimensional,” Kadambi says. “By understanding where it originates, we can better correct it.”

Physical bias made news last December when a study at the University of Michigan found that pulse oximeters—which use light transmitted through skin and tissue to measure the oxygen in a person's blood—are three times more likely to miss low oxygen levels in Black patients than in white ones. Other instruments can have trouble with skin color, too. Remote plethysmography, a new technology that measures heart rates by analyzing live or recorded video, works less well for people of color when programmed to pick up blushlike changes in the skin. But, Kadambi says, “there are multiple ways to extract signals, with varying degrees of bias.” A team at the Massachusetts Institute of Technology, for example, created a remote plethysmograph that reads tiny changes in head motion that occur when the heart beats. Kadambi's laboratory is trying other solutions, including analyzing video captured at thermal wavelengths rather than in visible light.
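
To make the mechanics concrete, here is a minimal sketch, in Python, of how a color-based remote plethysmograph typically works: average the pixels of a skin patch in each video frame, filter the resulting trace to the range of plausible heart rates and read off the dominant frequency. The frame rate, pulse strength and noise level below are assumptions chosen for illustration, not Kadambi's or the MIT team's actual pipeline; the key point is that when less light returns from darker skin, the pulse component of the trace shrinks relative to the noise.

```python
# Illustrative sketch of color-based remote plethysmography (rPPG).
# The frame rate, pulse frequency, amplitude and noise level are all
# assumptions chosen for the demo, not parameters from a real device.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fps = 30.0                            # assumed camera frame rate
t = np.arange(0, 20, 1 / fps)         # 20 seconds of "video"

# Stand-in for the mean green-channel value of a skin patch in each frame:
# a faint pulse at 72 beats per minute (1.2 Hz) buried in sensor noise.
# With darker skin, less light is reflected back to the camera, so the
# pulse component is weaker relative to the noise, modeled here simply
# as a smaller amplitude.
pulse_hz = 1.2
amplitude = 0.02
trace = amplitude * np.sin(2 * np.pi * pulse_hz * t) + rng.normal(0, 0.05, t.size)

# Band-pass filter to the range of plausible human heart rates (42-240 bpm).
low, high = 0.7, 4.0
b, a = butter(3, [low / (fps / 2), high / (fps / 2)], btype="band")
filtered = filtfilt(b, a, trace)

# The dominant in-band frequency of the filtered trace is the heart-rate estimate.
spectrum = np.abs(np.fft.rfft(filtered))
freqs = np.fft.rfftfreq(filtered.size, d=1 / fps)
in_band = (freqs >= low) & (freqs <= high)
estimated_bpm = 60 * freqs[in_band][np.argmax(spectrum[in_band])]
print(f"Estimated heart rate: {estimated_bpm:.0f} bpm")
```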

Computational biases can creep into medical technology when it is tested primarily on a homogeneous group of subjects—typically white males. For instance, an artificial-intelligence system used to analyze chest x-rays and identify 14 different lung and chest diseases worked less well for women when trained on largely male scans, according to a 2020 analysis by a team of scientists in Argentina. But training the system on a gender-balanced sample produced the best overall results, with no significant loss of accuracy for men. One reason, Kadambi suspects, may lie in a concept called domain randomization: adding more variety to the training data tends to improve a model's performance across the board.
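
The effect is easy to reproduce in miniature. The sketch below is a toy illustration, not the Argentine team's method: it trains the same simple classifier twice on synthetic “patients,” once with a heavily skewed group mix and once with a balanced one, then reports test accuracy for each group separately. The features, group labels and logistic-regression model are all assumptions made for the demonstration.

```python
# Illustrative sketch: how the make-up of the training cohort shifts
# per-group performance. All data here are synthetic stand-ins; the real
# study involved deep networks trained on chest-x-ray images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_cohort(n, group):
    """Simulated patients from one demographic group (assumed toy model).

    Both groups have the same disease prevalence, but part of the disease
    signal for group 1 shows up in a different feature, a stand-in for
    physiology the model only learns if group 1 is well represented.
    """
    y = rng.integers(0, 2, n)
    X = rng.normal(0.0, 1.0, (n, 5))
    if group == 0:
        X[:, 0] += 2.0 * y
    else:
        X[:, 0] += 1.0 * y
        X[:, 1] += 1.5 * y
    return X, y

def train_and_test(n_group0, n_group1, label):
    """Train on the given group mix, then test on each group separately."""
    X0, y0 = make_cohort(n_group0, 0)
    X1, y1 = make_cohort(n_group1, 1)
    model = LogisticRegression(max_iter=1000)
    model.fit(np.vstack([X0, X1]), np.concatenate([y0, y1]))
    for group in (0, 1):
        X_test, y_test = make_cohort(5000, group)
        acc = accuracy_score(y_test, model.predict(X_test))
        print(f"{label} -> accuracy for group {group}: {acc:.2f}")

train_and_test(1900, 100, "skewed training set  ")   # group 1 underrepresented
train_and_test(1000, 1000, "balanced training set")  # equal representation
```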

Stopping computational bias means making a much greater effort to recruit people from different populations to participate in the design and testing of medical devices. It would help if research teams were themselves more diverse, observes Rachel Hardeman, a public health scientist at the University of Minnesota, who studies reproductive health and racial equity. “When you have a history of distrust [of medical experiments], plus you don't see anyone who looks like you that's actually doing the work, it's one more signal that it's not for you,” she says.

In addition to building diversity among researchers, Hardeman favors mandatory training of medical personnel in the fundamental ways in which racism impacts health, a step that might also help counter practices that lead to interpretation bias. California has moved in this direction, she notes, with a 2020 law requiring health-care providers treating pregnant women and their newborns to complete a curriculum (one Hardeman is designing) aimed at closing racial gaps in maternal and infant mortality.

For engineers to get the overall message, Kadambi proposes another mandate: include a “fairness statement” in published work on any new medical device that indicates how well it performs across different populations. Journals and engineering conferences could require that information just as they require conflict-of-interest statements. “If we add a metric that incentivizes fairness, who knows what new ideas will evolve?” Kadambi suggests. “We may invent radically different ways of solving engineering problems.”
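
What a fairness statement would contain is still an open design question, but at minimum it could report the same headline metric broken out by demographic group, along with the largest gap between groups. The sketch below shows one way to compute those numbers; the function name, group labels and data are hypothetical placeholders, not a standard proposed by Kadambi or required by any journal.

```python
# Minimal sketch of the numbers a "fairness statement" might report:
# the same performance metric computed per demographic group, plus the
# largest gap between groups. Group names and data are placeholders.
import numpy as np

def fairness_report(y_true, y_pred, groups):
    """Accuracy for each demographic group and the largest between-group gap."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    per_group = {
        str(g): float(np.mean(y_pred[groups == g] == y_true[groups == g]))
        for g in np.unique(groups)
    }
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Hypothetical evaluation results for a device tested on two groups, "A" and "B".
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 0, 1]
groups = ["A"] * 6 + ["B"] * 6

per_group, gap = fairness_report(y_true, y_pred, groups)
print("Accuracy by group:", per_group)          # group A ≈ 0.83, group B ≈ 0.67
print(f"Largest between-group gap: {gap:.2f}")  # ≈ 0.17
```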