Practical strategies to minimize bias in machine learning
Posted on: Nov 21, 2020

We’ve been seeing the headlines for years: “Researchers find flaws in the algorithms used…” appears for nearly every AI use case, including finance, health care, education, policing, and object identification. Most conclude that if the algorithm had only used the right data, been well vetted, or been trained to minimize drift over time, then the bias never would have happened. But the question isn’t if a machine learning model will systematically discriminate against people; it’s who, when, and how.

There are several practical strategies you can adopt to instrument, monitor, and mitigate bias through a disparate impact measure. For models already in production, you can start by instrumenting and baselining their impact live. For analyses or models used in one-time or periodic decision making, you’ll benefit from all of these strategies except live impact monitoring. And if you’re considering adding AI to your product, you’ll want to understand these initial and ongoing requirements so you start on, and stay on, the right path.
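As a concrete illustration of a disparate impact measure, one common formulation is the ratio of favorable-outcome rates between an unprivileged group and a privileged group. The Python sketch below is a minimal example of computing that ratio; the column names, the example data, and the 0.8 “four-fifths” review threshold are assumptions for illustration, not details from this article.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str,
                     unprivileged, privileged) -> float:
    """Ratio of favorable-outcome rates: unprivileged / privileged.

    Values near 1.0 suggest parity; a commonly cited (assumed here)
    rule of thumb flags ratios below 0.8 for review.
    """
    rate_unpriv = df.loc[df[group_col] == unprivileged, outcome_col].mean()
    rate_priv = df.loc[df[group_col] == privileged, outcome_col].mean()
    return rate_unpriv / rate_priv

# Hypothetical decision data: 1 = favorable outcome (e.g., loan approved)
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   0,   0,   1,   1,   1,   0,   1],
})

ratio = disparate_impact(decisions, "group", "approved",
                         unprivileged="A", privileged="B")
print(f"Disparate impact ratio: {ratio:.2f}")  # a value below 0.8 would warrant review
```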

Who

To measure bias, you first need to define who your models are impacting. It’s instructive to consider this from two angles: from the perspective of your business and from that of the people impacted by algorithms. Both angles are important to define and measure, because your model will impact both.

Internally, your business team defines the segments, products, and outcomes you’re hoping to achieve based on knowledge of the market, the cost of doing business, and profit drivers. The people impacted by your algorithms are sometimes the direct customers of your models but, more often than not, are affected by decisions made on behalf of the customers paying for the algorithm. For example, in a case where numerous U.S. hospitals were using an algorithm to allocate health care to patients, the customers were the hospitals that bought the software, but the people impacted by the model’s biased decisions were the patients.

So how do you start defining “who”? First, internally, be sure to label your data with your business segments so that you can measure differences in impact across them. For the people who are the subjects of your models, you’ll need to know what you’re allowed to collect, or at the very least what you’re allowed to monitor. In addition, keep in mind any regulatory requirements for data collection and storage in specific domains, such as health care, loan applications, and hiring decisions.
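One lightweight way to make that labeling useful is to keep the business segment and, where you are permitted to monitor it, a group attribute alongside each decision, then compare favorable-outcome rates across them. The sketch below is an assumed illustration using pandas; the column names, segments, and data are hypothetical.

```python
import pandas as pd

# Hypothetical decision log: each row is one model decision, labeled with
# the internal business segment and an attribute you are permitted to monitor.
decisions = pd.DataFrame({
    "business_segment":  ["premium", "premium", "standard", "standard", "standard"],
    "monitored_group":   ["group_1", "group_2", "group_1", "group_2", "group_2"],
    "favorable_outcome": [1, 0, 1, 0, 1],
})

# Favorable-outcome rate per (segment, group): large gaps within a segment
# are the differences in impact you want to surface and investigate.
impact_table = (
    decisions
    .groupby(["business_segment", "monitored_group"])["favorable_outcome"]
    .mean()
    .unstack()
)
print(impact_table)
```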

When

Defining when you measure is just as important as defining who you’re impacting. The world changes both quickly and slowly, and your training data may contain micro- and macro-level patterns that will shift over time. It isn’t enough to evaluate your data, features, or models only once, especially if you’re putting a model into production. Even data or “facts” we consider settled change over time. In addition, models outlive their creators and often get used outside of their originally intended context. Therefore, even if all you have is the output of a model (i.e., an API that you’re paying for), it’s important to record impact continuously, each time the model provides a result.
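One way to record impact continuously is to log every prediction with a timestamp, the monitored attribute, and the decision, so the impact measure can be recomputed over any time window later. The sketch below assumes a thin logging wrapper around an opaque, paid scoring API; the function names, file path, and threshold are illustrative assumptions, not details from the article.

```python
import csv
from datetime import datetime, timezone

LOG_PATH = "model_impact_log.csv"  # assumed location for the decision log

def log_decision(score: float, favorable: bool, monitored_group: str) -> None:
    """Append one model result to the log, stamped with the time of the call."""
    with open(LOG_PATH, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            monitored_group,
            score,
            int(favorable),
        ])

def score_and_log(applicant: dict, call_vendor_api, threshold: float = 0.5) -> bool:
    """Call an opaque scoring API, make the decision, and record its impact data."""
    score = call_vendor_api(applicant)  # black-box model you pay for
    favorable = score >= threshold
    log_decision(score, favorable, applicant.get("monitored_group", "unknown"))
    return favorable
```

With every result captured this way, the same disparate impact calculation shown earlier can be re-run over rolling windows to catch drift in who the model favors over time.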