6 ways to reduce different types of bias in machine learning

Posted on: Jun 11, 2020

As adoption of machine learning grows, companies must become data experts -- or risk results that are inaccurate, unfair or even dangerous. Here's how to combat ML bias.

As companies step up the use of machine learning-enabled systems in their day-to-day operations, they become increasingly reliant on those systems to help them make critical business decisions. In some cases, the machine learning systems operate autonomously, making it especially important that the automated decision-making works as intended.

However, machine learning-based systems are only as good as the data that's used to train them. If there are inherent biases in the data used to feed a machine learning algorithm, the result could be systems that are untrustworthy and potentially harmful.

In this article, you'll learn why bias in AI systems is a cause for concern, how to identify different types of biases and six effective methods for reducing bias in machine learning.

Why is eliminating bias important?

The power of machine learning comes from its ability to learn from data and apply that learning to new data the system has never seen before. One of the challenges data scientists face, however, is ensuring that the data fed into machine learning algorithms is not only clean and accurate (and, in the case of supervised learning, well labeled) but also free of inherently biased data that can skew the results.

Supervised learning, one of the core approaches to machine learning, depends especially heavily on the quality of its training data, so it should be no surprise that training these systems on biased data produces biased AI systems. Biased AI systems that are put into production can cause real harm, especially in automated decision-making, autonomous operation, or facial recognition software that makes predictions or renders judgments about individuals.
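One simple way to surface this kind of label bias before training is to compare outcome rates across a sensitive attribute in the training set. The sketch below is illustrative only: the column names ("group", "label") and the toy rows are assumptions, not a real dataset or a complete fairness audit.

```python
# Illustrative sketch: check training data for label bias by comparing
# the positive-outcome rate across groups of a sensitive attribute.
# The field names "group" and "label" are hypothetical placeholders.
from collections import defaultdict

def positive_rate_by_group(rows, group_key="group", label_key="label"):
    """Return {group: fraction of positive labels} for the given rows."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for row in rows:
        counts[row[group_key]][0] += row[label_key]
        counts[row[group_key]][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

# Toy training set: group "a" receives the positive label far more
# often than group "b" -- a model trained on this would likely inherit
# that disparity.
training_rows = [
    {"group": "a", "label": 1}, {"group": "a", "label": 1},
    {"group": "a", "label": 1}, {"group": "a", "label": 0},
    {"group": "b", "label": 1}, {"group": "b", "label": 0},
    {"group": "b", "label": 0}, {"group": "b", "label": 0},
]

rates = positive_rate_by_group(training_rows)
print(rates)  # a: 0.75 vs. b: 0.25 -- a large gap is a signal to investigate
```

A large gap between groups does not prove the data is unfair (the disparity may reflect a legitimate factor), but it flags exactly the kind of skew that, left unexamined, gets baked into the trained model.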

Some notable examples of bad outcomes caused by algorithmic bias include a Google image recognition system that misidentified images of minorities in an offensive way, automated credit applications from Goldman Sachs that sparked an investigation into gender bias, and a racially biased AI program used to sentence criminals. Enterprises must be hyper-vigilant about machine learning bias: any value delivered by AI and machine learning systems in terms of efficiency or productivity will be wiped out if the algorithms discriminate against individuals or subsets of the population.