AI is already learning how to discriminate

Posted on: Mar 15, 2018

What happens when robots take our jobs, take on military roles, or drive our vehicles? When we ask these questions about the rapidly expanding role of AI, we often overlook others, like the subject of a WEF paper released this week: how do we prevent the discrimination and marginalization of humans in artificial intelligence?

Machines are increasingly automating decisions. In New York City, for instance, machine learning systems have been used to decide where garbage gets collected, how many police officers to send to which neighborhoods, and whether a teacher should keep their job. Decisions like these raise questions every bit as pressing as those about jobs, warfare, and driving.

While using technology to automate decisions isn't a new practice, the nature of machine learning technology (its ubiquity, complexity, exclusiveness, and opaqueness) can amplify long-standing problems of discrimination. We have already seen this happen: a Google photo-tagging mechanism mistakenly categorized Black people as gorillas, predictive policing tools have been shown to amplify racial bias, and hiring platforms have prevented people with disabilities from getting jobs. The potential for machine learning systems to amplify discrimination will not go away on its own. Companies need to actively teach their technology not to discriminate.
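One place for companies to start is auditing a model's decisions before deployment. A common heuristic is the "four-fifths rule" from US employment law: if one group's rate of favorable outcomes falls below 80% of another's, the system deserves scrutiny. The sketch below shows how such a check might look; the table, column names, and data are hypothetical placeholders, not any vendor's actual audit tool.

```python
# A minimal sketch of a fairness audit: compare positive-outcome rates
# across groups before deploying a model. The DataFrame, the "group" and
# "approved" columns, and the data are all hypothetical illustrations.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame,
                           outcome_col: str,
                           group_col: str,
                           privileged: str,
                           unprivileged: str) -> float:
    """Ratio of positive-outcome rates: unprivileged / privileged.
    Values well below 1.0 (a common rule of thumb is < 0.8) suggest the
    model's decisions may disproportionately exclude one group."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates[unprivileged] / rates[privileged]

# Hypothetical decisions from an automated hiring platform:
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0],
})
ratio = disparate_impact_ratio(decisions, "approved", "group",
                               privileged="A", unprivileged="B")
print(f"disparate impact ratio: {ratio:.2f}")  # 0.50 here: a red flag
```

A check like this only surfaces one narrow kind of disparity, but it illustrates the point: discrimination in automated systems is measurable, and measuring it is a precondition for teaching systems not to discriminate.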

What happens when machines learn to discriminate?

In many parts of the world, particularly in middle- and low-income countries, using machine learning to make decisions that fundamentally affect people's lives, without adequate precautions against discrimination, is likely to have far-reaching, long-lasting, and potentially irreversible consequences. For example:

Insurance companies can now predict an individual's future health risks. At least two private multinational insurance companies operating in Mexico today are using machine learning to figure out how to maximize the efficiency and profitability of their operations. The obvious way to do this in health insurance is to attract as many healthy (i.e., low-cost) customers as possible and deter less healthy (i.e., high-cost) ones. We can easily imagine a scenario in which these multinational insurers, in Mexico and elsewhere, use machine learning to mine a large variety of incidentally collected data (shopping history, public records, demographic data, and so on) to recognize patterns associated with high-risk customers and charge those customers exorbitant, exclusionary rates for health insurance. A huge segment of the population, the poorest and sickest people, would then be unable to afford insurance and be deprived of access to health services.
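To make the mechanism concrete, here is a minimal sketch of how such proxy-based risk pricing could work. Everything in it is illustrative: the synthetic data, the proxy features, and the pricing rule are assumptions for the example, not any real insurer's method. The point is that a model trained on incidental proxies can price out high-risk applicants without ever touching medical records.

```python
# Illustrative sketch: a classifier trained on incidentally collected
# proxy features predicts "high-cost customer", and the quoted premium
# scales with predicted risk. All data and parameters are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical proxies: [monthly pharmacy spend, neighborhood income index]
X = rng.normal(size=(1000, 2))
# Synthetic "high-cost" label correlated with the proxies
y = (X @ np.array([1.5, -1.0]) + rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def quoted_premium(features, base=100.0, multiplier=10.0):
    """Premium grows with predicted risk: high-risk applicants are
    effectively priced out, even though no health data was used."""
    risk = model.predict_proba(np.atleast_2d(features))[0, 1]
    return base * (1 + multiplier * risk)

print(quoted_premium([2.0, -1.5]))   # high predicted risk -> exorbitant quote
print(quoted_premium([-2.0, 1.5]))   # low predicted risk -> cheap quote
```

Nothing in this sketch requires bad intent: a model optimized purely for profitability arrives at exclusionary pricing on its own, which is exactly why precautions have to be designed in rather than assumed.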