From black box to white box: Reclaiming human power in AI
Posted on: Nov 11, 2019

It’s hard to imagine what life was like before the peak of AI hype in which we currently find ourselves. But it was just a few years ago, in 2011, that Apple gave the world the first integrated version of Siri on the iPhone 4S, which people used to impress their friends by asking it banal questions. Google was just beginning to test its self-driving cars in Nevada. And the McKinsey Global Institute had recently released “Big data: The next frontier for innovation, competition, and productivity.”

On the starting blocks of the race to release the next big AI-powered thing, no one was talking about explainable AI. Doing it first, even if no one truly understood how it worked, was paramount. That McKinsey Global Institute report offered a bit of foreshadowing, estimating that companies in nearly every sector of the U.S. economy stored, on average, at least 200 terabytes of data. Back then, some companies were even doing something with that data, but those applications were mostly behind the scenes or highly specialized. They were projects, largely siloed off from core functions, for those new people called data scientists to worry about, but certainly not the heart of the business.

In the years that followed, things took off. By late 2012, data scientist, as most people are sick of hearing by now, had been dubbed the sexiest job of the 21st century, and data teams began working feverishly with the masses of data that companies were storing. In fact, the roots of today’s AI movement crept into our lives with little resistance, despite (or perhaps because of) the fact that, in the grand scheme of things, very few people actually understood the fundamentals of data science or machine learning.

Today, people are denied or granted loans, accepted or refused entrance to universities, offered higher or lower prices on car insurance, and more, all at the hands of AI systems that usually offer no explanations. In many cases, the humans who work for those companies cannot explain the decisions either. This is black box AI, and consumers increasingly, and often unknowingly, find themselves at its mercy. The issue has drawn so much attention that Gartner put explainable AI on its list of Top 10 Data and Analytics Technology Trends for 2019.

To be clear, “black box” is not synonymous with “malicious.” There are plenty of black box systems doing good things, like analyzing medical imagery to detect cancers or other conditions. The point is that while these systems may be more accurate from a technical perspective, models whose outcomes humans cannot explain, no matter what they are predicting, can be harmful to consumers and to businesses. Harm aside, people simply have a hard time trusting what cannot be explained. The healthcare example is instructive here: AI systems often achieve high technical accuracy, yet people still distrust the machine-generated results.