Explainable AI: The margins of accountability

Posted on: Nov 12, 2018

How much can anyone trust a recommendation from an AI? Yaroslav Kuflinski of Iflexion explains explainable AI.

You’re pacing along a hospital corridor, holding your child’s hand. She is lying sedated on a gurney that’s bumping towards the operating theater. It squeaks to a halt and a hurried member of hospital staff thrusts a form at you to sign. It describes the urgent surgical procedure your child is about to undergo, and it requires your signature if the operation is to go ahead. But here’s the rub: at the top of the form, in large, bold letters, it says “DIAGNOSIS AND SURGICAL PLAN COPYRIGHT ACME ARTIFICIAL INTELLIGENCE COMPANY.” At this specific moment, do you think you are owed a reasonable, plain-English explanation of all the inscrutable decisions that an AI has lately been making on your daughter’s behalf? In short, do we need explainable AI?

There are many other situations in which one or more of the actors may consider themselves entitled to an explanation of the reasoning behind an AI’s decisions. What about the use of AI to prioritize targets in the modern battlespace? Or when an AI becomes involved in the criminal justice system? These scenarios are not the stuff of science fiction; they are business-as-usual today, and they are driving the emerging explainable AI (XAI) movement.

The drive towards explainable AI has been gathering momentum for some time. At its core, it’s all about trust. How much can anyone trust a recommendation from an AI? What if that recommendation involves a high-stakes choice? These are, at least in part, social questions, pressed by organizations eagerly awaiting the staggering $16 trillion tidal wave of AI solutions that is said to be on its way.

Trust in me

In PwC’s 2017 Global CEO Survey, 67% of business leaders said they believed that AI and automation would negatively affect stakeholder trust levels in their industry over the next five years.

People are happy to trust an AI when they don’t have too much skin in the game, say, when they’re asking for a movie recommendation. But with high-impact decisions, such as medical diagnosis, they are much more discerning. Crucially, there’s a tension between getting the best decisions and getting the best explanations. The grotesque tragedy is that the most successful algorithms (recursive neural nets and the like) are the worst at explaining themselves. Some experts would go as far as to say that it’s this very quality that gives them the ability to achieve the best results, making explainable AI a challenge, to put it mildly.

Tension and balance

Although in reality there are many more intermediate technologies than such diagrams show, charts plotting predictive power against explainability are often used to give an intuition about this tension: the models at the high-accuracy end of the spectrum tend to sit at the low-explainability end, and vice versa.
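To make the trade-off concrete, here is a minimal, illustrative sketch in Python using scikit-learn. The dataset, models, and hyperparameters are assumptions chosen for demonstration, not taken from the article: a linear model’s reasoning can be read directly from its learned weights, while a black-box neural network exposes no such weights and needs a post-hoc technique, here permutation importance, to be explained at all.

# A minimal sketch of the predictive-power vs. explainability trade-off.
# The dataset, models, and settings below are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

# Inherently interpretable: each feature's learned weight can be read off.
linear = LogisticRegression(max_iter=5000).fit(X_train, y_train)
print("linear model accuracy:", linear.score(X_test, y_test))
top = sorted(zip(linear.coef_[0], data.feature_names),
             key=lambda t: abs(t[0]), reverse=True)[:3]
print("most influential features (by weight):", top)

# Black box: often stronger predictively, but it exposes no readable
# coefficients, so a post-hoc method (here, permutation importance)
# is needed to approximate which inputs drove its decisions.
net = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
print("neural net accuracy:", net.score(X_test, y_test))
result = permutation_importance(net, X_test, y_test,
                                n_repeats=10, random_state=0)
top = sorted(zip(result.importances_mean, data.feature_names),
             reverse=True)[:3]
print("most important features (post hoc):", top)

The point is not the specific numbers but the asymmetry: the interpretable model explains itself for free, while the black box must be explained from the outside, approximately and after the fact.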