Opening Up Black Boxes with Explainable AI

Posted on: May 31, 2018

One of the biggest challenges with deep learning is explaining to customers and regulators how the models get their answers. In many cases, we simply don’t know how the models generated their answers, even if we’re very confident in the answers themselves. However, in the age of GDPR, this black-box style of predictive computing will not suffice, which is driving a push by FICO and others to develop explainable AI.
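What an “explanation” looks like in practice varies, but one simple, model-agnostic technique is permutation importance: score each input feature by how much scrambling it degrades a trained model’s accuracy. The sketch below is a minimal illustration using scikit-learn on synthetic data; the dataset and model are assumptions for demonstration, not FICO’s actual methodology.

```python
# Minimal sketch of permutation importance, one common explainability
# technique. The data and model here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier().fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops mark features the model's predictions actually depend on.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Techniques like this don’t open the black box itself, but they do give customers and regulators a ranked account of which inputs drove a decision.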

Describing deep learning as a black box is not meant to denigrate the practice. In many instances, in fact, the black box aspect of a deep learning model isn’t a bug; it’s a feature. After all, we’re thrilled that, when we build a convolutional neural network with hundreds of input variables and more than a thousand hidden layers (as the biggest CNNs have), it just works. We don’t know exactly how it works, but we’re grateful that it does. If we had been required to explicitly code a program to do what the CNN does, the result would likely have been a complete disaster. We simply could not build the decision-making systems we’re building today without the benefit of self-learning machines.
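For a concrete sense of why such a model resists explanation, here is a minimal Keras sketch of a small CNN (the layer sizes are illustrative assumptions, far smaller than the giants described above). Nothing in the code states a decision rule; the behavior lives entirely in the learned weights.

```python
# A toy CNN for 28x28 grayscale images. The mapping from pixels to
# prediction is learned during training, not hand-coded, which is
# what makes the resulting model a black box.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# After model.fit(...), hundreds of thousands of weights jointly produce
# each prediction; no individual weight "explains" any single decision.
model.summary()
```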

But as good as deep learning has gotten over the past five years, it’s still not good enough. There simply isn’t enough goodwill floating about in today’s world for a hundred-billion-dollar corporation or a trillion-dollar government to tell its consumers or citizens to “trust us” when making life-changing decisions. It’s not just a wary public but also skeptical regulators, buoyed by the GDPR’s new requirements for greater transparency in data processing, who are pushing for greater clarity about how today’s AI-based systems make the decisions they make.

One of the companies on the cutting edge of helping to make AI more explainable is FICO. The San Jose, California-based company is well-known for developing a patented credit scoring methodology (the “FICO score”) that many banks use to determine the credit risk of consumers. It also uses machine learning technology in its Decision Management Suite (DMS), which companies use to automate a range of decision-making processes.

Before joining FICO two years ago, FICO Vice President of Product and Technology Jari Koister worked at Salesforce.com and Twitter, where the use of cutting-edge machine learning technologies and techniques was widely accepted, perhaps even expected, as a way of doing business. But FICO’s clients are more conservative in their adoption. “A lot of our customers cannot really deploy machine learning algorithms in a lot of contexts,” Koister says.