Real Progress Being Made in Explaining AI
Posted on: Dec 10, 2019

One of the biggest roadblocks that could prevent the widespread adoption of AI is explaining how it works. Deep neural networks, in particular, are extremely complex and resist clear description, which is a problem when it comes to ensuring that AI decisions are made fairly and free of human bias. But real progress is being made on the explainable AI (XAI) problem on several fronts.

Google made headlines several weeks ago with the launch of Google Cloud Explainable AI, a collection of frameworks and tools that explain to the user how each data factor contributed to the output of a machine learning model.

“These summaries help enterprises understand why the model made the decisions it did,” wrote Tracy Frey, Google’s director of strategy for Cloud AI, in a November 21 blog post. “You can use this information to further improve your models or share useful insights with the model’s consumers.”

Google’s Explainable AI exposes some of the internal technology that Google created to give its developers more insight into how its large-scale search engine and question-answering systems provide the answers they do. These frameworks and tools leverage complicated mathematical equations, according to a Google white paper on its Explainable AI.

One of the key mathematical elements used is Shapley values, a concept created by Nobel Prize-winning mathematician Lloyd Shapley in the field of cooperative game theory in 1953. Shapley values are helpful in creating “counterfactuals,” or foils, where the algorithm repeatedly assesses what result it would have produced if the value of a certain data point had been different.
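To make the idea concrete, below is a minimal Python sketch, not Google's implementation and not tied to any particular library, of how Shapley values can attribute a single prediction to its input features. Each feature's contribution is its marginal effect averaged over subsets of the other features, with "absent" features replaced by baseline values; this is the counterfactual comparison described above. The `predict` function, the toy linear model, and the all-zeros baseline are illustrative assumptions.

```python
# Minimal sketch: exact Shapley-value feature attributions for one prediction.
# For each feature we average its marginal contribution over all subsets of
# the other features, replacing absent features with baseline values -- the
# counterfactual question of what the model would have predicted had that
# input been different.

from itertools import combinations
from math import factorial


def shapley_attributions(predict, instance, baseline):
    """Exact Shapley values for a single prediction.

    predict  : function taking a list of feature values, returning a number
    instance : the feature values being explained
    baseline : "neutral" feature values used when a feature is absent
    """
    n = len(instance)
    attributions = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Counterfactual inputs with and without feature i present.
                with_i = list(baseline)
                without_i = list(baseline)
                for j in subset:
                    with_i[j] = instance[j]
                    without_i[j] = instance[j]
                with_i[i] = instance[i]
                # Shapley weight: |S|! * (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                attributions[i] += weight * (predict(with_i) - predict(without_i))
    return attributions


# Toy linear "scoring" model (hypothetical): for a linear model with a zero
# baseline, each feature's Shapley value reduces to weight * feature value.
model = lambda x: 2.0 * x[0] + 0.5 * x[1] - 1.0 * x[2]
print(shapley_attributions(model, instance=[3.0, 10.0, 1.0], baseline=[0.0, 0.0, 0.0]))
# Approximately [6.0, 5.0, -1.0] (up to floating-point rounding); the values
# sum to the model's output for this instance, minus its baseline output.
```

Exact computation like this is exponential in the number of features, which is why production explanation systems typically rely on sampling-based approximations of the same quantity.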