Google ponders the shortcomings of machine learning

Posted on: Oct 20, 2018

AI scientists at Google's Google Brain and DeepMind units acknowledge that machine learning is falling short of human cognition, and propose that models of networks of objects and their relations may be a way to let computers generalize more broadly about the world.

Critics of the current mode of artificial intelligence technology have grown louder in the last couple of years, and this week Google, one of the biggest commercial beneficiaries of the current vogue, offered a response, if perhaps not an answer, to the critics.

In a paper published by the Google Brain and DeepMind units of Google, researchers address shortcomings of the field and offer some techniques they hope will bring machine learning further along the path toward what would be "artificial general intelligence," something more like human reasoning.

The research acknowledges that current "deep learning" approaches to AI have failed to even approach human cognitive skills. Without discarding all that has been achieved with techniques such as "convolutional neural networks," or CNNs, the shining success of machine learning, the authors propose ways to impart broader reasoning skills.

The paper, "Relational inductive biases, deep learning, and graph networks," posted on the arXiv pre-print service, is authored by Peter W. Battaglia of Google's DeepMind unit, along with colleagues from Google Brain, MIT, and the University of Edinburgh. It proposes the use of network "graphs" as a means to better generalize from one instance of a problem to another.

Battaglia and colleagues, calling their work "part position paper, part review, and part unification," observe that AI "has undergone a renaissance recently," thanks to "cheap data and cheap compute resources."

However, "many defining characteristics of human intelligence, which developed under much different pressures, remain out of reach for current approaches," especially "generalizing beyond one's experiences."

Hence, "A vast gap between human and machine intelligence remains, especially with respect to efficient, generalizable learning."

The authors cite some prominent critics of AI, such as NYU professor Gary Marcus.

In response, they argue for "blending powerful deep learning approaches with structured representations," and their solution is something called a "graph network." These are models of collections of objects, or entities, whose relationships are explicitly mapped out as "edges" connecting the objects.
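To make the idea concrete, here is a minimal sketch of one graph-network pass over such a structure, following the paper's general scheme of updating edges (relations), then nodes (entities), then a graph-level state. The toy tanh update functions and all names here are illustrative assumptions, not the authors' code; DeepMind separately released a graph_nets library implementing the full version.

```python
# Minimal sketch of a graph-network block: entities are node vectors,
# relations are edge vectors, and one pass updates edges, nodes, and a
# graph-level "global" state in turn. Toy update functions for illustration.
import numpy as np

def gn_block(nodes, edges, senders, receivers, globals_):
    """One graph-network pass.

    nodes:     (n_nodes, d) array of node (entity) features
    edges:     (n_edges, d) array of edge (relation) features
    senders:   index of the sender node for each edge
    receivers: index of the receiver node for each edge
    globals_:  (d,) array of graph-level features
    """
    # 1. Edge update: each relation sees its two endpoints and the globals.
    new_edges = np.tanh(edges + nodes[senders] + nodes[receivers] + globals_)

    # 2. Aggregate incoming edges per node, then update each node.
    agg = np.zeros_like(nodes)
    np.add.at(agg, receivers, new_edges)
    new_nodes = np.tanh(nodes + agg + globals_)

    # 3. Global update: summarize all nodes and edges into graph-level state.
    new_globals = np.tanh(globals_ + new_nodes.mean(0) + new_edges.mean(0))
    return new_nodes, new_edges, new_globals

# Toy graph: 3 entities, 2 relations (0 -> 1, 1 -> 2), 4-dim features.
nodes = np.random.randn(3, 4)
edges = np.random.randn(2, 4)
new_nodes, new_edges, new_globals = gn_block(
    nodes, edges,
    senders=np.array([0, 1]), receivers=np.array([1, 2]),
    globals_=np.zeros(4))
```

Because the same update functions are shared across every node and edge, the block works on graphs of any size and shape, which is what the authors argue lets such models generalize beyond a fixed input structure.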

"Human cognition makes the strong assumption that the world is composed of objects and relations," they write, "and because GNs [graph networks] make a similar assumption, their behavior tends to be more interpretable." View More