Dreaming of artificial intelligence in ancient Greece and Silicon Valley

Posted on: Dec 14, 2018

The largest artificial-intelligence conference, Neural Information Processing Systems (with the regrettable acronym NIPS, changed last month to NeurIPS), has become a hot ticket. This year, it sold out in less than 12 minutes. At last year’s event, Intel threw a packed party with the rapper Flo Rida. Meanwhile, Burning Man — the desert festival begun in 1986, a year before NeurIPS — sold out in under half an hour this year. The theme: “I, Robot.” Reacting to the theme’s announcement, the AI researcher Miles Brundage tweeted, “NIPS is the new Burning Man; Burning Man is the new NIPS.”

AI was once a fringe academic pursuit that reached public consciousness mostly through sci-fi movies like “2001” and “The Terminator.” Now it nets researchers seven-figure salaries and converses with our kids through appliances and phones. Ethical dilemmas like those in the movies — How much autonomy should machines have? Whose priorities should they serve? — have become urgent topics with near-term consequences. And now that AI is replacing jobs and creating art, it is forcing us to confront an age-old question with new intensity: What makes humans so special?

Artificial intelligence has many definitions, but broadly it refers to software that perceives the world or makes decisions. It uses algorithms, or step-by-step instructions (a recipe is an algorithm). Within AI is an area called machine learning, in which algorithms are not hand-coded but trained. Give the computer lots of labeled photos, and it figures out how to label new ones. And within machine learning is deep learning, which uses algorithms loosely modeled on the brain. So-called neural networks pass data among many connected nodes, each performing a bit of computation, like the brain’s neurons. It’s deep learning that’s behind self-driving cars, speech recognition, and superhuman players of Go and poker. It’s deep learning that’s made NeurIPS the new Burning Man.
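The one-line picture above, data passed among connected nodes, each performing a bit of computation, can be sketched in a few lines of Python. This toy example (the names and the XOR task are illustrative, not from the article) wires a tiny two-layer network by hand; in machine learning, the weights would instead be learned from labeled examples.

```python
# A minimal "neural network" forward pass in plain Python: data flows
# through connected nodes, each doing a small computation (a weighted
# sum followed by a threshold). The weights below are set by hand to
# compute XOR; in machine learning they would be learned from data.

def step(z):
    """A simple activation: the node fires (1) if its weighted input is positive."""
    return 1 if z > 0 else 0

def forward(x1, x2):
    # Hidden layer: one node detects "at least one input on" (OR),
    # another detects "both inputs on" (AND).
    h_or = step(1.0 * x1 + 1.0 * x2 - 0.5)
    h_and = step(1.0 * x1 + 1.0 * x2 - 1.5)
    # Output node combines them: OR but not AND, i.e. XOR.
    return step(1.0 * h_or - 1.0 * h_and - 0.5)

print([forward(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# → [0, 1, 1, 0]
```

Stacking more hidden layers of such nodes, and learning the weights from large labeled data sets, is what "deep" learning refers to.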

One of its pioneers traces its history in a new book, “The Deep Learning Revolution.” Terrence J. Sejnowski started as a physicist in the 1970s before finding that his mathematical tools could be used to study information processing in the brain — and to create new forms of information processing in computers. (He’s now a computational neuroscientist at the Salk Institute for Biological Studies and since 1993 has been the president of NeurIPS.) Neural networks have always had devotees, but they were not always popular. Despite initial promise, they couldn’t do much until the rise of large, multi-layered networks — deep learning — in the past few years. We finally have the powerful computers, software refinements and giant data sets to train and operate them.

One is struck by how badly even experts misjudge the progress of this (and other) technology. A key ancestor of deep learning was a one-neuron algorithm developed in the 1950s called a perceptron. A 1958 article in the New York Times read, “The Navy revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence.” Presumably it meant within a generation. The project soon hit snags. While it appeared to spot tanks in photos, it was relying on sky brightness. In 1969 researchers Marvin Minsky and Seymour Papert published a book arguing that complex tasks would require multiple layers of perceptrons but that such networks might not be trainable. They were wrong about training, but their pessimism helped cause a “winter” in the research field that lasted until the 1980s. Deep learning’s limitations, or lack thereof, are still debated.
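The one-neuron perceptron is simple enough to sketch in full. In this illustrative example (the function names and the AND/XOR tasks are my own, not from the article), the classic learning rule nudges the weights after each mistake. It masters linearly separable functions like AND, but no single unit can represent XOR, the limitation at the heart of Minsky and Papert's critique.

```python
# The classic perceptron learning rule, sketched in plain Python.
# A single neuron computes a weighted sum and fires if it is positive;
# training nudges the weights toward each mislabeled example.

def train_perceptron(samples, epochs=20, lr=1.0):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct; ±1 on a mistake
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# AND is linearly separable, so the rule converges.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, *x) for x, _ in AND])   # → [0, 0, 0, 1]

# XOR is not linearly separable: no single unit can ever get it right.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w2, b2 = train_perceptron(XOR)
print([predict(w2, b2, *x) for x, _ in XOR])  # never equals [0, 1, 1, 0]
```

Multiple layers of such units can represent XOR (and far more), which is what Minsky and Papert doubted could be trained, and what backpropagation later made routine.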