How To Make AI The Best Thing To Happen To Us

Posted on: Oct 19, 2017

Many leading AI researchers think that in a matter of decades, artificial intelligence will be able to do not merely some of our jobs, but all of our jobs, forever transforming life on Earth.

The reason that many dismiss this as science fiction is that we've traditionally thought of intelligence as something mysterious that can only exist in biological organisms, especially humans. But such carbon chauvinism is unscientific.

From my perspective as a physicist and AI researcher, intelligence is simply a certain kind of information-processing performed by elementary particles moving around, and there is no law of physics that says one can't build machines more intelligent than us in all ways. This suggests that we've only seen the tip of the intelligence iceberg and that there is an amazing potential to unlock the full intelligence that is latent in nature and use it to help humanity flourish - or flounder.

If we get it right, the upside is huge: Since everything we love about civilization is the product of intelligence, amplifying our own intelligence with AI has the potential to solve tomorrow's thorniest problems. For example, why risk our loved ones dying in traffic accidents that self-driving cars could prevent or succumbing to cancers that AI might help us find cures for? Why not grow productivity and prosperity through automation and use AI to accelerate our research and development of affordable sustainable energy?

I'm optimistic that we can thrive with advanced AI as long as we win the race between the growing power of our technology and the wisdom with which we manage it. But winning that race requires ditching our outdated strategy of learning from mistakes. That strategy served us well with less powerful technology: We messed up with fire and then invented fire extinguishers, and we messed up with cars and then invented seat belts.

However, it's an awful strategy for more powerful technologies, such as nuclear weapons or superintelligent AI - where even a single mistake is unacceptable and we need to get things right the first time. Studying AI risk isn't Luddite scaremongering - it's safety engineering. When the leaders of the Apollo program carefully thought through everything that could go wrong when sending a rocket with astronauts to the moon, they weren't being alarmist. They were doing precisely what ultimately led to the success of the mission.