
To control AI, we need to understand more about humans
Posted on Sep 13, 2017


From Frankenstein to I, Robot, we have for centuries been intrigued with and terrified of creating beings that might develop autonomy and free will.

And now that we stand on the cusp of the age of ever-more-powerful artificial intelligence, the urgency of developing ways to ensure our creations always do what we want them to do is growing.

For some, like Mark Zuckerberg, AI is simply getting better all the time, and if problems come up, technology will solve them. But for others, like Elon Musk, the time to start figuring out how to regulate powerful machine-learning-based systems is now.

On this point, I’m with Musk. Not because I think the doomsday scenario that Hollywood loves to scare us with is around the corner, but because Zuckerberg’s confidence that we can solve any future problems depends on doing now what Musk insists on: learning as much as possible.

And among the things we urgently need to learn more about is not just how artificial intelligence works, but how humans work.

Humans are the most elaborately cooperative species on the planet. We outflank every other animal in cognition and communication – tools that have enabled a division of labor and shared living in which we have to depend on others to do their part. That’s what our market economies and systems of government are all about.

But sophisticated cognition and language—which AIs are already starting to use—are not the only features that make humans so wildly successful at cooperation.

Humans are also the only species to have developed “group normativity” – an elaborate system of rules and norms that designate what is collectively acceptable and not acceptable for other people to do, kept in check by group efforts to punish those who break the rules.

Many of these rules can be enforced by officials with courts and prisons, but the simplest and most common punishments are enacted in groups: criticism and exclusion, refusing to play with those who violate norms, whether in the park, the market, or the workplace.

When it comes to the risks of AIs exercising free will, then, what we are really worried about is whether they will continue to play by our rules and help enforce them.

So far the AI community and the donors funding AI safety research – investors like Musk and several foundations – have mostly turned to ethicists and philosophers to help think through the challenge of building AI that plays nice. Thinkers like Nick Bostrom have raised important questions about the values that AI, and AI researchers, should care about.

But our complex normative social orders are less about ethical choices than they are about the coordination of billions of people making millions of choices on a daily basis about how to behave.

How that coordination is accomplished is something we don’t really understand. Culture is a set of rules, but what makes it change – sometimes slowly, sometimes quickly – remains poorly understood. Law is another set of rules, one that is simple to change in theory but far less so in practice.
