Is diversity the key to collaboration? New AI research suggests so

Posted on May 27, 2022

As artificial intelligence gets better at performing tasks once solely in the hands of humans, like driving cars, many see teaming intelligence as a next frontier. In this future, humans and AI are true partners in high-stakes jobs, such as performing complex surgery or defending against missiles. But before teaming intelligence can take off, researchers must overcome a problem that corrodes cooperation: humans often do not like or trust their AI partners.

Now, new research points to diversity as being a key parameter for making AI a better team player.

MIT Lincoln Laboratory researchers have found that training an AI model with mathematically "diverse" teammates improves its ability to collaborate with other AI it has never worked with before, in the card game Hanabi. Moreover, both Facebook and Google's DeepMind concurrently published independent work that also infused diversity into training to improve outcomes in human-AI collaborative games.

Altogether, the results may point researchers down a promising path to making AI that can both perform well and be seen as good collaborators by human teammates.

"The fact that we all converged on the same idea—that if you want to cooperate, you need to train in a diverse setting—is exciting, and I believe it really sets the stage for the future work in cooperative AI," says Ross Allen, a researcher in Lincoln Laboratory's Artificial Intelligence Technology Group and co-author of a paper detailing this work, which was recently presented at the International Conference on Autonomous Agents and Multi-Agent Systems.

Adapting to different behaviors

To develop cooperative AI, many researchers are using Hanabi as a testing ground. Hanabi challenges players to work together to stack cards in order, but players can only see their teammates' cards and can only give sparse clues to each other about which cards they hold.
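
To make Hanabi's unusual information structure concrete, here is a minimal Python sketch of a Hanabi-like state. It is a hypothetical simplification, not the laboratory's actual environment: it omits the real game's deck composition, discards, and fuse tokens. What it does capture is the asymmetry the article describes: each player observes every hand except their own, and a clue may only name a color or a rank.

```python
# Minimal, hypothetical sketch of Hanabi's information structure.
# Simplified: no duplicate-card counts, discards, or fuse tokens.
import random
from dataclasses import dataclass, field

COLORS = ["red", "green", "blue", "white", "yellow"]
RANKS = [1, 2, 3, 4, 5]

@dataclass
class Card:
    color: str
    rank: int

@dataclass
class HanabiState:
    hands: list                     # hands[i] is player i's hand (hidden from player i)
    fireworks: dict = field(default_factory=lambda: {c: 0 for c in COLORS})
    clue_tokens: int = 8            # clues are a scarce, shared resource

    def observation(self, player: int):
        """What `player` can see: everyone's cards except their own."""
        return [hand if i != player else ["hidden"] * len(hand)
                for i, hand in enumerate(self.hands)]

    def give_clue(self, target: int, attribute):
        """Spend a token to reveal which of the target's cards match a color or rank."""
        if self.clue_tokens == 0:
            raise ValueError("no clue tokens left")
        self.clue_tokens -= 1
        return [attribute in (card.color, card.rank)
                for card in self.hands[target]]

# Deal a toy two-player game and show the information asymmetry.
deck = [Card(c, r) for c in COLORS for r in RANKS]
random.shuffle(deck)
state = HanabiState(hands=[deck[:5], deck[5:10]])
print(state.observation(0))        # player 0 sees only player 1's hand
print(state.give_clue(1, "red"))   # e.g. [True, False, False, True, False]
```

The `observation` method is the crux of why coordination is hard: an agent must reason about its own hidden cards entirely from the sparse clues its teammates choose to give, which is also why a teammate with alien conventions feels confusing and unpredictable.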

In a previous experiment, Lincoln Laboratory researchers tested one of the world's best-performing Hanabi AI models with humans. They were surprised to find that humans strongly disliked playing with this AI model, calling it a confusing and unpredictable teammate. "The conclusion was that we're missing something about human preference, and we're not yet good at making models that might work in the real world," Allen says.

The team wondered if cooperative AI needs to be trained differently. The type of AI being used, called reinforcement learning, traditionally learns how to succeed at complex tasks by discovering which actions yield the highest reward. It is often trained and evaluated against models similar to itself, a process known as self-play. This process has created unmatched AI players in competitive games like Go and StarCraft.
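
The contrast between that self-play recipe and the diversity idea can be sketched in a few lines of Python. Everything here is an illustrative stand-in, not any of the published methods, and the learning update itself is elided: the point is only who the learner practices with. A learner that partners exclusively with a copy of itself scores perfectly on its own conventions, while sampling partners from a varied pool exposes how poorly those conventions transfer.

```python
# Hedged toy contrast: self-play vs. training against a diverse partner pool.
# Agents, environment, and scoring are hypothetical stand-ins.
import random

class Agent:
    def __init__(self, style):
        self.style = style  # stands in for learned policy parameters

    def act(self, obs):
        # A real agent would run a policy network; here style fixes behavior.
        return hash((self.style, obs)) % 3

def play_episode(a, b):
    """Toy cooperative episode: reward is earned only when actions coordinate."""
    obs = random.randint(0, 9)
    return 1.0 if a.act(obs) == b.act(obs) else 0.0

learner = Agent(style="self")

# Self-play: the learner only ever sees a copy of itself, so it coordinates
# perfectly on private conventions (this sum is always 100.0).
selfplay_return = sum(play_episode(learner, learner) for _ in range(100))

# Diversity-based training: partners are sampled from a pool of distinct
# policies, so only conventions that generalize earn reward.
partner_pool = [Agent(style=s) for s in ("cautious", "aggressive", "random")]
pool_return = sum(play_episode(learner, random.choice(partner_pool))
                  for _ in range(100))

print(selfplay_return, pool_return)  # self-play looks perfect; the pool does not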