
Speaker "Dr. Rui Wang" Details


Topic

Open-Ended Reinforcement Learning: Endlessly Generating Diverse Challenges and Their Solutions in a Single Run

Abstract

Reinforcement learning (RL) is good at solving certain hand-designed challenges, and already reliably beats humans in demanding games such as Go and DOTA 2. In real-life applications such as autonomous driving, however, there is often a tremendous number of edge cases that are impossible to design by hand, and some of those cases are too challenging to solve directly. In this talk, we introduce and combine the concept of open-endedness with RL, which leads to a novel class of algorithms that create increasingly diverse and challenging problems on their own, while automatically forming a curriculum that teaches agents to solve those problems. Within a single run of such an algorithm, the capability of the agents increases along with the difficulty and diversity of the created tasks, until the agents solve extremely challenging tasks that cannot be solved either by direct optimization or by learning through a hand-designed curriculum. Open-ended RL has interesting applications in robotics, and can potentially be used to invent new video games, new proteins, or new chemical processes, or even find applications in algorithmic trading. Because open-ended RL algorithms often require a significant amount of compute, they rely heavily on distributed computing. We will also introduce the distributed computing framework we designed and implemented to efficiently scale RL-style workloads across multiple machines, facilitating large-scale RL and population-based training on clusters and in the cloud.
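The loop the abstract describes can be illustrated with a minimal sketch: maintain a population of (environment, agent) pairs, mutate environments to generate new challenges, keep only those that are neither too easy nor too hard for the current agents, transfer a promising agent into each new environment, and keep optimizing everything. Everything below is a hypothetical toy stand-in (a scalar "difficulty" as the environment, a scalar "skill" as the agent, made-up thresholds), not the speaker's actual algorithm:

```python
import random

def score(agent, env):
    """Reward: how closely an agent's skill matches an env's difficulty."""
    return -abs(agent - env)

def optimize(agent, env, steps=20, lr=0.3):
    """Stand-in for the inner RL loop: nudge the agent toward solving env."""
    for _ in range(steps):
        agent += lr * (env - agent)
    return agent

def open_ended_run(generations=30, seed=0):
    rng = random.Random(seed)
    population = [(0.0, 0.0)]  # (environment, agent) pairs, start trivial
    for _ in range(generations):
        # 1. Mutate an existing environment to propose a new challenge.
        env, _ = rng.choice(population)
        child_env = env + rng.uniform(0.5, 1.5)
        # 2. Minimal criterion: accept it only if it is challenging but
        #    not hopeless for the best current agent.
        best = max(score(a, child_env) for _, a in population)
        if -2.0 < best < -0.1:
            # 3. Transfer the most promising agent, then optimize it.
            donor = max(population, key=lambda p: score(p[1], child_env))[1]
            population.append((child_env, optimize(donor, child_env)))
        # 4. Keep optimizing every agent on its paired environment.
        population = [(e, optimize(a, e, steps=5)) for e, a in population]
    return population
```

In a single run, the environment difficulties in the population ratchet upward while each paired agent tracks its environment, which is the sense in which the curriculum emerges automatically rather than being hand-designed.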
Who is this presentation for?
na
Prerequisite knowledge:
na
What you'll learn?
na

Profile

Dr. Rui Wang holds a Ph.D. in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign and has over 9 years of professional experience in the software and internet industry. Rui is currently an AI researcher at Uber AI, where he leads research and engineering projects in deep reinforcement learning and neuroevolution, and is passionate about advancing fundamental research in AI/ML and connecting cutting-edge advances to broader business and product needs. His work has been published at top-tier ML/AI conferences and workshops (e.g., GECCO, IJCAI, NeurIPS, the Deep RL Workshop, the Optimization Workshop, and VizGEC) as well as in many IEEE journals and conference proceedings. He regularly delivers invited talks at industry forums (e.g., AI NEXTCon, AI Accelerator Summit, and AI tech meetups). His recent research on open-ended AI algorithms won a Best Paper Award at GECCO 2019 and was covered by mainstream science and tech media such as Science, Wired, and Quanta Magazine.