Speaker: Jeremy Gu



Improve Customer Experience through Multi-Armed Bandits
Subtitle: A Reinforcement Learning-Based Optimization


To accelerate innovation and learning, the data science team at Uber is looking to optimize the Driver, Rider, Eater, Restaurant, and Courier experience through reinforcement learning methods. The team has implemented bandit methods of optimization, which learn iteratively and rapidly from continuous evaluation of the relevant metrics. Recently, we completed an AI-powered experiment using bandit techniques for content optimization to improve customer engagement. The technique improved the customer experience compared to classic hypothesis-testing methods. In this session, we will walk through use cases at Uber where this technique has proven its value and explain how bandits have helped optimize and improve customer experience and engagement at Uber.
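The iterative learning the abstract describes can be illustrated with a minimal multi-armed bandit sketch. This is not Uber's implementation; it is a generic epsilon-greedy simulation over hypothetical Bernoulli "content variants" (the arm success rates below are made up for illustration). At each round the algorithm mostly exploits the variant with the best observed engagement rate, while occasionally exploring others, so traffic shifts toward the winner as evidence accumulates:

```python
import random

def epsilon_greedy_bandit(rewards_by_arm, n_rounds=10000, epsilon=0.1, seed=42):
    """Simulate an epsilon-greedy bandit over Bernoulli arms.

    rewards_by_arm: hypothetical true engagement rates per content variant.
    Returns per-arm pull counts and running reward-rate estimates.
    """
    rng = random.Random(seed)
    n_arms = len(rewards_by_arm)
    counts = [0] * n_arms          # times each arm was shown
    estimates = [0.0] * n_arms     # running mean reward per arm

    for _ in range(n_rounds):
        if rng.random() < epsilon:
            # Explore: pick a random arm to keep learning about all variants.
            arm = rng.randrange(n_arms)
        else:
            # Exploit: pick the arm with the best estimated engagement so far.
            arm = max(range(n_arms), key=lambda a: estimates[a])
        reward = 1.0 if rng.random() < rewards_by_arm[arm] else 0.0
        counts[arm] += 1
        # Incremental mean update.
        estimates[arm] += (reward - estimates[arm]) / counts[arm]

    return counts, estimates

# Three hypothetical variants with engagement rates 5%, 8%, and 12%.
counts, estimates = epsilon_greedy_bandit([0.05, 0.08, 0.12])
```

Unlike a fixed-split A/B test, the bandit reallocates traffic during the experiment: after enough rounds, most pulls concentrate on the highest-rate arm while the others receive only exploration traffic.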


Jeremy Gu is a Senior Data Scientist on the Experimentation Platform team at Uber, where he works on productionizing continuous experiments. Previously, Jeremy was an applied scientist at Amazon, working on the automated advertising and Amazon Web Services (AWS) teams. Jeremy is Vice President of the San Francisco Bay Area Chapter of the American Statistical Association (ASA). Before that, he was a chapter officer of the ASA in Seattle for two years. He promotes the practice and profession of statistics in the Bay Area and organizes events such as career development sessions and seminars for local students and working professionals.