
Speaker: Sanaz Bahargan


Topic

NLP challenges in task-based dialog systems

Abstract

Goal-oriented dialog systems enable users to complete specific goals such as booking a flight, ordering food, or checking the weather. In most cases, these systems consist of multiple ML models trained on labeled data to perform natural language understanding, state tracking, policy learning, natural language generation, and slot filling. A grand challenge here is obtaining labeled data for each domain: annotated dialog data is scarce, and annotation is time-consuming and expensive. In this talk, I address how we can overcome this challenge by generating annotated data with a novel dialog simulator, starting from a few seed dialogs and the specifications of APIs and entities provided by the developer.
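To make the simulation idea concrete, here is a minimal, purely illustrative sketch. The talk's actual simulator is not described in detail here, so every name below (`API_SPEC`, `SEED_TEMPLATES`, `generate_dialogs`) is a hypothetical stand-in: the point is only that expanding a few seed templates with entity values from a developer-provided specification yields utterances whose slot annotations come for free.

```python
import random

# Hypothetical entity catalog, standing in for the developer-provided
# specifications of APIs and entities mentioned in the abstract.
API_SPEC = {
    "cuisine": ["thai", "italian", "mexican"],
    "city": ["boston", "seattle"],
}

# A few seed dialog utterances with slot placeholders.
SEED_TEMPLATES = [
    "book a {cuisine} restaurant in {city}",
    "find {cuisine} food near {city}",
]

def generate_dialogs(n, seed=0):
    """Sample templates and entity values to produce annotated utterances."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        template = rng.choice(SEED_TEMPLATES)
        # Fill every slot; the sampled values double as the slot labels.
        slots = {name: rng.choice(values) for name, values in API_SPEC.items()}
        out.append({"text": template.format(**slots), "slots": slots})
    return out

for example in generate_dialogs(3):
    print(example["text"], "->", example["slots"])
```

A real simulator would also vary dialog structure (multi-turn flows, confirmations, corrections), but even this template-expansion core shows why a handful of seeds plus an entity spec can replace large amounts of manual annotation.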
 
In addition, I will discuss the different components of the dialog system, such as the dialog context encoder, NER, action prediction, and argument filling, and how we train each model. Finally, I will show that this technique can significantly reduce developer burden while producing robust experiences.
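The four components named above can be sketched as a simple pipeline. This is a hypothetical illustration, not the talk's implementation: the component names come from the abstract, while the stub logic below (string concatenation for encoding, a toy entity catalog, keyword-based action choice) merely shows how the stages hand data to one another.

```python
def encode_context(turns):
    # Dialog context encoder: in practice a neural encoder over the
    # conversation history; here, lowercase concatenation of turns.
    return " ".join(turns).lower()

def tag_entities(context):
    # NER: tag known entity mentions. A real model would be learned;
    # this toy catalog is an assumption for illustration.
    catalog = {"thai": "cuisine", "boston": "city"}
    return {word: label for word, label in catalog.items() if word in context}

def predict_action(context, entities):
    # Action prediction: choose the next API call from the encoded state.
    return "FindRestaurant" if "restaurant" in context else "Fallback"

def fill_arguments(action, entities):
    # Argument filling: map tagged entities onto the action's parameters.
    return {label: word for word, label in entities.items()}

# Chain the stages over a two-turn conversation.
turns = ["book a thai restaurant", "in boston please"]
context = encode_context(turns)
entities = tag_entities(context)
action = predict_action(context, entities)
args = fill_arguments(action, entities)
print(action, args)  # FindRestaurant {'cuisine': 'thai', 'city': 'boston'}
```

In the trained system each stage would be a separate model, which is why the simulator's annotated data matters: every stage needs its own labels, and the simulator produces all of them from the same generated dialogs.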

Profile

Sanaz is an applied scientist at Amazon Lab126, working on NLP and Deep Learning. Before that, she was at Twitter, working on Deep Learning, NLP, and Learning to Rank problems with a focus on the Search, Explore, and Trends/Events products. Over the last few years, she has worked extensively on Learning to Rank, Text Classification, Transfer Learning, Continuous Learning, and Model Optimization.
 
Sanaz completed her Ph.D. in Machine Learning at the Computer Science Department of Boston University in 2017.