
Speaker "Stepan Pushkarev" Details


Topic

Unpredictable predictions of self-driving car AI: handling inference in an anomalous, ever-changing environment.

Abstract

No matter how well a machine learning model is trained, the inference output space leaves plenty of room for irrelevant and unexpected results when the real world hands the model an unforeseen challenge. Those erroneous inferences may propagate further into a business process or even lead to accidents; there are notorious cases we all know. For a business to rely on AI/ML, such outcomes are unacceptable.

Schematically, the process of putting ML into business processes looks clear and simple: data scientists prepare an initial dataset to build and train a model; the model is deployed into an application to make inferences and predictions on real data fed to its inputs; while the model serves, it is monitored for accuracy, retrained when it shows a certain degree of discrepancy, and redeployed into production. Train, deploy, monitor, over and again - and that is it. Except, of course, it is not.

Take self-driving cars: the AI is prone not only to deliberate adversarial attacks - defaced or vandalized road signs, a mirror placed in front of the AI-sight camera - but also to natural environmental edge cases - a pedestrian wearing a mascot costume, a non-standard type of traffic light - that make it deliver decisions we call dumb and, in some cases, dangerous.

The talk is dedicated to the problems of including such edge cases in the self-driving car's AI inference space, to practical solutions, and to their implementation in business operations. We will discuss a robust monitoring and retraining solution for edge cases, and the Active Learning concept applied to a business's AI/ML operations so that those cases get handled and learned.

In-depth technical topics covered in this talk:
- Generative Adversarial Networks for concept drift detection
- Density-based clustering algorithms for concept drift monitoring
- Masked Autoencoder for Distribution Estimation (MADE) for edge concept detection
- Active Learning optimisation
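The train-deploy-monitor-retrain loop described in the abstract can be sketched in a few lines. This is a toy illustration, not the speaker's or Hydrosphere.io's actual API: all names (`serve`, `retrain`, the majority-label "model") are made up for the example, and real retraining would of course refit a real model.

```python
def accuracy(window):
    """Fraction of correct predictions in a monitoring window."""
    return sum(yp == yt for _, yt, yp in window) / len(window)

def retrain(window):
    """Toy 'retraining': fit a majority-label model on the recent window."""
    ones = sum(yt for _, yt, _ in window)
    majority = 1 if ones * 2 >= len(window) else 0
    return lambda x: majority

def serve(stream, model, threshold=0.8, window_size=50):
    """Train -> deploy -> monitor -> retrain loop (sketch).

    Serve predictions on a stream of (input, true_label) pairs, watch
    accuracy over a sliding window, and retrain + redeploy whenever
    windowed accuracy drops below the threshold.
    """
    window, retrains = [], 0
    for x, y_true in stream:
        window.append((x, y_true, model(x)))
        if len(window) == window_size:
            if accuracy(window) < threshold:
                model = retrain(window)   # "redeploy" the new model
                retrains += 1
            window = []
    return model, retrains
```

For example, on a stream whose labels flip from 0 to 1 halfway through (a crude concept drift), the loop retrains exactly once and the redeployed model tracks the new concept.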
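The abstract's density-based drift monitoring can be illustrated with the simplest density proxy there is: the distance from live inputs to the training data. The sketch below uses 1-D features and a mean k-nearest-neighbour distance; the thresholding rule (`factor` times the training data's own baseline score) is an assumption made for the example, not a method from the talk.

```python
def knn_distance(x, reference, k=3):
    """Mean distance from x to its k nearest reference points."""
    dists = sorted(abs(x - r) for r in reference)
    return sum(dists[:k]) / k

def drift_score(batch, reference, k=3):
    """Average k-NN distance of a live batch to the training data.

    A score far above what the training data scores against itself
    means the inputs have left the density the model was trained on -
    exactly the edge-case region where inferences stop being trustworthy.
    """
    return sum(knn_distance(x, reference, k) for x in batch) / len(batch)

def is_drifted(batch, reference, k=3, factor=3.0):
    """Flag a batch whose score exceeds `factor` times the baseline."""
    baseline = drift_score(reference, reference, k)
    return drift_score(batch, reference, k) > factor * max(baseline, 1e-9)
```

A real pipeline would use multidimensional embeddings and a proper density-based clustering algorithm (e.g. DBSCAN-style neighbourhoods) instead of raw 1-D distances, but the monitoring signal is the same idea.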
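Finally, the Active Learning concept the abstract mentions is, at its core, a sampling policy: route the inputs the model is least certain about to a human labeler, so the next retraining round concentrates on the edge cases. A minimal uncertainty-sampling sketch for a binary classifier (function names and the budget parameter are illustrative, not from the talk):

```python
def uncertainty(prob):
    """How far a binary-class probability is from full confidence:
    0.0 for prob in {0, 1}, maximal (0.5) for prob = 0.5."""
    return 1.0 - max(prob, 1.0 - prob)

def select_for_labeling(samples, predict_proba, budget=2):
    """Uncertainty sampling: return the `budget` inputs the model is
    least sure about, to be sent for human labeling and retraining."""
    return sorted(samples,
                  key=lambda x: uncertainty(predict_proba(x)),
                  reverse=True)[:budget]
```

With `predict_proba` returning probabilities near 0.5 for ambiguous inputs, the selector picks precisely those ambiguous inputs and ignores the confident ones.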

Profile

Stepan Pushkarev is the CTO at Hydrosphere.io. His background is in engineering data and AI/ML platforms. He has spent the last couple of years building continuous delivery and monitoring tools for machine learning applications, as well as designing streaming data platforms. He works closely with data scientists to make them productive in their daily operations and efficient in delivering value.