
Speaker "Sneha Jha" Details Back

 

Topic

Data doesn't lie... and other lies

Abstract

Data-driven decision-making systems are already ubiquitous and still growing in use. Predictive models now aid human decisions in a variety of domains. The prevailing narrative is that when complex decisions, often known to be subject to human biases, are delegated to algorithms, they are governed by mathematical principles untouched by human prejudice. The current trend in data-driven technologies clearly contradicts this notion and can propagate discrimination. We take a journey through application domains where the biases of such data systems can be highlighted and understood. We also examine the causes of bias inherent in the use of big data and how a lack of transparency makes the problem difficult to diagnose and fix. Finally, we present some steps toward detecting and addressing bias in automated data-driven systems.

Profile

Sneha Jha is a Senior Researcher in the Natural Language and AI group at Nuance Communications, working at the intersection of natural language processing, machine learning, and healthcare. At Nuance, she primarily works on clinical NLP, information extraction, interpretability of statistical models, and knowledge engineering for rule-based expert systems. She has a keen interest in the role of technology in policy, law, and ethics.