Speaker "Maya Gupta" Details

Topic

The Power of Monotonicity: TensorFlow Lattice

Abstract

After a brief introduction to core TensorFlow concepts, we'll focus on the TensorFlow Lattice package, which empowers you to train more interpretable machine-learned models without sacrificing accuracy. TF Lattice enables you to input your prior information about global trends into your models, such as that closer coffee shops are better (if all other features are the same). Such global trends may be missed by flexible models like RFs and DNNs when trained on noisy data, and may only become problems when you run your model on examples that are different from your training data (data shift). By learning your preferred global trends, TF Lattice produces models that generalize better, and that you can explain and debug more easily, because you know what the model is doing. We'll show you how to use TF Lattice's pre-built TF Estimators, and how to use the underlying TF operators to build your own deep lattice network models or plug-n-play with other TF models, including DNNs. Suitable for TF newbies and advanced TF users.
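To make the monotonicity idea concrete, here is a minimal numpy sketch (not the TF Lattice API itself) of the trick its calibration layers rely on: parameterize a piecewise-linear function by non-negative increments between fixed keypoints, so the learned function is guaranteed non-decreasing no matter what values the unconstrained parameters take. The function and variable names below are illustrative, not from the library.

```python
import numpy as np

def monotonic_calibrator(keypoints_x, raw_deltas):
    """Build a guaranteed non-decreasing piecewise-linear function.

    keypoints_x: sorted 1-D array of input keypoints (length n).
    raw_deltas: unconstrained parameters (length n - 1); squaring
        makes every increment >= 0, so output values can only go up.
    """
    # Cumulative sum of non-negative increments -> non-decreasing y values.
    keypoints_y = np.concatenate([[0.0], np.cumsum(np.square(raw_deltas))])

    def f(x):
        # Linear interpolation between keypoints; flat beyond the ends.
        return np.interp(x, keypoints_x, keypoints_y)

    return f

# Hypothetical feature: distance to the nearest coffee shop. The
# calibrated output never decreases as distance grows; "closer is
# better" is then encoded by negating the output (or the feature).
cal = monotonic_calibrator(np.array([0.0, 1.0, 2.0, 5.0]),
                           raw_deltas=np.array([0.3, -0.7, 0.2]))

xs = np.linspace(0.0, 5.0, 50)
ys = cal(xs)
assert np.all(np.diff(ys) >= 0)  # monotone, regardless of raw_deltas
```

In TF Lattice the same guarantee comes from constrained training rather than squaring, and multi-feature lattices extend it to higher dimensions, but the payoff is identical: the monotone trend holds everywhere, including on shifted data the model never saw during training.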

Profile

Gupta runs the GlassBox Machine Learning R&D Team at Google, focusing on designing controllable and interpretable machine learning solutions that achieve state-of-the-art accuracy. Before joining Google in 2012, Gupta was a Professor of Electrical Engineering at the University of Washington in Seattle. She earned her PhD in EE at Stanford University. She has also worked for Ricoh, NATO, and HP, and runs the laser-cut jigsaw puzzle company Artifact Puzzles.