
Speaker "Chris Fregly" Details

 

Topic

Real-Time, Multi-Cloud ML/AI with PipelineAI, TensorFlow, GPU, TPU, Spark, Kafka

Abstract

Traditional machine learning pipelines end with lifeless models sitting on disk in the research lab. These models are typically trained on stale, offline, historical batch data.

Static models and stale data are not sufficient to power today's modern, AI-first enterprises, which require continuous model training, continuous model optimization, and lightning-fast model experiments directly in production.

Through a series of open-source, hands-on demos and exercises, we will use PipelineAI to breathe life into these models using three new techniques that we've pioneered:

* Continuous Optimizing (CO)
* Continuous Training (CT)
* Continuous Experimentation (CE)

CO, CT, and CE have proven to maximize pipeline efficiency, minimize pipeline costs, and increase pipeline insight at every stage, from continuous model training (offline) to live model serving (online).
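To make the "Continuous Experimentation" idea concrete, here is a minimal, hypothetical sketch of one common mechanism behind in-production model experiments: weighted traffic splitting between a live model and a candidate. The variant names, weights, and function are invented for illustration and are not part of PipelineAI's API.

```python
import random

def route_request(variants, rng=random.random):
    """Pick a model variant for one request via weighted sampling.

    variants: list of (name, weight) pairs; weights need not sum to 1.
    rng: callable returning a float in [0, 1); injectable for testing.
    """
    total = sum(weight for _, weight in variants)
    r = rng() * total
    upto = 0.0
    for name, weight in variants:
        upto += weight
        if r <= upto:
            return name
    # Guard against floating-point rounding at the top of the range.
    return variants[-1][0]

# Example: send ~90% of traffic to the live model, ~10% to a candidate,
# then compare their live metrics before promoting the candidate.
variants = [("model_v1_live", 0.9), ("model_v2_candidate", 0.1)]
choice = route_request(variants)
```

Routing each prediction request this way lets a new model earn its promotion on live traffic rather than on offline batch metrics alone, which is the core of the experimentation loop described above.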

Profile

Chris Fregly is a Developer Advocate for AI and Machine Learning at Amazon Web Services (AWS), based in San Francisco, California. He is co-author of the O'Reilly book "Data Science on AWS."
 
Chris is also the founder of several global meetups focused on Apache Spark, TensorFlow, and Kubeflow. He regularly speaks at AI and machine learning conferences around the world, including O'Reilly AI & Strata, the Open Data Science Conference (ODSC), and the GPU Technology Conference (GTC).
 
Previously, Chris was the founder of PipelineAI, where he worked with many AI-first startups and enterprises to continuously deploy ML/AI pipelines using Apache Spark ML, Kubernetes, TensorFlow, Kubeflow, Amazon EKS, and Amazon SageMaker.