
Speaker "Jon Peck" Details

Topic

Productionizing your Machine Learning Models

Abstract

You've developed and trained your ML model, and it performs beautifully in your development environment -- but what happens when you move it into production, and it suddenly has to scale to massively varying elastic workloads, compete with other models for memory and processing resources, or mesh with models deployed in other languages and frameworks?

It isn't enough to simply fire up a machine instance, write a Flask wrapper, and call it a day: properly productionizing a model requires a deep understanding of container management, load balancing, CI/CD, dynamic resource allocation, and more. In this talk, we'll look at what your team does and does not need to build in order to move from weeks of deployment time to mere minutes, while preserving elasticity, low latency, and flexibility.
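For context, the "Flask wrapper" pattern the abstract refers to looks roughly like the sketch below: a single HTTP endpoint around an in-process model. The `predict` function here is a hypothetical stand-in for a real trained model's inference call, not anything from the talk itself.

```python
# A minimal sketch of the naive "Flask wrapper" deployment pattern.
# `predict` is a hypothetical placeholder for a trained model.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(features):
    # Stand-in for real model inference (e.g. model.predict(features)).
    return {"score": sum(features) / len(features)}

@app.route("/predict", methods=["POST"])
def predict_endpoint():
    features = request.get_json()["features"]
    return jsonify(predict(features))

if __name__ == "__main__":
    # One process on one instance: no load balancing, no autoscaling,
    # no resource isolation -- the limitations the talk addresses.
    app.run(host="0.0.0.0", port=5000)
```

This works in development, but as the abstract notes, it leaves container management, load balancing, CI/CD, and dynamic resource allocation entirely unsolved.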

Profile

A full-stack developer with two decades of industry experience, Jon Peck now focuses on bringing scalable, discoverable, and secure machine-learning microservices to developers across a wide variety of platforms via Algorithmia.com.

Organizer: Seattle Building Intelligent Applications Meetup
Speaking Experience: Galvanize, CodeFellows, Metis, Epicodus (tech talks); OpenSeattle; SeattleJS
Educator: Cascadia College, Seattle C&W, independent instruction
Lead Developer: Empower Engine, Giftstarter, Mass General Hospital, Cornell University