
Speaker "Miro Enev" Details

 

Topic

Accelerating Hyperparameter Optimization (HPO) with RAPIDS on AWS SageMaker.

Abstract

 
Hyperparameter optimization (HPO) improves model quality by searching over hyperparameters: values that are not learned during training but instead control the learning process itself (e.g., model size). This search can significantly boost model quality relative to default settings and non-expert tuning; however, HPO can take an exceedingly long time on a non-accelerated platform.
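The idea can be sketched as a minimal random search over a toy scoring function. This is a hypothetical stand-in (the function name, parameters, and search ranges are illustrative); the workshop itself drives the search with SageMaker's tuner and RAPIDS-accelerated training:

```python
import random

def train_and_score(max_depth, learning_rate):
    """Hypothetical stand-in for training a model and returning a
    validation score; a real HPO trial would train a model here."""
    # Pretend the best settings are max_depth=8, learning_rate=0.1.
    return -((max_depth - 8) ** 2) - 100 * (learning_rate - 0.1) ** 2

random.seed(0)
best_score, best_params = float("-inf"), None
for _ in range(50):  # 50 random trials over the search space
    params = {
        "max_depth": random.randint(2, 16),
        "learning_rate": random.uniform(0.01, 0.3),
    }
    score = train_and_score(**params)
    if score > best_score:
        best_score, best_params = score, params

print(best_params)  # best hyperparameters found by the search
```

Because each trial is an independent training run, the search parallelizes naturally across instances, which is what makes GPU acceleration of the per-trial training so impactful.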
 
In this workshop, we'll show you how to use SageMaker to run an HPO workflow that is vastly accelerated using RAPIDS and GPUs. For instance, we measured a 12x speedup and a 4.5x reduction in cost when comparing GPU and CPU EC2 Spot instances.
 
In addition to covering key concepts, we'll walk through a notebook that lets you replicate this workflow on a cloud instance and shows you how to plug in your own dataset. We'll also cover model deployment and serving for on-demand requests or large batch inputs.
 
Requirements: Participants are encouraged to have an AWS Account with access to GPU instances before joining the workshop so that they can follow along.
 

Profile

Sr. Solution Architect & Data Scientist @ NVIDIA, focusing on deep learning.