
Speaker "Rustem Feyzkhanov" Details

Topic

Building scalable end-to-end deep learning pipelines in the cloud

Abstract

Machine learning and deep learning have become essential for many companies, for both internal and external use. One of the main challenges in deploying them is finding the right way to train and operationalize models within the company. A serverless approach to deep learning provides a simple, scalable, affordable, and reliable architecture for doing so. This presentation shows how to build such pipelines on AWS infrastructure.
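
As a rough illustration of the serverless inference pattern the talk centers on, the sketch below shows a minimal AWS Lambda handler that loads a pre-trained TensorFlow/Keras model from S3 on cold start and serves predictions on each invocation. The bucket, key, and input format are hypothetical placeholders, not details taken from the abstract.

```python
# Minimal sketch of serverless deep learning inference on AWS Lambda.
# Assumes a TensorFlow/Keras model stored in S3; all names are hypothetical.
import json

import boto3
import numpy as np
import tensorflow as tf

s3 = boto3.client("s3")
MODEL_BUCKET = "my-model-bucket"        # hypothetical bucket
MODEL_KEY = "models/classifier.keras"   # hypothetical object key
MODEL_PATH = "/tmp/model.keras"         # Lambda's writable scratch space

_model = None  # cached across warm invocations of the same container


def _load_model():
    """Download and load the model once per container (cold start only)."""
    global _model
    if _model is None:
        s3.download_file(MODEL_BUCKET, MODEL_KEY, MODEL_PATH)
        _model = tf.keras.models.load_model(MODEL_PATH)
    return _model


def handler(event, context):
    """Lambda entry point: expects a JSON body with an 'inputs' array."""
    model = _load_model()
    inputs = np.array(json.loads(event["body"])["inputs"], dtype=np.float32)
    preds = model.predict(inputs)
    return {
        "statusCode": 200,
        "body": json.dumps({"predictions": preds.tolist()}),
    }
```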
Serverless architecture changes the rules of the game: instead of thinking about cluster management, scalability, and queue processing, you can focus entirely on training the model. The downside of this approach is that you have to keep certain limitations in mind and organize the training and deployment of your model accordingly. This presentation shows how to use services like AWS Batch, AWS Fargate, Amazon SageMaker, AWS Lambda, and AWS Step Functions to organize scalable deep learning pipelines.
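
For the orchestration side, the sketch below shows one way an AWS Step Functions state machine could chain a training job on AWS Batch with a deployment Lambda, which is the kind of pipeline the listed services make possible. All ARNs, names, and the IAM role are hypothetical, and the actual pipeline layout presented in the talk may differ.

```python
# Minimal sketch of a train-then-deploy pipeline as an AWS Step Functions
# state machine (Amazon States Language). All ARNs and names are hypothetical.
import json

import boto3

definition = {
    "Comment": "Hypothetical pipeline: train on AWS Batch, then deploy via Lambda",
    "StartAt": "TrainModel",
    "States": {
        "TrainModel": {
            "Type": "Task",
            # The .sync integration makes Step Functions wait for the Batch job to finish.
            "Resource": "arn:aws:states:::batch:submitJob.sync",
            "Parameters": {
                "JobName": "train-model",
                "JobQueue": "arn:aws:batch:us-east-1:123456789012:job-queue/training",
                "JobDefinition": "arn:aws:batch:us-east-1:123456789012:job-definition/train:1",
            },
            "Next": "DeployModel",
        },
        "DeployModel": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:us-east-1:123456789012:function:deploy-model",
            "End": True,
        },
    },
}

sfn = boto3.client("stepfunctions")
sfn.create_state_machine(
    name="deep-learning-pipeline",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsExecutionRole",  # hypothetical role
)
```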

Profile

Rustem Feyzkhanov is a senior machine learning engineer at Instrumental, where he works on analytical models for the manufacturing industry, and an AWS Machine Learning Hero. Rustem is passionate about serverless infrastructure (and AI deployments on it) and is the author of the course and book "Serverless Deep Learning with TensorFlow and AWS Lambda" as well as "Practical Deep Learning on the Cloud". He is also the main contributor to https://github.com/ryfeus/lambda-packs, an open-source repository of serverless packages.