Google Adds Deep Learning Containers to AI Lineup
Posted on: Jul 12, 2019

Deep Learning Containers, a service recently rolled out in beta by Google, aims to help AI developers package the storage and computing dependencies required by services like its Kubernetes Engine, particularly when developers prototype locally while juggling multiple cloud tools.

The goal is to ensure that those dependencies are packaged correctly and consistently for use with container runtimes handling emerging AI applications, the search giant (NASDAQ: GOOGL) said.

Along with making it easier to scale deep learning and other applications in the cloud or on-premises, the new Google Cloud container service also comes with optimized versions of TensorFlow. That feature can be used, for example, to train models on Nvidia (NASDAQ: NVDA) GPUs or deploy them on Intel (NASDAQ: INTC) CPUs.

In a recent blog post, Mike Cheng, a Google software engineer, noted that Deep Learning Containers come with a preconfigured Jupyter Notebook environment. The cloud provider bills the new service as delivering a “consistent and portable environment” for accelerating the development and deployment of machine learning projects.

“At some point, [developers] will likely need a beefier machine than what [a] local machine has to offer,” with local data and packages that must be installed in that environment, Cheng said. “Deep Learning Containers can be extended to include local files, and then these custom containers can then be deployed in a Cloud AI Platform Notebooks instance” or Google Kubernetes Engine.
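The workflow Cheng describes can be sketched as a small Dockerfile; the base image name, file names, and paths below are illustrative assumptions, not details taken from the article:

```dockerfile
# Hypothetical sketch: extend a Deep Learning Container base image
# with local files and packages (image tag and paths are assumptions).
FROM gcr.io/deeplearning-platform-release/tf-cpu

# Install additional local Python dependencies
COPY requirements.txt .
RUN pip install -r requirements.txt

# Bundle local notebooks into the custom container image
COPY notebooks/ /home/jupyter/
```

The resulting custom image could then be pushed to a container registry and referenced from a Cloud AI Platform Notebooks instance or a Google Kubernetes Engine deployment.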

Google promotes its Kubernetes Engine as a way to accelerate the deployment and updating of applications and services by provisioning cloud resources based on a user’s computing, memory and storage requirements.
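The resource-based provisioning described above is expressed in Kubernetes through resource requests and limits in a pod spec. A minimal sketch, assuming a hypothetical image name and illustrative resource figures, might look like:

```yaml
# Hypothetical sketch: a GKE pod requesting specific compute and memory.
# The image name and resource figures are illustrative assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: dl-training
spec:
  containers:
  - name: trainer
    image: gcr.io/my-project/my-dl-container
    resources:
      requests:
        cpu: "4"        # four vCPUs
        memory: 16Gi    # 16 GiB of RAM
      limits:
        nvidia.com/gpu: 1  # one attached GPU
```

The Kubernetes scheduler uses these requests to place the pod on a node with sufficient capacity, which is the mechanism behind the resource-driven provisioning the article mentions.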