The Rise Of Artificial Intelligence As A Service In The Public Cloud

Posted on: Feb 22, 2018

The first wave of cloud computing was driven by platforms: Google App Engine, Engine Yard, Heroku, and Azure delivered Platform as a Service (PaaS) to developers. The next big thing in the cloud was Infrastructure as a Service (IaaS), where customers could provision virtual machines and storage on their own. The third wave of cloud was centered on data: from relational databases to big data to graph databases, cloud providers offered a wide range of data platform services. Whether it is AWS, Azure, or GCP, compute, storage, and databases are the cash cows of the public cloud.

The next wave driving the growth of the public cloud is artificial intelligence. Cloud providers are gearing up to offer a comprehensive stack that delivers AI as a Service.

Here is a closer look at the AI stack in the public cloud.

AI Infrastructure

The two critical pillars of artificial intelligence and machine learning are data and compute.

Machine learning models are generated when massive amounts of data are fed to statistical algorithms. These models learn patterns from existing data; the more data available, the better the accuracy of the predictions. For example, tens of thousands of radiology reports are used to train deep learning networks, which evolve into models that detect tumors. Irrespective of the industry vertical or business problem, machine learning needs data, which can come in multiple forms: relational databases, unstructured data stored as binary objects, annotations stored in NoSQL databases, and raw data ingested into data lakes all act as input to machine learning models.
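The data-to-model flow described above can be illustrated with a minimal sketch in Python using scikit-learn. The file name and column names ("training_records.csv", "feature_1", "feature_2", "label") are hypothetical placeholders; in the cloud, the same records might come from a managed database, object storage, or a data lake.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Load existing labeled data; in practice this could be exported from a
# relational database, a NoSQL store, or a data lake in the cloud.
df = pd.read_csv("training_records.csv")
X = df[["feature_1", "feature_2"]]
y = df["label"]

# Hold out part of the data to estimate how well the model predicts.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Apply a statistical algorithm to the data to produce a trained model.
model = LogisticRegression()
model.fit(X_train, y_train)

# More (and more representative) data generally improves this figure.
print("Accuracy:", accuracy_score(y_test, model.predict(X_test)))
```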

Deep learning and neural networks, advanced techniques of machine learning, perform complex computations that demand a combination of CPUs and GPUs. The graphics processing unit (GPU) complements the CPU by performing these calculations much faster, although GPUs are currently more expensive than CPUs. Cloud providers offer clusters of GPU-backed VMs and containers on a pay-per-use model. With the data already residing in the cloud, it makes perfect sense to use that compute infrastructure to train machine learning models.
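A minimal sketch, using PyTorch, of why GPU-backed cloud instances matter: the same training code runs on a CPU or a GPU, but the large matrix operations at the heart of neural networks run far faster when a GPU is exposed to the VM or container. The network shape and tensor sizes here are arbitrary, chosen only for illustration.

```python
import torch

# Use a GPU if the VM (or container) exposes one, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print("Training on:", device)

# A tiny feed-forward network and a batch of random data, moved to the device.
model = torch.nn.Sequential(
    torch.nn.Linear(1024, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 10),
).to(device)
inputs = torch.randn(256, 1024, device=device)
targets = torch.randint(0, 10, (256,), device=device)

# One training step: forward pass, loss, backward pass, parameter update.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss = torch.nn.functional.cross_entropy(model(inputs), targets)
loss.backward()
optimizer.step()
print("Loss:", loss.item())
```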

Additional compute services such as batch processing, container orchestration, and serverless computing are used to parallelize and automate machine learning tasks. Apache Spark, the distributed data processing engine, is tightly integrated with machine learning, as the sketch below illustrates.
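A minimal sketch of Spark's built-in machine learning library (MLlib), showing how data processing and model training share one engine and parallelize across a cluster. The object-storage path and column names are assumptions for illustration only.

```python
from pyspark.sql import SparkSession
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.classification import LogisticRegression

spark = SparkSession.builder.appName("ai-as-a-service-sketch").getOrCreate()

# Load raw records, e.g. from cloud object storage or a data lake.
df = spark.read.csv(
    "s3://example-bucket/training_records.csv", header=True, inferSchema=True
)

# Assemble feature columns into the single vector column MLlib expects.
assembler = VectorAssembler(
    inputCols=["feature_1", "feature_2"], outputCol="features"
)
train_df = assembler.transform(df)

# Train a model; Spark distributes the work across the cluster's executors.
lr = LogisticRegression(featuresCol="features", labelCol="label")
model = lr.fit(train_df)
print("Model coefficients:", model.coefficients)
```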