Windows ML - Microsoft's Grand Plan To Bring AI Closer To Windows Developers

Posted on: Mar 17, 2018

Microsoft is leaving no stone unturned to democratize AI. The latest move from Redmond aims to enable almost every Windows developer to build AI-infused applications.

Microsoft has announced that the next major update to Windows will include the ability to run machine learning models natively with hardware acceleration. This feature instantly turns existing Windows devices, ranging from IoT edge devices to HoloLens to 2-in-1s and desktop PCs, into AI-enabled devices. Data scientists and developers creating AI models can take advantage of this capability to deploy intelligent applications to a large user base.

Machine learning models have two essential stages: training and inference. In the first stage, a model is trained to learn from the patterns found in existing datasets. Typically, training is done on clusters of high-end cloud servers powered by GPUs. Once the model is tested and evaluated, it can make predictions on unseen data, which is called inference. While training is often confined to the cloud, an inference model can run on a wide range of devices with sufficient horsepower.
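The split between the two stages can be sketched with a toy example. This is a minimal illustration in pure Python, using a 1-D least-squares linear fit; the function names (`train`, `infer`) are illustrative, not part of any framework mentioned in the article.

```python
# Toy illustration of the two ML stages: training learns parameters from
# known data; inference applies those parameters to unseen data.

def train(xs, ys):
    """Training: learn slope/intercept from known data (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept  # the "model" is just these learned parameters

def infer(model, x):
    """Inference: apply the trained parameters to an unseen input."""
    slope, intercept = model
    return slope * x + intercept

# Training phase -- typically done once, on powerful cloud hardware
model = train([1, 2, 3, 4], [2, 4, 6, 8])   # learns y = 2x

# Inference phase -- cheap, and can run on an edge device
print(infer(model, 10))   # → 20.0
```

The trained model is just a small set of numbers, which is why inference is so much lighter than training and can move to end-user devices.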

Developers can use Windows ML, a runtime that Microsoft is going to embed in Windows, to perform inference on-device. Inference works offline with no dependency on the cloud, making it ideal for IoT devices and portable PCs.

Machine learning models are typically developed using popular frameworks such as Apache MXNet, TensorFlow, Microsoft CNTK, Caffe and Torch. These frameworks are normally expected to be present at runtime for inference, and bundling multiple ML frameworks is not an ideal scenario for platform vendors. To address the interoperability issue, Microsoft, Facebook, and Amazon announced an ecosystem called Open Neural Network Exchange (ONNX). Subsequently, other companies such as AMD, ARM, Huawei, IBM, Intel and Qualcomm announced their support for ONNX.
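The core idea behind an exchange format like ONNX can be sketched as follows. Note this is a deliberately simplified stand-in: the JSON "format" and function names below are made up for illustration and are NOT the real ONNX schema (which is a protobuf-based graph format); the point is only that an exporter and a runtime need agree on a neutral description, not on a framework.

```python
# Illustrative sketch of the interchange idea: a model is exported once to
# a framework-neutral description, and any runtime that understands that
# description can execute it. The JSON schema here is a made-up stand-in.
import json

def export_model(weights, bias):
    """'Training framework' side: serialize a linear model to a neutral format."""
    return json.dumps({"op": "linear", "weights": weights, "bias": bias})

def run_inference(serialized, inputs):
    """'Runtime' side: understands only the neutral format, not the framework."""
    model = json.loads(serialized)
    assert model["op"] == "linear"
    return sum(w * x for w, x in zip(model["weights"], inputs)) + model["bias"]

portable = export_model([0.5, -1.0], 2.0)   # could come from any framework
print(run_inference(portable, [4.0, 1.0]))  # → 3.0
```

Because the runtime depends only on the shared format, the platform vendor ships one inference engine instead of bundling every training framework.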

Microsoft is embedding ONNX support in Windows through Windows ML. Since ONNX is framework-independent, developers can run any ONNX model for inference. This is a smart move to avoid a dependency on any single ML framework: ONNX insulates Microsoft from dealing with a variety of frameworks and their versions.