Management AI: GPU and FPGA, Why They Are Important for Artificial Intelligence

Posted on: Jun 15, 2018

In business software, the computer chip has been forgotten. It's a commodity buried deep beneath the business applications. Robotics has been more tightly tied to individual hardware devices, so manufacturing applications remain a bit more focused on hardware.

The current state of Artificial Intelligence (AI) in general, and Deep Learning (DL) in particular, is tying hardware to software more tightly than at any time since the 1970s. My last few "management AI" articles covered overfitting and bias, two key risks in a machine learning (ML) system. This column digs deeper to address a question many managers, especially business line managers, might have about the hardware acronyms constantly mentioned in the ML ecosystem: Graphics Processing Unit (GPU) and Field Programmable Gate Array (FPGA).

It helps to understand that the GPU is valuable because it accelerates the tensor (math) processing necessary for deep learning applications. The FPGA is of interest as a way to research new AI algorithms, train those systems, and begin to deploy the low volume, custom systems now being investigated in many industrial AI applications. While there are research discussions about the FPGA's ability to handle training, I see its early use living up to the F in its name: deployment in the field.
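To make the "tensor processing" point concrete, here is a minimal sketch (in plain Python, purely for illustration) of the core operation behind deep learning workloads: a matrix multiply. Each output element is an independent multiply-accumulate sum, which is exactly why a GPU, with thousands of parallel cores, can compute them simultaneously while a CPU works through them largely one at a time.

```python
def matmul(a, b):
    """Multiply matrix a (m x k) by matrix b (k x n), element by element."""
    m, k, n = len(a), len(b), len(b[0])
    out = [[0.0] * n for _ in range(m)]
    for i in range(m):          # each output row...
        for j in range(n):      # ...and column is independent,
            s = 0.0             # which is what lets a GPU compute
            for p in range(k):  # all of them in parallel
                s += a[i][p] * b[p][j]
            out[i][j] = s
    return out

# One dense layer of a neural network is essentially this operation
# applied to a batch of inputs and a weight matrix (illustrative values).
layer_input = [[1.0, 2.0]]             # 1 x 2 input
weights = [[0.5, -1.0], [0.25, 0.75]]  # 2 x 2 weight matrix
print(matmul(layer_input, weights))    # [[1.0, 0.5]]
```

A production ML system would never run this loop in Python; it would hand the same computation to a GPU library, where the three nested loops collapse into a single massively parallel kernel launch.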

For instance, training an inference engine (the heart of an ML "machine") can take gigabytes, even terabytes, of data. When running inference in a datacenter, the machine must handle an ever increasing number of concurrent user requests. In an edge application, whether resident in a drone inspecting a pipeline or in your smartphone, a device must be small yet still effective, and also adaptable. A CPU and a GPU are, simply put, two separate devices, while an FPGA can have different blocks doing different things and can potentially provide a robust system on a chip. Given all those varied demands, it's good to have an idea of the current state of system architectures that can support the different needs.

There are two key classes of chip designs driving current ML systems, the GPU and the FPGA. There are also hints of new technologies that might be game changers in the mid-term future (a few years out, at least). Let's take a look.

Graphics Processing Unit (GPU)

The chip that is the largest player in the ML world is the GPU, the graphics processing unit. How did something mainly created for making computer games look better on your computer monitor become so critical to machine learning? To understand that, we must bump back up to the software layer.