Artificial Intelligence Is Driving A Silicon Renaissance

Posted on: May 11, 2020

The semiconductor is the foundational technology of the digital age. It gave Silicon Valley its name. It sits at the heart of the computing revolution that has transformed every facet of society over the past half-century.

The pace of improvement in computing capabilities has been breathtaking and relentless since Intel introduced the world's first microprocessor in 1971. In line with Moore’s Law, computer chips today are many millions of times more powerful than they were fifty years ago.

Yet while processing power has skyrocketed over the decades, the basic architecture of the computer chip has until recently remained largely static. For the most part, innovation in silicon has entailed further miniaturizing transistors in order to squeeze more of them onto integrated circuits. Companies like Intel and AMD have thrived for decades by reliably improving CPU capabilities in a process that Clayton Christensen would identify as “sustaining innovation”.

Today, this is changing in dramatic fashion. AI has ushered in a new golden age of semiconductor innovation. The unique demands and limitless opportunities of machine learning have, for the first time in decades, spurred entrepreneurs to revisit and rethink even the most fundamental tenets of chip architecture.

Their goal is to design a new type of chip, purpose-built for AI, that will power the next generation of computing. It is one of the largest market opportunities in all of hardware today.

A New Computing Paradigm

For most of the history of computing, the prevailing chip architecture has been the CPU, or central processing unit. CPUs are ubiquitous today: they power your laptop, your mobile device, and most data centers.

The CPU’s basic architecture was conceived in 1945 by the legendary John von Neumann. Remarkably, its design has remained essentially unchanged since then: most computers produced today are still von Neumann machines.

The CPU’s dominance across use cases is a result of its flexibility: CPUs are general-purpose machines, capable of carrying out effectively any computation required by software. But while CPUs’ key advantage is versatility, today's leading AI techniques demand a very specific—and intensive—set of computations.

Deep learning entails the iterative execution of millions or billions of relatively simple multiplication and addition steps. Grounded in linear algebra, deep learning is fundamentally trial-and-error-based: parameters are tweaked, matrices are multiplied, and figures are summed over and over again across the neural network as the model gradually optimizes itself.
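To make that workload concrete, here is a minimal sketch in Python with NumPy of the multiply-and-add loop described above, using a toy one-layer linear model. The names, sizes, and learning rate are purely illustrative, not drawn from any particular system:

    import numpy as np

    # Toy illustration of deep learning's core loop: repeated matrix
    # multiplications and additions, with parameters nudged after each pass.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((256, 64))        # input batch
    y = rng.standard_normal((256, 1))         # targets
    W = rng.standard_normal((64, 1)) * 0.01   # parameters to be tweaked

    lr = 0.01
    for step in range(1000):
        pred = X @ W                  # multiply: matrix product
        err = pred - y                # add/subtract: error term
        grad = X.T @ err / len(X)     # another matrix product
        W -= lr * grad                # tweak parameters, then repeat

Real networks stack many such layers, but the arithmetic profile is the same: matrix products and sums, repeated millions or billions of times.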

This repetitive, computationally intensive workflow has a few important implications for hardware architecture. Parallelization—the ability for a processor to carry out many calculations at the same time, rather than one by one—becomes critical. Relatedly, because deep learning involves the continuous transformation of huge volumes of data, locating the chip's memory and computational core as close together as possible enables massive speed and efficiency gains by reducing data movement.
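A brief sketch of why this workload parallelizes so well: in a matrix product, every output cell depends only on one row of the first matrix and one column of the second, so all cells can be computed independently and simultaneously. Again in Python with NumPy; the helper function is purely illustrative:

    import numpy as np

    A = np.random.default_rng(1).standard_normal((512, 512))
    B = np.random.default_rng(2).standard_normal((512, 512))

    # Every output cell C[i, j] depends only on row i of A and column j of B,
    # so all 512 * 512 dot products are independent of one another.
    def matmul_one_cell(A, B, i, j):
        return float(A[i, :] @ B[:, j])

    # A parallel processor can compute many such cells at once; here NumPy
    # delegates the full product to a vectorized kernel.
    C = A @ B
    assert np.isclose(C[3, 7], matmul_one_cell(A, B, 3, 7))

A chip built for this pattern devotes its silicon to many simple arithmetic units working in parallel, and keeps the data they operate on close at hand, rather than optimizing for the one-step-at-a-time generality of a CPU.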