Google AI chief Dean sees evolution in MLPerf benchmark for machine learning

Posted on: Feb 25, 2020

Efforts to benchmark computer systems, known as MLPerf, are essential to measure the expanding world of artificial intelligence silicon, according to Jeff Dean, head of AI efforts at Google, but the benchmarks will also have to evolve to better reflect real-world concerns.

If computer systems are to evolve to handle ever larger machine learning models, a standard way to compare the effectiveness of those systems is essential, according to Google's head of AI, Jeff Dean. But that system of measurement must itself evolve over time, he said.

"I think the MLPerf benchmark suite is actually going to be very effective," said Dean, in an interview with ZDNet, last week, referring to the consortium of commercial and academic organizations known as MLPerf, founded within the last few years. The MLPerf group have formulated test suites that measure how different systems do on various AI tasks such as the number of image "convolutions" per second.

Google, along with Nvidia and others, regularly trumpets the performance of its latest computer systems on the tests, like students comparing grades.

Dean spoke to ZDNet from the sidelines of the International Solid-State Circuits Conference in San Francisco last week, where he was the keynote speaker. Among his topics was the emergence of new kinds of chips for AI. MLPerf, he told ZDNet, can help sort out the proliferation of chips that speed up certain aspects of machine learning.

"It'll be interesting to see which ones hold up, in terms of, are they generally useful for a lot of things, or are they very specialized and accelerate one kind of thing but don't do well on others," Dean said of the various chip efforts.

MLPerf, however, has its critics. Some people in the chip industry have called MLPerf biased in favor of large companies such as Google, claiming the large tech firms engineer machine learning results to look good on the benchmarks. That raises the question of whether benchmarks like MLPerf actually capture metrics that are relevant in the real world.

Something of that skeptical attitude was implicit in remarks last fall by AI startup Cerebras Systems in an interview with ZDNet. Cerebras, unlike some other AI chip startups, has declined to submit MLPerf results for its "CS-1" system, saying the tests are not relevant to actual workloads.