AI acceleration to take center stage at chip conference

LONDON — In a sign of the times, half of the talks at this year’s Hot Chips are focused on AI acceleration. The annual gathering for microprocessor designers once devoted most of its program to CPUs for PCs and servers.

Startups Cerebras, Habana, and UpMem will unveil new deep-learning processors. Cerebras will describe a much-anticipated device using wafer-scale integration. Habana, already shipping an inference chip, will show its follow-on for training.

Grenoble-based UpMem will disclose a new processor-in-memory, believed to be based on DRAM, that targets a range of applications. Graphcore was invited but was not ready to share more details of its chips.

The startups will compete with giants such as Intel, which will describe Spring Hill and Spring Crest, its Nervana-based inference and training chips. In a rare appearance, Alibaba will disclose an inference processor for embedded systems.

In addition, Huawei, MIPS, Nvidia, and Xilinx will provide new details on their existing deep-learning chips. Members of the MLPerf group are expected to describe their inference benchmark for data center and embedded systems, a follow-on to their training benchmark.

Organizers hope that a senior engineer from Huawei will be able to give a talk about its Ascend 310/910 AI chips. However, given that the company is in the crosshairs of the U.S./China trade war, it’s unclear whether the speaker will be able to get a visa or will face other obstacles.

Nvidia dominates the market for AI training chips with its V100. Given its market lead, it chose not to launch new silicon this year. Instead, it will describe a research effort on a multi-chip module for inference tasks that it says delivers 0.11 picojoules/operation across a throughput range of 0.32–128 tera-operations/second.
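
For context, an energy-per-operation figure converts directly to power draw at a given throughput: power equals energy per operation times operations per second. Below is a back-of-the-envelope sketch in Python that assumes, for illustration only, that the quoted 0.11 pJ/op holds across the entire stated throughput range; in practice, efficiency typically varies with the operating point.

```python
# Back-of-the-envelope: power implied by an energy-per-op figure.
# P (watts) = energy per operation (joules) * operations per second.
PJ_PER_OP = 0.11  # quoted efficiency, picojoules per operation

for tops in (0.32, 128.0):  # endpoints of the quoted range, tera-ops/s
    watts = (PJ_PER_OP * 1e-12) * (tops * 1e12)  # pJ -> J, TOPS -> ops/s
    print(f"{tops:7.2f} TOPS -> {watts:7.3f} W")
```

At the quoted efficiency, that works out to roughly 35 mW at the low end of the range and about 14 W at 128 tera-operations/second.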

As an added treat, the three top U.S. cloud-computing providers will host tutorials on their AI hardware. It’s rare for any of the trio to speak on the topic at events they do not host, let alone join one where their competitors are speaking.

Google will describe details of its liquid-cooled, third-generation TPU. A representative of Microsoft’s Azure will discuss its next-generation FPGA hardware. And a member of Amazon’s AWS will cover its I/O and system acceleration hardware.

In addition, a Facebook engineer will describe Zion, its multiprocessor system for training announced at the Open Compute Summit earlier this year. “Facebook and its Open Compute partners are more and more setting the standards for form factors and interconnect approaches for data center servers,” said a Hot Chips organizer.
