
The emerging AI SoC lineup

By Junko Yoshida, November 01, 2018

MADISON, Wis. — Beyond big guns like Google, Facebook, Amazon, and Baidu, all of which have been designing their own chips for deep learning (both for training and inference), we’re hearing — almost weekly — about “nouvelle” AI SoC architectures invented by startups that nobody’s ever heard of.

The flood of AI chip announcements prompted one veteran industry analyst, Kevin Krewell of Tirias Research, to remind us: “There’s a lot of claims and counter-claims in machine-learning processing, but it’s only working silicon and software that can tell us” their real capabilities.

Indeed, many of these products won’t reach market this year or even next. There is no way of knowing what’s real and what’s smoke and mirrors — until there’s an actual chip.

However, a recent interview with Kurt Shuler, vice president of marketing at Arteris Inc., reminded us that sometimes the answer to what’s really happening in a nascent, overhyped market like AI chips lies deeper in the food chain.

Who’s in play?
Arteris announced on Wednesday FlexNoC 4, the company’s new interconnect IP with an AI package. The new offerings are designed to accelerate development of next-generation deep neural network (DNN) and machine-learning systems, according to Shuler.

EE Times’ discussion with Shuler revealed that Arteris, armed with its network-on-chip (NoC) intellectual property, has found itself atop a perch where it can see who’s doing what in the global AI SoC design space.

At a time when AI chips designed for training are growing bigger and more complex than ever before, often integrated with massive parallel processors, “interconnect has become more important,” according to Shuler.

During the interview, he shared a list of chip companies currently working on AI SoCs with Arteris’ interconnect IP and tools.

While the table below includes many unnamed startups and incumbent systems vendors (including a Japanese camera OEM and some large system OEMs), it paints a clear picture of how several incumbent SoC companies are also progressing on AI chip designs.

(Table: chip companies developing AI SoCs with Arteris’ interconnect IP. Source: Arteris)

In the automotive field, the list includes well-known incumbents such as Mobileye, NXP, and Toshiba. For mobility, HiSilicon is prominently listed. In the category of machine learning for network and automation, Arteris’ clients include Movidius and Baidu.

Shuler also observed that there is “a bit of a gold rush going on in AI chip development in China.” These activities are heavily backed by the Chinese government. Intellifusion, Enflame (Suiyuan) Technology, Iluvatar Corex, Cambricon Technologies, and Canaan Creative are among the many Chinese companies with which Arteris is working on AI chips.

Who’s driving AI architecture?
AI SoCs are nothing like applications processors or IoT chips, whose architectures are already well-defined. Shuler said, “For example, apps processors are basically one architecture.” But with AI SoCs, he added, “Everyone is trying everything.”

No single SoC architecture rules the AI world yet. There is no single right way to design an AI SoC, either. That makes “flexibility” an important element in design, noted Shuler.

He said, “Those who are doing these AI chips are mostly software people.”

This leads the software people to suggest, “Hey, let’s take a look at this particular type of DNN. We are mathematicians, so we can figure out which part of it can be accelerated by hardware.” So everyone does just that, until someone asks, “Oh, what about pruning data? We should get rid of data we don’t need. Can we develop hardware for that so that we can get to the answer we need faster?” Sure, they can. Next, the same software types can’t help but ask, “How do we accelerate this convolution?” And so it goes, on and on.
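
To make the pruning step concrete, here is a minimal Python sketch of magnitude-based weight pruning, the “get rid of data we don’t need” idea described above. The function name and threshold scheme are illustrative assumptions for this article, not any vendor’s actual design.

```python
import numpy as np

# Illustrative sketch (not any vendor's design): magnitude-based weight
# pruning. Weights below a threshold are zeroed; hardware that can skip
# zeros then avoids fetching and multiplying them.

rng = np.random.default_rng(0)
weights = rng.normal(size=(256, 256)).astype(np.float32)

def prune_by_magnitude(w, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    threshold = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) >= threshold, w, 0.0)

pruned = prune_by_magnitude(weights, sparsity=0.9)
print(f"nonzero weights kept: {np.count_nonzero(pruned) / pruned.size:.1%}")
# A sparse-aware accelerator would now move and multiply only ~10% of the data.
```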

As a result, a lot of design teams gravitate toward individual processing elements — each with some math aspects, each with some local memory, explained Shuler. In the end, however, the real unsolved problem, he noted, is “data flow.”

While the processing elements must be able to communicate with one another, traffic between the processing elements and memory must also be managed. “Data flow is one problem they don’t really understand,” said Shuler. But they must be able to “keep flowing this data stuff in the most efficient way.” That’s where the interconnect IP and tools come in.
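
A rough way to see why data flow dominates is to count off-chip traffic for the same computation under different reuse schemes. The model below is our own back-of-envelope simplification, not Arteris’ tooling: a matrix multiply with and without tiling operands into a processing element’s local memory.

```python
# Back-of-envelope model (our own simplification, not Arteris' tooling):
# the same matrix multiply can demand very different off-chip traffic
# depending on how operands are tiled into local memory and reused there.

def dram_traffic_bytes(m, n, k, tile, bytes_per_elem=2):
    """Off-chip bytes moved for an (m x k) @ (k x n) multiply.

    No reuse: both operands are refetched for every multiply-accumulate.
    Square (tile x tile) blocking: each fetched element is reused `tile`
    times, cutting traffic by that factor. Output traffic is ignored.
    """
    macs = m * n * k
    no_reuse = 2 * macs * bytes_per_elem        # fetch A and B per MAC
    tiled = 2 * macs * bytes_per_elem / tile    # each fetch reused `tile` times
    return no_reuse, tiled

no_reuse, tiled = dram_traffic_bytes(m=1024, n=1024, k=1024, tile=64)
print(f"no reuse: {no_reuse / 1e9:.1f} GB, 64x64 tiles: {tiled / 1e9:.2f} GB")
```

Raising the tile size, meaning more local reuse, cuts off-chip traffic linearly; that trade-off is exactly what the on-chip interconnect and memory hierarchy must be designed around.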

Architectural issues
Arteris’ experience working with a broad range of system and SoC companies on AI chips has given it a clear view of the architectural issues with which AI chip designers are grappling.

There are three challenges, said Shuler. Speaking mainly of AI training chips, he cited network topologies, large chip sizes “as big as a door mat,” and huge bandwidth, including on-chip data flow and access to off-chip memory.
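
To put the bandwidth challenge in perspective, here is a hedged back-of-envelope calculation; the throughput, utilization, and reuse figures are our own assumptions, not numbers from Shuler.

```python
# Rough arithmetic (our own assumptions, not figures from the article):
# even modest utilization of a large training chip implies enormous
# off-chip memory bandwidth unless on-chip data reuse is high.

peak_tops = 100               # assumed peak throughput, tera-ops/s
utilization = 0.4             # assumed fraction of peak actually sustained
ops_per_byte = 50             # assumed ops per byte moved off-chip (reuse)

sustained_ops = peak_tops * 1e12 * utilization
required_bw = sustained_ops / ops_per_byte    # bytes/s
print(f"required off-chip bandwidth: {required_bw / 1e9:.0f} GB/s")
# ~800 GB/s under these assumptions -- HBM territory, which is why on-chip
# data flow (raising ops per byte) matters as much as raw compute.
```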

>> Continue reading page two of this article on our sister site, EE Times: "Who's Who in AI SoCs."

 

 
