While the tech industry continues to tout a “renaissance” of artificial intelligence, the number of AI chip startups has begun to plateau. AI startups are finding that the entry barriers to datacenters, once a promising market, are high — perhaps prohibitively so. The problem traces to hyperscalers such as Google, Amazon, and Facebook, which are now developing their own AI processors and accelerators tailored to their specific needs.
To be clear, machine learning (ML) continues to advance. More variations of neural networks are popping up. AI is becoming intrinsic to every electronics system.
Laurent Moll, chief operating officer at Arteris, predicts that in the future, “everyone has some kind of AI in their SoCs.” That is good news for Arteris, because its business is in helping companies (large and small, or new and old) integrate SoCs by providing network-on-chip (NoC) IP and IP development tools.
For AI chip startups? Not so much. The competition is getting tough, complicating the challenge of cracking market segments suited to a particular AI design.
Next month, EE Times will unveil its “Silicon 100” (2021 version), an annual list of emerging electronics and semiconductor startups. The author of the report, Peter Clarke, has been closely tracking semiconductor startups for two decades. He tells us that the number of specialized chip startups, focused on GPUs and AI, “is flat compared with the previous year.” He observes, “We sense that the industry may have reached the point of ‘peak AI.’”
In short, the salad days of AI chip startups might be over.
Kevin Krewell, principal analyst at Tirias Research, expects more acquisitions of AI chip startups. “After all, the explosion of AI startup funding happened after Intel bought Nervana. VCs and angels saw a possible lucrative exit strategy.” He added that there are “more [AI] startups today than the industry can support long term. I’m sure a few more will pop up with more exotic solutions involving analog or optical. [But] eventually, AI/ML functions will be subsumed into larger SoCs or into chiplet designs.”
Against this backdrop, EE Times recently sat down with Arteris’ newly appointed chief operating officer. Moll, who was once CTO of Arteris, spent more than seven years at Qualcomm, most recently as the mobile chip giant’s vice president of engineering.
We asked Moll about changes in the AI chip landscape and where the startups are going.
Unsurprisingly, Moll described the industry’s dash to AI as “one of the biggest gold rushes” he has ever seen. However, these latter-day 49ers are no longer just startups or smaller companies. The prospectors include companies who “have been making silicon for a long time, and a lot of new people who have not made silicon [before],” Moll said. Everybody is “playing in the same arena,” and everyone is “trying to crack the nut.”
The growing base of developers and diversifying applications play to Arteris’ advantage, but they paint a very different picture for AI chip startups. They no longer compete just with fellow AI startups with similarly bright new ideas; they are now also up against the big boys. Hyperscalers and car OEMs are muscling into AI development so that they can use their own chips for their systems.
Still in expansion phase
The AI chip market is “still in the expansion phase” with “everyone still exploring,” Arteris’ Moll observed. Nonetheless, he is seeing the emergence of “a little bit of order” on the datacenter front. This is largely because hyperscalers are taking control of their destiny by developing their own AI processors and accelerators.
The distinction between hyperscalers and other AI chip designers boils down to one factor. “They own data sets,” said Moll. Hyperscalers aren’t sharing data sets with others, but they are developing proprietary software stacks. “And they feel they can create silicon, much more optimized for their own data access.”
Meanwhile, external vendors — smaller AI chip startups — are “developing new methods of structuring SoCs, new ways of using SRAM and DRAM, stacking, using optical,” said Moll. “There are many ways of creating a secret sauce, enabling them to do AI a lot better than what off-the-shelf AI chips can do today. The smaller guys are changing the game, they are very smart about doing things differently from others.”
In contrast, AI chips pursued by hyperscalers are not so innovative. Hyperscalers can afford to use a more traditional approach, Moll observed. A good example is Google’s TPU. “If you look at it, the architecture is great, but it’s not revolutionary — in many ways.” Despite that, “It works extremely well for what Google wants to do. So, it serves their purpose.”
If smaller AI startups’ chips are so novel, shouldn’t they be worming their way into hyperscalers’ data centers?
“No, no, no,” Moll said. “It’s unlikely that any of the smaller guys expand in the datacenter market… or hyperscalers buy their products.” However, he noted that “hyperscalers will definitely buy some of these startups, once they’ve seen that their technologies are useful and applicable to what they want to do.”
Moll described the hyperscaler train of thought as: “I know what my dataset is. I know how to do a kind of more centered architecture. If somebody has a great idea that works well, let’s grab this set of people and the IP, and let’s improve our own product.”
Tirias Research’s Krewell agreed. “You have to do something spectacular to get hyperscalers to commit to using your machine learning chip.” Cerebras, for example, pushed the envelope with its wafer-sized chip, said Krewell. “Nvidia is still the default platform for AI development work because of its ubiquitous software and scalability.”
What about the edge?
For AI chip designers, “the edge is an entirely different story,” compared with datacenters, noted Moll. The edge spans diverse end markets demanding a much larger range of solutions. “A lot of people are still figuring out where to apply AI, and how to implement it,” said Moll.
19 percent of semiconductor total available market will be related to AI/ML in 2025. (Source: Bernstein; Cisco; Gartner; IC Insights; IHS Markit; Machina Research; McKinsey Analysis — Compiled by Arteris)
Tirias Research’s Krewell concurred. “Edge is still a relatively unexplored area. There are still opportunities to add ML to sensors and edge devices. Very low power analog and in-memory devices have promise, as well as accelerators in MCUs and App processors. I see a lot of potential for INT4 and INT2 inference in edge processors — good accuracy with much lower power and memory requirements.”
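To make the low-precision idea concrete, here is a minimal sketch (not from the article) of symmetric linear INT4 quantization, the kind of weight compression Krewell alludes to. The function names and the symmetric scheme are illustrative assumptions; production toolchains typically use more sophisticated per-channel or learned schemes.

```python
import numpy as np

def quantize_int4(weights: np.ndarray):
    """Symmetric linear quantization of float weights to the INT4 range [-8, 7].

    Returns the integer codes and the scale needed to dequantize.
    Illustrative sketch only -- real edge toolchains use per-channel scales,
    calibration data, and sometimes quantization-aware training.
    """
    scale = np.max(np.abs(weights)) / 7.0  # map the largest magnitude to +/-7
    q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    # Recover approximate float weights from the 4-bit codes.
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=256).astype(np.float32)
q, scale = quantize_int4(w)
w_hat = dequantize(q, scale)

# 4-bit codes cut weight storage 8x versus float32, at some accuracy cost.
print("max abs reconstruction error:", np.max(np.abs(w - w_hat)))
```

The appeal for edge devices is that the 8x smaller weights shrink both memory footprint and the energy spent moving data, which usually dominates power on small inference chips.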
While diverse applications sound exciting, they carry the danger of getting caught up in the edge AI hype cycle.
Edge AI became a buzzword not because the edge is a new market or denotes any specific product category. Rather, the lack of definition has turned “edge” into a catch-all with which startups can associate their products.
Among broad edge applications, Moll sees two diverging trends. One is an “AI inside a chip that does something else,” he noted. “That’s where the explosion is.”
This market for embedded systems is “where things like form factors, power and thermal really matter,” he added.
Another trend at the other end of the spectrum is “enormous chips that just do AI,” noted Moll. Applications for big chips at the edge, however, are still evolving.
The best example of “AI inside a chip” is probably applications processors for smartphones, which Moll knows well. AI accelerators have played a key role in voice recognition and vision processing. Today, AI has become a large part of the cell phone’s sales appeal. One result is that “incumbents in mobile [such as Qualcomm] have advantages,” Moll acknowledged.
AI in automotive
Moll sees AI in vehicles as an entirely different story.
He noted that there will be a spectrum of solutions, from AI-heavy computer vision chips to a big AI chip that does all the heavy processing. As vehicles advance from ADAS to autonomy, Moll expects bigger AI processors to play a critical role in the higher-end vehicle market.
While incumbents in automotive, often armed with their own small AI chips, have advantages in ADAS, there is ample room for AI chip startups in the autonomy market with fairly large AI chips.
But here’s the twist.
Car OEMs — mimicking hyperscalers — are also going vertical. Tesla has already designed its own chip, called a “Full Self-Driving” computer. A few weeks ago, Volkswagen CEO Herbert Diess told a German newspaper that the company plans to design and develop its own high-powered chips for autonomous vehicles, along with the required software.
Moll confirmed that carmakers “are all looking at this very carefully.” Even though Arteris is an IP company, “We get calls from car OEMs because they want to understand the whole stack, and they want to be in control” of “the big pile of silicon” that’s about to come in and alter the vehicle’s architecture.
AI chip startups such as Recogni, Blaize and Mythic list automotive as an edge AI market segment they are targeting. How automakers will eventually implement such chips in a vehicle remains to be seen.
Krewell stressed, “Automotive platforms are still evolving. Distributed functionality has the advantages of modularity and reduced risk, but it’s more expensive to build and maintain than a centralized processing complex.”
He added, “The other issue is data. Sensors will be sending lots of data. Having intelligence at the edge reduces the data transfers, but at the tradeoff of increased sensor lag and more distributed power in the chassis. Some balance of lightweight edge processing at the sensor could reduce the load on the central processor without adding excessive latency or requiring too much distributed power.”
AI battle shifts from chips to software
Krewell observed, “I see the focus of AI moving from chips to software. Deploying ML functionality requires good software. And to make ML accessible to more embedded design engineers and programmers requires making ML low-code. It also requires automating the creation of custom models for the specific application.”
Moll has reached a similar conclusion. Asked why he decided to come back to Arteris from Qualcomm, he cited two points.
First, Arteris used to play in a niche — “a narrow place between IP vendors.” But that niche has now become “one of the key spaces” where AI chip designers look for help to “assemble very large and complicated SoCs” by building a lot of networks on chips. That’s where a network-on-chip (NoC) from Arteris can come in to solve problems in a holistic fashion.
Second, Arteris IP acquired Magillem last year. Moll sees the “software layer” offered by Magillem as another key to creating a very large and complicated SoC. Having been responsible at Qualcomm for the team delivering top-level chips, “I’ve come to recognize the value of what Arteris offers as a user, not as a marketer.”
>> This article was originally published on our sister site, EE Times.
- Tools Move up the Value Chain to Take the Mystery Out of Vision AI
- Training AI models on the edge
- Applying machine learning in embedded systems
- AI at the edge: what to look for in 2021
- Microcontrollers take on growing role in edge AI
- Processor-in-memory chip speeds AI computations
- Edge AI challenges memory technology