What is the reality of the AI chip market today? In recent years, this market was hyped to the point where many of us became sceptical. Nvidia’s onslaught in the datacentre market, combined with the hype surrounding fully autonomous vehicles, made it seem like this new market would single-handedly solve all the semiconductor industry’s problems, including the slowing of Moore’s Law.
At a recent press event, Richard Kingston, Vice President of Market Intelligence, Investor and Public Relations at Ceva, pointed to Nvidia’s share price as a good indicator of where we are in the hype cycle. Nvidia, a leader in the AI silicon market, experienced a significant fall in price around the start of 2019. Kingston suggested this means we have exited the hype stage and moved on to a stage where the market begins to more accurately reflect reality.
Nvidia stock price, 2017-2019, showing the drop Ceva thinks is indicative of an inflection point (Image: Ceva)
“Now we are in 2019, the reality of what AI can do and where it will feature is much more realistic,” Kingston said.
Nowhere is this more true than in the automotive sector.
“People jumped in doing level 4 or level 5 autonomous platforms two years ago,” he said. “Now I think most of the automotive silicon vendors and manufacturers who are doing their own in-house designs today have realised that before you do 4 or 5, there’s a ton of money to be made at levels two and three.”
Level two corresponds to vehicles that can simultaneously control steering, acceleration and braking (like Tesla’s Autopilot feature), with the driver supervising at all times. Level three vehicles can drive themselves without the driver needing to pay attention, until the vehicle encounters a situation it cannot handle and the driver has to take over.
Technologies enabling these levels of autonomous driving are seeing a lot of VC funding today; Kingston says this may be because they represent a realistic target.
“VCs see how much money is to be made there and how much business is to be done,” he said. “Meanwhile, levels 4 and 5 are being pushed further and further out.”
What does this mean for AI chips?
If the hype really is over, what does this mean for the AI processor market?
“It sounds a bit dramatic, but the AI inference processor as a standalone unit is essentially dead,” Kingston said. “There is no longer a requirement for a standalone AI engine in [edge] devices that run any kind of AI application.”
Richard Kingston (Image: Ceva)
As an example, he cited image processing applications, where there is a lot of crossover between AI and more traditional computer vision workloads.
“The idea of having two separate architectures to deal with what’s essentially one workload, it doesn’t make a lot of sense when you’re trying to save the cost of components,” he said. “More and more, we’re finding that the idea of having the standalone AI engine and its own set of tools and compilers doesn’t make a lot of sense. It makes more sense to have a dedicated engine with AI capabilities targeted for the end markets that the customer is going after.”
This line of thinking produced NeuPro-S, an AI processor core for vision-based applications in the automotive and consumer arenas.
According to Kingston, most AI applications will benefit from heterogeneous computing, which is part of the reason standalone processors are not the way forward.
“A lot of semiconductor companies are not going down the full AI processor route. But they are developing blocks or engines for specific functions within AI,” Kingston said.
Ceva’s experience in the automotive industry is that semiconductor companies typically invest money in developing their own AI engines to handle their specific workloads. However, these engines are not a full processor product complete with a matching mature toolchain, and can’t be delivered to customers in that format.
Ceva’s AI processor core IP for acceleration of AI vision workloads, NeuPro-S (Image: Ceva)
“These AI custom engines are part of many companies today, much higher up in the food chain than we would normally deal with, but they need help from companies like Ceva to bring this stuff into production,” Kingston said. This is part of the reasoning behind Ceva’s CDNN-Invite compiler, which allows companies to add their own custom AI engines alongside NeuPro-S to make AI chips that rely on heterogeneous computing.
Whether or not Kingston is right about the AI chip market (or whether the death of the AI inference chip has been greatly overstated), today there are dozens of companies, from startups to the big names, designing and producing these AI inference co-processors. While integration is right for some end markets, it may not be right for all. And let’s not forget that while vision applications dominate the AI sector today, tomorrow’s workloads may look quite different, in terms of both size and shape. I therefore suspect the standalone inference co-processor is set to be with us for quite a while yet.
>> This article was originally published on our sister site, EE Times: “‘The AI Inference Processor is Dead’.”