Chip vendors sharpen focus on AI

SAN JOSE — It’s easy to list semiconductor companies working on some form of artificial intelligence — pretty much all of them are. The broad potential for machine learning is drawing nearly every chip vendor to explore the still-emerging technology, especially in inference processing at the edge of the network.

“It seems like every week, I run into a new company in this space, sometimes someone in China that I’ve never heard of,” said David Kanter, a microprocessor analyst at Real World Technologies.

Deep neural networks are essentially a new way of computing. Instead of writing a program that runs on a processor and spits out data, you stream data through a trained algorithmic model that filters it into results, a step called inference processing.
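
To make that inversion concrete, here is a minimal sketch of inference in Python. The two-layer network, its sizes, and its random weights are placeholders of my own, not any vendor's model; in a real deployment the weights would come from a training run. The point is simply that the "program" is fixed arithmetic over learned parameters, and only the data changes.

    import numpy as np

    # Stand-in weights for a tiny two-layer network (illustration only;
    # real weights are produced by training, not drawn at random).
    rng = np.random.default_rng(0)
    W1, b1 = rng.standard_normal((64, 16)), np.zeros(16)
    W2, b2 = rng.standard_normal((16, 10)), np.zeros(10)

    def infer(x):
        """Stream one input vector through the model and return class scores."""
        h = np.maximum(x @ W1 + b1, 0.0)  # ReLU hidden layer
        return h @ W2 + b2                # scores; argmax is the predicted class

    sample = rng.standard_normal(64)      # stand-in for sensor or image features
    print(infer(sample).argmax())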

The approach started getting attention after the 2012 ImageNet contest, when deep neural networks began delivering dramatically better results at identifying pictures than conventional algorithms, and within a few years better results than a human. Computer vision was the first field to feel a big boost.

Since then, web giants such as Amazon, Google, and Facebook have started applying deep learning to video, speech, and translation. Last year, more than 300 million smartphones shipped with some form of neural-networking capability; 800,000 AI accelerators will ship to data centers this year; and 700 million people now use a smart personal assistant such as an Amazon Echo or Apple’s Siri every day.

As many as 50 companies are already said to be selling or preparing some form of silicon AI accelerator. Some are IP blocks for SoCs, some are standalone chips, and a few are full systems.

They share a goal of being designed into everything from data center servers to smartphones, smart speakers, and a host of other products. In a sign of how pervasive the technology may become, researchers from Toshiba recently presented a paper on a neural-network accelerator block designed to be embedded in a sensor.

The core technology is still evolving. As many as 50 technical papers on AI are published daily, “and it’s going up — it couldn’t be a more exciting field,” said David Patterson, the veteran co-developer of RISC who worked on Google’s Tensor Processing Unit.

Analyst Kanter notes that machine learning involves new system architectures. For example, tomorrow’s surveillance cameras may put deep-learning accelerators next to their CMOS sensors and process raw data in the analog domain before it is converted to digital bits and sent to an image processor, he said.

“Theoretically, you have better information to work with. We know that neural networks often have non-intuitive algorithms, so we need to consider what digital processing may have thrown away that a neural net can see — that’s exciting.”
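
A rough illustration of Kanter's point, with assumed bit depths (a 12-bit sensor scaled to an 8-bit pipeline are typical figures I chose for the example, not numbers from the article): once the conventional pipeline quantizes the raw signal for the image processor, the fine distinctions a sensor-side accelerator could have exploited are gone.

    import numpy as np

    rng = np.random.default_rng(1)
    raw = rng.integers(0, 4096, size=100_000)    # 12-bit raw sensor codes

    # Conventional path: scale down to 8 bits for the image processor.
    pipeline = (raw >> 4).astype(np.uint8)       # drop the 4 least-significant bits

    # Any two raw codes differing only in those low bits are now identical,
    # so a downstream network can no longer tell them apart.
    print("distinct raw levels:     ", np.unique(raw).size)       # up to 4096
    print("distinct pipeline levels:", np.unique(pipeline).size)  # at most 256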

The Imec research institute is running its own experiment in machine-learning architectures, working on accelerators that use single-bit precision. The effort stakes out one extreme in an ongoing debate over how much numerical precision AI tasks actually need.
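
To show what single-bit precision buys, here is a minimal sketch of my own (not Imec's design, whose details the article does not describe) of the arithmetic such an accelerator reduces to: with weights and activations constrained to +1 or -1, a dot product collapses to an XNOR followed by a popcount.

    import numpy as np

    def binary_dot(a_bits, w_bits, n):
        """Dot product of two +/-1 vectors packed as bits: XNOR, then popcount.

        a_bits, w_bits: ints whose n low bits encode +1 as 1 and -1 as 0.
        Returns the same value as np.dot on the unpacked +/-1 vectors.
        """
        xnor = ~(a_bits ^ w_bits) & ((1 << n) - 1)  # 1 wherever the signs agree
        matches = bin(xnor).count("1")
        return 2 * matches - n                      # agreements minus disagreements

    # Check against the full-precision result on random +/-1 vectors.
    rng = np.random.default_rng(2)
    a = rng.choice([-1, 1], size=32)
    w = rng.choice([-1, 1], size=32)
    pack = lambda v: int("".join("1" if x > 0 else "0" for x in v), 2)
    assert binary_dot(pack(a), pack(w), 32) == int(np.dot(a, w))
    print(binary_dot(pack(a), pack(w), 32))

In hardware, that XNOR-and-popcount pair replaces a multiply-accumulate array, which is where the area and power savings come from; the open question in the debate is how much model accuracy is given up to get them.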

“The interesting thing is that there’s no consensus about where we do inference processing,” said Kanter. “In all likelihood, we will wind up with a combination of edge and data center inference processing as well as some vertical-specific locations like self-driving cars.”
