AI audio chips support speech interfaces

SAN JOSE, Calif. — Startup Syntiant announced a pair of low-power neural-network accelerators for audio tasks. The NDP100 and NDP101 will detect sound patterns at power levels below 200 µW, enabling speech interfaces on a wide range of devices.

The new chips represent something of a surprise because they use digital techniques. The startup debuted last year, describing an approach for processing deep-learning jobs in the analog domain using an array of hundreds of thousands of multiply-accumulate units linked to NOR cells. Rival Mythic is taking a similar analog approach, initially attacking imaging and video apps, an area that Syntiant said it will target in next-generation chips in 2020.

The processor-in-memory architecture has long been seen as an interesting but difficult-to-implement approach. Both startups will likely need to shift from 40-nm NOR to ReRAM or MRAM arrays to scale to 28-nm designs.

The new chips use SRAM caches in a digital MAC array. They save power, in part, by reducing data movement as much as possible and by processing at reduced precision, using 4-bit weights and 8-bit activations. The surprise release of digital parts suggests that the company realized both the value of pruning neural networks and the challenges of analog computing.
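The power savings from reduced precision come from doing the bulk of the arithmetic on narrow integers and rescaling only once per accumulated output. The sketch below is purely illustrative, not Syntiant's implementation; the array sizes, scale factors, and values are hypothetical, chosen only to show a 4-bit-weight / 8-bit-activation multiply-accumulate.

```python
# Illustrative sketch (not Syntiant's design): a quantized
# multiply-accumulate with 4-bit weights and 8-bit activations,
# the precision levels the article describes. Scales are hypothetical.
import numpy as np

def quantize(x, bits, scale):
    """Round a float array to signed integers of the given bit width."""
    qmin, qmax = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale), qmin, qmax).astype(np.int32)

rng = np.random.default_rng(0)
weights = rng.standard_normal(64)      # hypothetical layer weights
activations = rng.standard_normal(64)  # hypothetical input activations

w_q = quantize(weights, bits=4, scale=0.25)      # 4-bit: range [-8, 7]
a_q = quantize(activations, bits=8, scale=0.02)  # 8-bit: range [-128, 127]

# Integer MAC: products accumulate in a wide register; the result is
# rescaled back to real units once, after the whole dot product.
acc = int(np.dot(w_q, a_q))
result = acc * 0.25 * 0.02
```

Keeping the inner loop in low-precision integer arithmetic is what shrinks both the MAC logic and the data moved per operation, which is where the bulk of the claimed power savings would come from.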

Syntiant claims that its new digital chips are as much as a hundredfold more power-efficient than conventional CPUs and DSPs, but it provided no benchmark data. It said that it is still working on the analog chips, that the digital chips have design wins in hearing aids and mobile phones, and that sample chips have been tested in smart speakers, home automation devices, and laptops.

In a press statement, an executive from Motorola Solutions praised the digital chips for enabling public safety systems to power “a new breed of applications at the edge.” An Infineon executive praised the chips' handling of near- and far-field audio on a single IM69D130 microphone without DSPs or cloud connectivity.

Syntiant said that the digital chips will be used for keyword spotting, wake word detection, speaker identification, identifying audio events, and handling sensor analytics. The device supports 63 spoken words and can be programmed to recognize sounds such as breaking glass. It can also monitor gas sensor data or use passive infrared data to detect people.

The new chips use a digital block (in pink) that the startup plans to replace with an analog processor-in-memory in the future to save more power without changing the surrounding IP blocks (in blue). (Source: Syntiant)
