SRAM-based approach accelerates AI inference

SAN JOSE — In his spare time, an engineer at Tektronix sketched out a novel deep-learning accelerator, and now, his two-person startup is the latest example of the groundswell of enthusiasm that deep learning is generating.

Behdad Youssefi defined an SRAM with specialized cells that can handle the matrix multiplication, quantization, storage, and other jobs needed for an inference processor. After four years of solo work on the concept, originally planned as a Ph.D. thesis, he formed startup Areanna with a colleague at Tektronix and a Berkeley professor as an advisor.
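Areanna has not published circuit-level details of those cells. At a functional level, though, the work being folded into the memory array amounts to integer matrix multiplication followed by requantization of the results. The NumPy sketch below illustrates that arithmetic only; the layer shapes, bit widths, and scale factors are illustrative assumptions, not figures from the company.

```python
# Functional sketch (illustrative only) of the work an in-memory inference
# array subsumes: an INT8 matrix multiply with wide accumulation, followed
# by requantization back to 8 bits. Shapes and scales are hypothetical.
import numpy as np

def int8_layer(x_q, w_q, x_scale, w_scale, out_scale):
    """One fully connected layer in 8-bit integer arithmetic.

    x_q: int8 activations, shape (batch, in_features)
    w_q: int8 weights, shape (in_features, out_features)
    Products are accumulated in int32, then rescaled and clipped to int8.
    """
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32)    # multiply-accumulate
    real = acc * (x_scale * w_scale)                      # back to the real-valued domain
    return np.clip(np.round(real / out_scale), -128, 127).astype(np.int8)

# Toy example: a 784-input layer, as in the first layer of a digit classifier.
rng = np.random.default_rng(0)
x = rng.integers(-128, 128, size=(1, 784), dtype=np.int8)
w = rng.integers(-128, 128, size=(784, 128), dtype=np.int8)
y = int8_layer(x, w, x_scale=0.02, w_scale=0.005, out_scale=0.1)
print(y.shape, y.dtype)   # (1, 128) int8
```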

In SPICE simulations, the design delivers more than 100 tera-operations/second/watt (TOPS/W) when recognizing handwritten digits using 8-bit integer math. Youssefi claims that it could beat Google’s TPU in computational density by an order of magnitude.
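The efficiency claim translates directly into an energy budget per operation. The short calculation below makes that conversion explicit; it assumes each 8-bit operation is counted individually, since the article does not say how Areanna tallies operations (a multiply-accumulate counted as two ops would double the per-MAC figure).

```python
# Back-of-envelope: energy per operation implied by a 100-TOPS/W claim.
tops_per_watt = 100                      # claimed efficiency
ops_per_joule = tops_per_watt * 1e12     # 1 W sustained for 1 s = 1 J
energy_per_op_j = 1 / ops_per_joule
print(f"{energy_per_op_j * 1e15:.0f} fJ per operation")   # -> 10 fJ
```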

The design is the latest in a growing line of accelerators using a compute-in-memory approach. Startups Mythic and Syntiant are designing deep-learning processors using 40-nm NOR flash cells, targeting low-power chips for devices such as surveillance cameras.

Youssefi said that his design uses few analog circuits, so it could scale to fine process nodes. He believes that it could be the engine in low-power processors for everything from the edge to the cloud.


Areanna manipulates SRAM cells in unique ways to handle deep-learning tasks. (Source: Areanna)

The Areanna design lets users set custom parameters for everything from weights to neural-network layers and even individual neurons. That flexibility could enable future designs for training processors. However, no software stack yet exists to program the design, a gap that Youssefi may address later or leave to future customers.
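Areanna has not described how that configurability would be exposed to software. Purely as a hypothetical illustration, a per-layer configuration record for such a device might look like the following; none of these field names or defaults come from the company.

```python
# Hypothetical illustration only: one way per-layer (and per-neuron)
# parameters might be expressed to a future software stack.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class LayerConfig:
    weight_bits: int = 8                        # precision of stored weights
    activation_bits: int = 8                    # precision of activations
    neurons: int = 128                          # output width of the layer
    per_neuron_bits: Optional[List[int]] = None # optional per-neuron overrides

    def bits_for_neuron(self, i: int) -> int:
        """Return the weight precision used for neuron i."""
        if self.per_neuron_bits is not None:
            return self.per_neuron_bits[i]
        return self.weight_bits

# Example: a small layer where two neurons keep higher precision than the rest.
cfg = LayerConfig(neurons=4, per_neuron_bits=[8, 8, 4, 4])
print([cfg.bits_for_neuron(i) for i in range(cfg.neurons)])  # [8, 8, 4, 4]
```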

It’s still early days for the startup. Youssefi and partner Patrick Satarzadeh, a senior IC designer at Tektronix, have shown the design to a handful of VCs and established companies. Based on the feedback they’ve received, they hope to define and tape out a test chip early next year. The duo may license the design as IP as a first step, generating revenue to fund their own ASIC designs.

Youssefi has two patents pending on his design. Separately, he is listed on four patents from earlier work during more than a decade as an analog designer at Tektronix, Qualcomm, and SanDisk.

Areanna is the latest example of the explosion of new architectures spawned by the rise of deep learning. The debut of a whole new approach to computing has fired the imaginations of engineers around the industry hoping to be the next Hewlett and Packard.

Dozens of startups have gotten funding for AI accelerators with a wide variety of targets. So far, only one — Graphcore — claims that it has shipped silicon, and none have reported results on the newly minted MLPerf benchmarks.

>> This article was originally published on our sister site, EE Times: “Startup Runs AI in Novel SRAM.”
