Startup looking to build better processors for autonomous vehicles
MADISON, Wis. — There are way too many AI stories out there.
In the past few years, barely a month (if not a week) has passed without a stop-the-presses alert about yet another incumbent or startup’s brand-new AI processor.
To bring order to the parade, my colleague Rick Merritt has put together a reference entitled “Embedded AI: A Designer’s Guide.” It offers a veteran reporter’s overview of what’s out there and what all of those processors are supposed to do.
The questions linger, however: Why so many AI processors? Which specific problems is each AI chip designed to solve? More importantly, which pieces are still missing from today’s AI puzzle?
Kevin Krewell, principal analyst at Tirias Research, stated simply that AI is “bringing a new paradigm and changing a whole computer system.” Incumbents and startups alike are scrambling for a place in the still-chaotic AI-ready computing field.
One startup that we’ve recently become acquainted with is the Israeli company Hailo. A designer of a proprietary chip for “deep learning on edge devices,” it announced this month the completion of a $12.5 million Series A round. With Hailo’s stated goal “to bring intelligence to any product,” CEO Orr Danon is calling for “a complete redesign of the pillars of computer architecture — memory, control, and compute — and the relations between them.”
That’s a laudable goal. Hailo, however, is not ready to disclose its architecture — “maybe later this year,” according to Danon — and will not launch its first AI processor until the first half of 2019.
Danon contends that none of the autonomous vehicle (AV) vendors today can find — among myriad new-generation AI processors — what they need for their AVs.
Automotive is an immediate target market that Hailo plans to address with its new AI processor. Danon noted that today’s test AVs are literally running on public roads with a data center in the trunk. To close the huge divide between these test vehicles and AVs built for mass deployment, Tier Ones and car OEMs need a new AI processor that can run the same deep-learning tasks much more efficiently, he said.
Based on publicly available information, Hailo created a spreadsheet listing deep-learning TMACs per watt for each AI processor. The company shared it with EE Times to make the case for how far the AV industry still is from the power-efficient AI processor it needs to drive highly automated vehicles.
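TMACs per watt is simply deep-learning throughput divided by power draw. The sketch below illustrates the metric with hypothetical placeholder figures — these are not numbers from Hailo’s spreadsheet:

```python
# Illustrative sketch of the TMACs/W efficiency metric.
# All processor names and figures below are hypothetical placeholders,
# not data from Hailo's actual spreadsheet.

def tmacs_per_watt(tera_macs_per_second: float, watts: float) -> float:
    """Tera multiply-accumulate operations per second, per watt of power."""
    return tera_macs_per_second / watts

# (label, TMAC/s, power in watts) -- invented for illustration
processors = [
    ("data-center GPU", 60.0, 300.0),  # high throughput, high power
    ("embedded SoC",     2.0,   5.0),  # modest throughput, low power
]

for name, tmacs, watts in processors:
    print(f"{name}: {tmacs_per_watt(tmacs, watts):.2f} TMACs/W")
```

Note that on these invented numbers the low-power part comes out twice as efficient per watt despite far lower raw throughput — which is the comparison the spreadsheet is meant to surface.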
Although EE Times can’t yet report much on Hailo’s new AI processor, we grilled Danon about the larger AI problems that the industry must solve and how the industry is proceeding.
EE Times also talked to several industry analysts and other AI startup executives, asking them to outline what they view as AI’s big roadblocks.
1. Modern CPU architecture won’t cut it for AI
Industry observers agree almost unanimously that current CPUs based on the von Neumann architecture can’t efficiently cope with today’s AI processing.
Linley Gwennap, principal analyst at The Linley Group, told us, “Von Neumann is not a good fit for AI.” He explained that each computation has to fetch and decode an instruction as well as collect and store data to the register file. “To improve compute per watt, you need to do more computing and less fetching.”
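Gwennap’s point can be made concrete with back-of-envelope arithmetic. The per-operation energy figures below are assumed placeholders, not measured values, but the shape of the argument holds: when instruction fetch/decode and register-file traffic cost as much as or more than the arithmetic itself, compute per watt suffers.

```python
# Back-of-envelope sketch (hypothetical energy figures, in picojoules)
# of why per-instruction overhead hurts compute-per-watt.

mac_energy_pj = 1.0     # the useful multiply-accumulate (assumed)
fetch_decode_pj = 3.0   # instruction fetch + decode (assumed)
regfile_pj = 1.0        # register-file read/write traffic (assumed)

total = mac_energy_pj + fetch_decode_pj + regfile_pj
useful_fraction = mac_energy_pj / total
print(f"useful work: {useful_fraction:.0%} of energy spent")  # 20%
```

On these assumptions, only a fifth of the energy does useful math — hence Gwennap’s “more computing and less fetching.”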
Tirias Research’s Krewell agreed. “Von Neumann architectures are great for control and sequential calculations: ‘If-Then-Else’ operations.” In contrast, “Neural nets (NNs), just like graphics, are highly parallel and [rely on] memory-bandwidth processes. It’s expensive (power and cost) to try to scale NNs with CPUs.”
Hailo’s Danon said, “Although the von Neumann approach and modern CPUs in general are very flexible, there are many cases where this flexibility is not necessary.” This applies to neural networks and other operations when, for example, behavior is predetermined for many cycles down the line. In such cases, a more efficient way to design the system is to “avoid the need to read an instruction to guide the behavior of the system each and every cycle,” he noted. “And it’s important to maintain the flexibility to change the element’s behavior each cycle.”
In Danon’s opinion, “Neural networks take this notion to the extreme. The ‘structure’ — the manuscript that determines connectivity between compute elements — determines the behavior (aka ‘computational graph’) for the whole session.” In short, what the AI community needs is not a von-Neumann-based processor but a “domain-specific processor that’s good at describing neural-network structure,” he concluded.
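Danon’s point — that a network’s structure fixes the computation for an entire session — can be sketched as a tiny dataflow interpreter: the graph is declared once, then executed on every input with no per-step instruction decoding. This is a toy illustration of the general idea, not a description of Hailo’s undisclosed architecture:

```python
# Toy computational graph: the structure is declared once, then reused
# for every input -- behavior is fixed by connectivity, not by a stream
# of fetched instructions. Purely illustrative.

# Each node: (operation name, list of input node ids). Node 0 is the input.
GRAPH = {
    1: ("double", [0]),   # 2 * x
    2: ("add", [0, 1]),   # x + 2x = 3x
}

OPS = {
    "double": lambda a: 2 * a,
    "add": lambda a, b: a + b,
}

def run(graph, x):
    """Execute the fixed graph on input x, visiting nodes in id order."""
    values = {0: x}
    for node_id in sorted(graph):
        op, inputs = graph[node_id]
        values[node_id] = OPS[op](*(values[i] for i in inputs))
    return values[max(graph)]

print(run(GRAPH, 5))  # 3 * 5 = 15
```

The same `GRAPH` serves every input for the whole session — which is the sense in which the structure, rather than a per-cycle instruction stream, determines the behavior.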
>> Continue to page 2 on our sister site, EE Times: "Hailo hunts missing AI puzzle pieces."