SAN JOSE, Calif. — A startup that designed a lighting system for smart homes called for cheaper SoCs and DRAMs to bring machine learning to the Internet of Things. Noon Home is not using neural networks yet, but it wants to, and it needs lower-cost chips to do it.
To serve consumer markets, “we look to single-digit component costs … [and] a lot of AI apps require a lot of memory bandwidth,” said Saket Vora, a former engineering manager on the first Apple Watch and now head of hardware engineering at Noon Home.
The 55-person startup uses traditional analytics in its cloud service today, while its data science team explores the possibilities for AI. Ideally, its future lighting controllers could track movements of multiple people throughout a home using programmable inference engines in an SoC costing $5 or less. Today, such chips cost $12 to $15, said Vora.
DRAM costs and availability present equally big barriers. An SoC running AI apps needs an estimated 512 Mbytes of memory, which may cost $10 to $12 for the DRAM alone. That’s twice the price of a processor to run embedded Linux, and “the DRAM supply is tight and pricing is volatile,” he said.
The problem is that the high-volume smartphone and server markets drive memory architectures, prices, and supplies. Meanwhile, trailing-edge components favored in IoT designs such as “LPDDR2 are going away and getting into last-time buys,” he said.
The semiconductor industry needs to listen to calls from system architects and even end users if it is going to succeed in IoT, said Maciej Kranz, an author of a popular book on IoT and a vice president of strategic innovation at Cisco.
“Traditionally, chipmakers work with system vendors to get their requirements, but in IoT, this is not a successful strategy,” he added. “You have to talk to end customers, too … you need to standardize on a few use cases and connect the dots [among user requirements] to find repeatable patterns.”