Neural networks—artificial intelligence processing systems inspired by the human brain—are a hot topic in technology, as large companies like Facebook, Google and Microsoft are developing them and putting them into use.
Most neural network technology in place today runs on graphics processing units (GPUs) from Nvidia Corp. and others. EDA and intellectual property vendor Cadence Design Systems Inc. stepped into the fray on Monday (May 2), rolling out a new version of its Tensilica Vision processing core optimized specifically for vision/deep learning applications.
“Everybody is spending a lot of time developing a lot of research and producing a lot of technology,” said Pulin Desai, director of product marketing for Cadence’s Imaging/Vision Group, in an interview with EE Times. “The market is very hot. Maybe it’s hot because everything is being run on GPUs.”
Cadence (San Jose, Calif.) maintains that neural networks deployed in embedded systems built on programmable digital signal processor (DSP)-based SoCs offer advantages in power consumption, performance and time to market compared with networks built on CPUs or on CPU/GPU combinations.
“Our value add is always going to be in the low-power, high-performance and highly energy efficient product,” Desai said. “Most of the GPU type of products were developed to do classical 3D image processing and they were developed to do a lot of activities in floating point and to do multiple threads at the same time. At the end of the day they are not power efficient based on a lot of these reasons.”
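The floating-point-versus-fixed-point distinction Desai alludes to can be illustrated with a toy sketch (the code below is my own illustration, not Cadence's, and the sizes and names are arbitrary): the multiply-accumulate work at the heart of a neural network layer, computed once in 32-bit floating point and once in the 8-bit integer (quantized) form that embedded vision DSPs typically favor for energy efficiency.

```python
# Illustrative sketch (not Cadence code): the same dot product, the core
# operation of a neural network layer, in float32 and in 8-bit fixed point.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 64).astype(np.float32)   # input activations
w = rng.uniform(-1, 1, 64).astype(np.float32)   # filter weights

# Floating-point reference (GPU-style math)
y_float = float(np.dot(x, w))

# Quantize both operands to int8 with a simple symmetric scale
scale = 127.0
xq = np.round(x * scale).astype(np.int8)
wq = np.round(w * scale).astype(np.int8)

# Integer multiply-accumulate; widen to int32 so the sum cannot overflow
acc = int(np.dot(xq.astype(np.int32), wq.astype(np.int32)))

# Rescale the integer accumulator back to the floating-point domain
y_fixed = acc / (scale * scale)

print(y_float, y_fixed)  # the two results agree closely
```

The point of the sketch is that the quantized result tracks the floating-point one closely while using only narrow integer hardware, which is one reason DSP vendors argue fixed-point pipelines can deliver comparable inference accuracy at lower power.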
The Cadence Tensilica Vision P6 DSP features new instructions, increased math throughput and other enhancements that boost its performance up to fourfold compared with the Vision P5 DSP, which was rolled out in September. Compared with commercially available GPUs, the Vision P6 DSP can achieve twice the frame rate at lower power consumption on a typical neural network implementation, Cadence said.