CEVA announced NeuPro-S, its second-generation AI processor architecture for deep neural network inference at the edge. In conjunction with NeuPro-S, CEVA also introduced the CDNN-Invite API, an industry-first deep neural network compiler technology that supports heterogeneous co-processing of NeuPro-S cores together with custom neural network engines, under a unified, neural-network-optimizing run-time firmware. NeuPro-S, along with the CDNN-Invite API, is well suited to any vision-based device that requires edge AI processing, including autonomous cars, smartphones, surveillance cameras and consumer cameras, as well as emerging use cases in AR/VR headsets, robots and industrial applications.
Designed to optimally process neural networks for segmentation, detection and classification of objects within videos and images in edge devices, NeuPro-S includes system-aware enhancements that deliver significant performance improvements. These include support for multi-level memory systems to reduce costly transfers to and from external SDRAM, multiple weight compression options, and heterogeneous scalability that enables various combinations of CEVA-XM6 vision DSPs, NeuPro-S cores and custom AI engines in a single, unified architecture. These enhancements enable NeuPro-S to achieve, on average, 50% higher performance, 40% lower memory bandwidth and 30% lower power consumption than CEVA's first-generation AI processor.
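The announcement does not detail CEVA's compression schemes, but the bandwidth saving from weight compression can be illustrated with one common technique: symmetric 8-bit quantization of 32-bit float weights. The sketch below is purely illustrative and is not CEVA's actual method; all function names are hypothetical.

```python
# Illustrative sketch only -- NOT CEVA's actual compression scheme.
# Symmetric linear quantization maps 32-bit float weights to signed
# 8-bit codes, cutting weight storage and the bytes moved from
# external SDRAM by 4x.

def quantize_weights(weights, num_bits=8):
    """Map float weights to signed integer codes of `num_bits`,
    returning the codes and the scale needed to restore them."""
    qmax = 2 ** (num_bits - 1) - 1            # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax or 1.0
    codes = [round(w / scale) for w in weights]
    return codes, scale

def dequantize_weights(codes, scale):
    """Approximate reconstruction of the original float weights."""
    return [c * scale for c in codes]

weights = [0.52, -1.27, 0.03, 0.91, -0.64]
codes, scale = quantize_weights(weights)
restored = dequantize_weights(codes, scale)

# 32-bit floats -> 8-bit codes: 4x less weight traffic per layer
compression_ratio = 32 / 8
```

In practice, inference engines combine such quantization with entropy coding or sparsity-aware formats, which is presumably what "multiple weight compression options" refers to.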
The NeuPro-S family includes the NPS1000, NPS2000 and NPS4000, pre-configured processors delivering 1000, 2000 and 4000 8-bit MACs per cycle, respectively. The NPS4000 offers the highest CNN performance per core, with up to 12.5 Tera Operations Per Second (TOPS) at 1.5GHz, and is fully scalable to reach up to 100 TOPS.
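The quoted TOPS figures follow from a standard back-of-the-envelope calculation: each MAC unit performs two operations per cycle (a multiply and an accumulate), so peak throughput is MACs/cycle × 2 × clock frequency. Note that the nominal 4000-MAC figure yields 12.0 TOPS at 1.5GHz; the quoted 12.5 TOPS suggests the actual MAC array or clock is slightly above these round numbers.

```python
# Peak-throughput estimate from MAC count and clock frequency.
# Each MAC contributes 2 operations (multiply + accumulate) per cycle.

def peak_tops(macs_per_cycle, clock_hz):
    """Peak throughput in Tera Operations Per Second (TOPS)."""
    return macs_per_cycle * 2 * clock_hz / 1e12

# NeuPro-S family at the quoted 1.5 GHz clock
for name, macs in [("NPS1000", 1000), ("NPS2000", 2000), ("NPS4000", 4000)]:
    print(f"{name}: {peak_tops(macs, 1.5e9):.1f} TOPS")
# NPS4000 works out to 12.0 TOPS by this nominal calculation
```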
NeuPro-S further builds upon CEVA's success in automotive use cases by providing solutions that meet industry requirements, including the IATF 16949 quality management standard, the ISO 26262 functional safety standard and Automotive SPICE (ASPICE).