Neural-network accelerator adds programmable extensibility
LONDON — Imagination Technologies has launched an update to its neural-network accelerator (NNA) with the new PowerVR Series3NX architecture, which can deliver over 160 tera operations per second (TOPS) in multicore designs and provides programmable extensibility.
The new PowerVR Series3NX features architectural improvements over the previous-generation PowerVR Series2NX, including lossless weight compression, security enhancements, and multicore support. In addition, the PowerVR Series3NX-F (flexible) IP configuration provides programmable extensibility via a new compute SDK, enabling additional functionality and flexibility. Using this capability, customers can differentiate and add value to their offerings through the OpenCL framework.
A single Series3NX core scales from 0.6 to 10 TOPS, while multicore implementations can scale beyond 160 TOPS. The new NNA delivers a 40% performance boost in the same silicon area as the previous generation, giving SoC manufacturers nearly a 60% improvement in performance efficiency and a 35% reduction in bandwidth.
The architecture enables SoC manufacturers to optimize compute power and performance across a range of embedded markets, including automotive, mobile, smart surveillance, and IoT edge devices. Imagination hopes that this flexibility and scalability, combined with a near-doubling in top-line performance, will further drive mass AI adoption in embedded devices.
>> Continue reading this article on our sister site, EE Times: "Neural Network Accelerator Boasts Programmable Extensibility."