Sorting out evolving AI requirements

The advent of artificial intelligence (AI) will require diverse new microelectronic solutions to meet the evolving demands of large-scale data centers, "mid-size" systems like autonomous vehicles and robots, and a growing array of mobile devices, appliances, wearables, and as-yet-unenvisioned applications. Of central importance is the need to achieve unprecedented efficiency and speed in the collection and analysis of data, while also managing power consumption and form factor.

In the hardware domain, this will require innovative thinking and new paradigms in sensors, processors, memory, interconnection, and packaging. Promising options are beginning to materialize from established and emerging research efforts, which we will review in the context of Edge AI and other broad trends. Going forward, interdisciplinary pre-industrial collaboration will be needed to create practical, manufacturable solutions from these efforts.

We can envision the coming AI marketplace by comparing applications based on computing capability and power consumption requirements (Figure 1). Wearables have the greatest power restrictions and (in relative terms) lowest computing needs. Data centers are at the opposite end, with smart appliances, augmented reality, robots, and autonomous vehicles in between.

Figure 1. AI applications compared by computing capability and power-consumption requirements. (Source: Leti)

Edge AI, in which most data analysis takes place at the point of collection, is well-suited to applications on the left-hand side. While this is simple to describe, it requires unprecedented levels of sensor and processor capability in extremely small packages. Sensors will need to take inspiration from human eyes and ears, becoming far more adaptable by changing their characteristics (such as dynamic range) based on cognition and local intelligence.

Larger-scale applications, meanwhile, will strain traditional computing paradigms, particularly constant memory read/write cycles that consume both time and energy.

With these requirements in mind, Leti has prioritized research into smart sensors and innovative computing approaches.

One focus is a fundamental problem of modern computing: moving data between memory and processor now costs vastly more than computation itself, in both time and energy. Data transfer and memory access account for up to 90% of system energy usage, and because applications like artificial neural networks pair large datasets with relatively simple arithmetic, reducing data movement becomes critical.
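A back-of-envelope calculation makes the imbalance concrete. The per-operation energy figures below are illustrative approximations drawn from published processor-energy surveys (not Leti measurements), and the dot-product workload is a hypothetical stand-in for a neural-network inner loop:

```python
# Sketch of why data movement dominates energy. All energy figures are
# illustrative literature-style estimates (order-of-magnitude only),
# not measurements from any specific chip.
E_ADD_32  = 0.1    # pJ per 32-bit add (on-chip ALU)
E_MUL_32  = 3.1    # pJ per 32-bit multiply
E_SRAM_32 = 5.0    # pJ per 32-bit read from local SRAM
E_DRAM_32 = 640.0  # pJ per 32-bit read from off-chip DRAM

def dot_product_energy(n, fetch_pj):
    """Energy (pJ) for an n-element dot product:
    two operand fetches, one multiply, one add per element."""
    return n * (2 * fetch_pj + E_MUL_32 + E_ADD_32)

n = 1024
compute   = n * (E_MUL_32 + E_ADD_32)      # arithmetic alone
from_sram = dot_product_energy(n, E_SRAM_32)
from_dram = dot_product_energy(n, E_DRAM_32)

print(f"compute only:   {compute / 1e3:7.1f} nJ")
print(f"data from SRAM: {from_sram / 1e3:7.1f} nJ")
print(f"data from DRAM: {from_dram / 1e3:7.1f} nJ")
```

With operands streamed from DRAM, the arithmetic is a rounding error next to the fetches, which is why shortening or eliminating the memory-to-processor path pays off so disproportionately.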

Stacking memory onto processors, to shorten the physical links, is the subject of long-standing Leti research into 3D circuitry. We are now also pursuing new memory designs that allow addition, subtraction, and Boolean logic to be performed within SRAM. The area cost is negligible and, more importantly, data never leaves the memory. These in-memory-computing (IMC) processors have strong potential for applications like neural networks and cryptography, and we believe that by the 2020s they can provide 100 times the throughput of conventional processors on AI applications while maintaining the same frequency and energy budget.
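The idea can be sketched in software. In one IMC scheme described in the literature, activating two SRAM wordlines at once lets the bitlines sense bitwise combinations of both rows, from which richer operations are composed. The toy model below assumes that scheme plus an 8-bit word width and a carry-loop adder built from the bitwise primitives; it illustrates the principle, not Leti's actual circuit:

```python
# Toy model of in-memory computing (IMC) in SRAM. In the modeled
# scheme, reading two rows simultaneously yields AND on the bitline
# and NOR on its complement; OR, XOR, and addition are composed from
# those primitives without the data ever leaving the array.
WIDTH = 8                      # illustrative word width
MASK = (1 << WIDTH) - 1

class IMCArray:
    def __init__(self):
        self.rows = {}         # row address -> stored word

    def write(self, addr, value):
        self.rows[addr] = value & MASK

    def op_and(self, a, b):    # sensed directly on the bitline
        return self.rows[a] & self.rows[b]

    def op_nor(self, a, b):    # sensed on the complementary bitline
        return ~(self.rows[a] | self.rows[b]) & MASK

    def op_or(self, a, b):     # complement of NOR
        return ~self.op_nor(a, b) & MASK

    def op_xor(self, a, b):    # OR minus AND, bitwise
        return self.op_or(a, b) & ~self.op_and(a, b) & MASK

    def op_add(self, a, b):
        # Addition as repeated XOR (partial sum) and AND (carry),
        # iterated until the carry dies out.
        x, y = self.rows[a], self.rows[b]
        while y:
            carry = (x & y) << 1
            x = (x ^ y) & MASK
            y = carry & MASK
        return x

mem = IMCArray()
mem.write(0, 0b1100)   # 12
mem.write(1, 0b1010)   # 10
print(bin(mem.op_and(0, 1)))   # bitwise AND of the two rows
print(mem.op_add(0, 1))        # sum composed from in-array logic
```

In hardware, each of these steps happens at the sense amplifiers, so the per-operation energy is close to an ordinary read; that locality is what underpins the throughput projections above.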
