Automotive processor features integrated AI accelerator

TI has added a dedicated AI accelerator to one of its automotive SoCs for the first time, a move that illustrates the growing adoption of deep learning in automotive ADAS. The new deep learning block combines TI’s new C7x DSP IP with an in-house-developed matrix multiplication accelerator.

The TDA4VM, one of the first two SoCs launched in the Jacinto 7 series, combines sensor pre-processing and data analytics, and is designed to handle input from an 8-megapixel front-mounted camera system. Alternatively, the TDA4VM can handle four to six 3-megapixel cameras operating simultaneously, alongside inputs from radar, lidar and ultrasonic sensors. These cameras and sensors enable advanced driver assistance systems (ADAS) such as automated parking. Deep learning can be used to fuse data from different sensors or to enable techniques such as object detection.
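To make the sensor-fusion idea above concrete, here is a minimal, hypothetical sketch of confidence-weighted late fusion of object detections from two sensors (say, camera and radar). This is not TI's API or algorithm; every name and number is illustrative only.

```python
# Hypothetical sketch of confidence-weighted late fusion of detections
# from two sensors (e.g., camera and radar). Illustrative only, not TI's API.
from dataclasses import dataclass

@dataclass
class Detection:
    x: float           # object position along the road, in metres
    confidence: float  # detector confidence in [0, 1]

def fuse(camera: Detection, radar: Detection) -> Detection:
    """Merge two detections of the same object by weighting each
    sensor's position estimate with its confidence."""
    total = camera.confidence + radar.confidence
    x = (camera.x * camera.confidence + radar.x * radar.confidence) / total
    return Detection(x=x, confidence=max(camera.confidence, radar.confidence))

# Camera is more confident, so the fused position is pulled toward it.
fused = fuse(Detection(x=24.0, confidence=0.9), Detection(x=26.0, confidence=0.6))
print(round(fused.x, 2))  # → 24.8
```

Production fusion stacks use far richer models (Kalman filters, track association), but the principle of combining per-sensor estimates weighted by reliability is the same.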

The TDA4VM includes a deep learning accelerator for ADAS features based on analysis of camera, radar, lidar and ultrasound data (Image: TI)


At a TI press event in Munich, Germany, EETimes Europe spoke with Sameer Wasson, vice president and business unit manager of TI’s processor business, and Curt Moore, general manager and product line manager of TI’s Jacinto product line.

“This is the first SoC that has the C7x [DSP] on it,” said Moore. “We added instructions for vectors, which is for computer vision, but we also recognized that if you look at how DSPs have traditionally been used, a lot of that heritage is around things like communication infrastructure, [where the problem is] how you feed a massive amount of data into an SoC, or into a math engine, how you crunch it, and how you get it out. It’s very hard.”


Figure: TDA4VM Functional Diagram. (Source: Texas Instruments)

The new C7x DSP specialises in streaming large amounts of data and performing complex maths operations in demanding real-time environments. TI combined the DSP’s data streaming capability with a matrix multiplication accelerator to speed up deep learning workloads.
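Why does a matrix multiplication engine help deep learning in particular? Because the dominant layers in a neural network can be lowered to matrix multiplies: a 2-D convolution, for instance, becomes one big matmul via the classic "im2col" transform. The sketch below illustrates that lowering; it says nothing about TI's actual MMA implementation.

```python
# Illustrative only: a 2-D convolution lowered to a matrix multiply
# ("im2col"), the kind of workload a matrix-multiplication engine is built for.
import numpy as np

def conv2d_as_matmul(image, kernel):
    """Valid-mode 2-D cross-correlation via im2col + matmul."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    # Unroll each kernel-sized patch of the image into a row of a matrix...
    patches = np.array([image[i:i + kh, j:j + kw].ravel()
                        for i in range(oh) for j in range(ow)])
    # ...so the whole convolution collapses into one matrix-vector product.
    return (patches @ kernel.ravel()).reshape(oh, ow)

img = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((2, 2))  # 2x2 box-sum kernel
out = conv2d_as_matmul(img, k)
assert np.allclose(out, [[10., 14., 18.],
                         [26., 30., 34.],
                         [42., 46., 50.]])
```

The same lowering applies to fully connected layers (already matmuls) and, with minor variations, to depthwise and dilated convolutions, which is why one dense matmul block can accelerate most of a network.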

Sameer Wasson (Image: TI)

“We lovingly call it the MMA,” said Wasson. “There are different situations for how we can use it with our libraries… we have TIDL [Texas Instruments Deep Learning] which is a top layer that abstracts the complexities of the MMA, you can program it through that. But the beauty of it is how the C7x interacts with it, to be able to get the data in and out faster.”

The TDA4VM targets ADAS systems with power budgets between 5 W and 20 W. In practice, Wasson said, front camera systems typically come in below 7 W, but the same SoC also suits more complex systems, such as automated valet parking, which might come in closer to 20 W.

Part of TI’s pitch is that using a high-tech SoC like this one can actually reduce system cost for applications like front camera systems.

“If you have the right kind of deep learning, you may not need stereo cameras,” Wasson said. “You could do it with a lower end, cheaper lens. So for an OEM or a Tier 1, that is significantly lower cost, but you have the engine there which is [effectively] compensating for it, and giving you an upgrade in performance.”

Range of Compute

Curt Moore (Image: TI)

The deep learning engine in the TDA4VM is capable of 8 TOPS (tera operations per second). As the first part launched in the Jacinto 7 series, Moore said, it is intended to be a mid-range part in terms of computing power; future devices will come in both above and below it. Future parts with, say, 2 TOPS might be useful for less compute-intensive features such as driver monitoring or occupancy detection.
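A back-of-envelope calculation shows what an 8-TOPS budget buys. The network cost below is a made-up example, not a TI benchmark; the convention of counting one multiply-accumulate (MAC) as two operations is standard.

```python
# Back-of-envelope only: throughput implied by an 8-TOPS budget.
# The 5-GMAC network cost is hypothetical, not a TI figure.
peak_tops = 8e12             # deep learning engine peak, operations/second
macs_per_frame = 5e9         # hypothetical CNN: 5 GMACs per inference
ops_per_frame = 2 * macs_per_frame  # 1 MAC = 1 multiply + 1 add = 2 ops

peak_fps = peak_tops / ops_per_frame
print(peak_fps)        # → 800.0 inferences/s at (unrealistic) 100% utilisation

# Real pipelines never reach peak; at, say, 40% sustained utilisation:
print(0.4 * peak_fps)  # → 320.0
```

This is why a mid-range 8-TOPS part can serve several camera streams at once, while a hypothetical 2-TOPS sibling would still comfortably cover single-camera tasks like driver monitoring.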

“One of the beautiful things about the automotive market is that all of these use-cases co-exist,” Wasson said. “Even when an OEM comes out with a brand new, updated platform, in the same platform there are different car lines, and they all co-exist. The biggest challenge then becomes how do they become software compatible… if you make the most scalable platform and you scale the SoC with different use cases, now you’ve given them a canvas which they can go and express themselves on.”

Moore described the broad range of vehicles now expected to have ADAS features, from vehicles costing $10,000–12,000 right up to those costing $100,000 and beyond.

“Drivers in these vehicles have different expectations,” Moore said, pointing out that a $3,000 ADAS system in a $100,000 vehicle is a completely different proposition to placing the same $3,000 system in a car that will retail at $12,000.

“The other challenge these companies have is, if you think about even a big car company, their [development budget] might be $10 million a year,” Moore said. “They have to amortise that development cost over a relatively small number of vehicles as compared to a handset manufacturer, who builds a couple of models, and there’s tens of millions [of units shipped].”

Volume production of the TDA4VM is expected to begin in the second half of 2020. Preproduction devices and the TDA4VMXEVM evaluation module are available now.

>> An earlier version of this article was originally published on our sister site, EE Times.
