
Taking advantage of nextgen graphics-intensive processors

As the technology editor of Embedded.com, I often get a chance to see the newest and best design ideas from hardware and silicon developers long before the applications they are working on reach the market and end users.

A case in point is the processing power now becoming available in packages affordable to most consumers. When I was taking an electronics class at the California Institute of Technology, the other students and I were blown away by the just-introduced Intel 4040, which had processing power previously possible only on a minicomputer and was then being “embedded” in industrial controllers, white goods, handheld calculators, and digital LED wristwatches.

Each year since then, the wonders have kept coming, and they continue to amaze me. Right now, new heterogeneous multicore designs emerging from the likes of ARM, AMD, Intel, Imagination, and Nvidia make possible levels of visualization and graphics that a few years ago I would not have dreamed would be available on affordable embedded and mobile devices.

One example of these new capabilities is described in “Ray tracing: the future is now,” which is included in this week's Tech Focus Newsletter. In that article, Peter McGuinness of Imagination Technologies describes stunning visual and graphics capabilities previously possible only on supercomputers and soon to be available on nextgen mobile and embedded devices.

“The ray tracing solution described here is available today for silicon implementation in a cost and power profile suitable for handheld and mobile devices,” he writes. “The performance and features it offers along with a low-risk migration path is compelling to developers who want to simplify their content creation flow at the same time as creating more compelling, more realistic games.”
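For readers new to the technique, the operation at the heart of any ray tracer is the intersection test: for every ray cast from the camera into the scene, find the nearest surface it strikes. The C++ sketch below shows the textbook ray-sphere version of that test; it is purely illustrative, with invented names and scene values, and is not drawn from Imagination's hardware pipeline.

```cpp
// Minimal illustration of the core ray-tracing operation: casting a ray
// and testing it against a sphere. A textbook sketch only.
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Return true (and the distance t along the ray) if the ray hits the sphere.
// Assumes dir is normalized, so the quadratic's leading coefficient is 1.
bool raySphere(Vec3 origin, Vec3 dir, Vec3 center, double radius, double &t) {
    Vec3 oc = sub(origin, center);
    double b = 2.0 * dot(oc, dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - 4.0 * c;
    if (disc < 0.0) return false;        // ray misses the sphere entirely
    t = (-b - std::sqrt(disc)) / 2.0;    // nearer of the two roots
    return t > 0.0;                      // hit must lie in front of the origin
}

int main() {
    double t;
    // A ray from the origin along +z toward a unit sphere five units away.
    if (raySphere({0, 0, 0}, {0, 0, 1}, {0, 0, 5}, 1.0, t))
        std::printf("hit at t = %.2f\n", t);  // prints: hit at t = 4.00
}
```

A production ray tracer performs this kind of test, usually accelerated by a bounding-volume hierarchy, millions of times per frame, which is precisely the workload that dedicated silicon such as the solution McGuinness describes is designed to absorb.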

According to the authors of “Vision-based artificial intelligence brings awareness to surveillance,” beyond these immediate graphical and display benefits are the capabilities these advanced processors give developers to create vision-based designs that are breathtaking in their scope.

“Initial vision applications such as motion detection sought to draw the attention of on-duty surveillance personnel, or to trigger recording for later forensic analysis,” the authors write. “Early in-camera implementations were usually elementary, using simple DSP algorithms to detect gross changes in grayscale video, while those relying on PC servers for processing generally deployed more sophisticated detection and tracking algorithms.”
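As a concrete illustration of that first generation of in-camera analytics, the sketch below shows the kind of gross grayscale-change detection the authors allude to: difference two frames pixel by pixel and flag motion when enough pixels change. The function name and threshold values are hypothetical, not taken from any product mentioned in the article.

```cpp
// Illustrative sketch of elementary in-camera motion detection by
// frame differencing on grayscale video. Names and thresholds are
// invented for illustration.
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Flag motion when enough pixels differ by more than pixelThresh
// between two same-sized grayscale frames.
bool motionDetected(const std::vector<std::uint8_t> &prev,
                    const std::vector<std::uint8_t> &curr,
                    int pixelThresh = 25,      // per-pixel intensity delta
                    double areaThresh = 0.01)  // fraction of frame that must change
{
    std::size_t changed = 0;
    for (std::size_t i = 0; i < prev.size(); ++i)
        if (std::abs(int(curr[i]) - int(prev[i])) > pixelThresh)
            ++changed;
    return changed > areaThresh * prev.size();
}

int main() {
    std::vector<std::uint8_t> prev(64 * 64, 100);       // flat gray frame
    std::vector<std::uint8_t> curr(prev);
    for (std::size_t i = 0; i < 200; ++i) curr[i] = 200; // simulate an intruder
    std::printf("motion: %s\n", motionDetected(prev, curr) ? "yes" : "no");
}
```

Simple as it is, a loop like this fits comfortably on a modest in-camera DSP; it was the more sophisticated detection and tracking algorithms that initially demanded PC servers, and that are now migrating back onto the embedded processors the authors describe.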

However, they point out, over the years, vision applications in cost-effective mobile and embedded designs have substantially narrowed the performance gap with servers, with each processor generation integrating more powerful components, including multiple powerful general computing cores as well as dedicated image and vision accelerators.

Taking advantage of these capabilities will require becoming familiar with a new set of tools and algorithms. One good resource has been the Embedded Vision Alliance, a consortium of DSP, GPU, and multicore manufacturers, some of whom have contributed articles to Embedded.com. Since its formation in 2011, the alliance has made an aggressive effort to make developers aware of the wide range of tools and algorithms available for a variety of vision-based applications.

That educational effort is already beginning to pay off in concrete, real-world designs for mobile platforms, automobile safety systems, robotics, and, of course, consumer electronics, recent examples of which are included in this week's Embedded Tech Focus Newsletter. The ones on my Editor's Top Picks list, chosen for their imaginative concepts and immediate real-world applicability, include:

Detection of traffic signs in real-world images, which describes a real-world, vision-based benchmark data set for computer-based traffic sign detection, along with evaluation metrics, baseline results, and a web interface for evaluating the best approaches.

Vision-based parking guidance, about a low-cost vision-based parking guidance system that helps drivers park their cars, relying on ARM-based embedded hardware and a wide-angle camera to capture images for analysis without the need for steering sensors.

Towards a human-like vision system for driver assistance, on a vision architecture for driver assistance systems based on task-dependent perception, inspired by the human visual system.

I look forward to seeing what else you send me in the weeks ahead on this topic and others for use on Embedded.com to share with other developers.

Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback: send him an email or call 928-525-9087.

See more articles and columns like this one on Embedded.com. Sign up for subscriptions and newsletters. Copyright © 2014 UBM. All rights reserved.
