My vision for embedded vision

BDTI’s Jeff Bier, founder of the Embedded Vision Alliance, shares his vision of embedded consumer and mobile devices designed to “see” us, recognize our touch or voice, and respond via new types of user interfaces, as well as what it will take to get there.

For many years, computer vision was a pretty esoteric technology. It was used in a variety of applications, but those applications – like video surveillance, factory automation, and military equipment – were things that most people didn’t encounter on a routine basis.

One of the reasons computer vision remained a niche technology for so long is that the hardware required to implement it was too expensive for widespread use. Typical computer vision applications use complex algorithms to process real-time video data, and that requires an enormous amount of processing power.
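To put rough numbers on that claim, here is a back-of-the-envelope sketch in Python. The resolution, frame rate, and per-pixel operation count are illustrative assumptions, not measurements of any particular algorithm:

```python
# Back-of-the-envelope estimate of the compute load of real-time video
# analysis. All figures below are illustrative assumptions.

width, height = 1280, 720   # assumed 720p input
fps = 30                    # assumed frame rate

pixels_per_second = width * height * fps
print(f"Pixel rate: {pixels_per_second / 1e6:.1f} Mpixels/s")   # ~27.6

# Even a modest vision algorithm can spend hundreds of operations per
# pixel on filtering, feature extraction, and classification.
ops_per_pixel = 500         # assumed; varies enormously by algorithm
print(f"Compute load: {pixels_per_second * ops_per_pixel / 1e9:.1f} GOPS")  # ~13.8
```

Even under these fairly conservative assumptions, the workload lands in the tens of billions of operations per second, which until recently was far out of reach for low-cost, low-power embedded processors.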

So, for decades, computer vision was outside of the awareness of most engineers, and it was too expensive to incorporate into cost-sensitive embedded systems. But (no surprise here), year after year, embedded processors deliver more performance – and more performance per watt and per dollar.

In the last few years, embedded processors have begun to deliver enough processing performance to make it possible to implement computer vision in embedded systems – a combination that I call “embedded vision”.

Awareness of computer vision has also been growing. Probably the single biggest factor contributing to this rising awareness is the Microsoft Kinect. The Kinect is mainly used as an input device for the Xbox 360 game console, enabling players to control video games simply by moving their bodies.

But the Kinect is quite a versatile device, and engineers have been creating an amazing variety of clever applications with it – most of them unrelated to video games.

You can see me demonstrating a PC application that uses Kinect to enable gesture-based control of presentations here. There’s an entire web site dedicated to Kinect projects, and last month Microsoft introduced the beta version of a Windows SDK for Kinect.

Now, one can argue that the Kinect isn’t really an embedded vision system, since it relies on the Xbox 360 console (or a PC, or some other host) to provide much of the processing power needed for its computer vision functionality. Be that as it may, there’s no doubt that the Kinect has created a significant surge of interest in computer vision, and has spawned a burst of application development activity.

Meanwhile, those increasingly powerful embedded processors I mentioned are powering a variety of amazing products that are more easily recognizable as true embedded vision systems.

One of my favorite examples is vehicle safety systems that warn drivers of hazards such as pedestrians, cyclists, and insufficient following distance. Approximately 1.2 million people die in vehicle accidents worldwide each year. If vision-based safety systems can achieve even a modest reduction in that number, that will be a huge benefit. You can see a demonstration of one vision-based car safety system here.
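To make the idea concrete, here is a minimal sketch of pedestrian detection using OpenCV’s stock HOG-based people detector. This is a generic library example, not the algorithm inside any commercial safety system; the camera index and the detection parameters below are assumptions:

```python
import cv2

# Minimal pedestrian-detection sketch using OpenCV's built-in HOG
# descriptor and its pre-trained people detector. A generic
# illustration only, not any shipping safety system's algorithm.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # assumed: a camera at index 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people; window stride and scale are tuning assumptions.
    rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in rects:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrian detection (demo)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```

A production safety system would add distance estimation, object tracking, and extensive validation; this sketch illustrates only the detection step.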

The more I think about it, and the more research I do, the more it seems to me that embedded vision has valuable potential in hundreds of applications. For example, some hospitals are using vision-based systems to reduce infections by alerting doctors and nurses when they’ve forgotten to follow hand-washing protocol.

In Japan, vending machines use vision to dispense cigarettes only to customers who are old enough to legally consume them. In retail stores, vision systems built into digital signs enable advertisers to measure the effectiveness of ads by assessing how many people actually look at the sign (and for how long). Embedded vision also offers a more natural way for people to interact with consumer electronics – from set-top boxes to tablet computers to clock radios.

But implementing vision capabilities isn’t simple. Embedded vision is a multi-disciplinary technology, involving lighting, optics, image sensors, processors, complex algorithms, and a mountain of software.

Outside of a few niches where vision has been used for many years, most embedded system designers lack experience with many of the required technologies and techniques.

When my colleagues and I began working with vision applications a few years ago, we were initially encouraged by the large amount of published literature on computer vision.

But we quickly realized that the vast majority of that literature is highly theoretical, academic material – e.g., textbooks crammed with multivariable calculus.

While those theoretical underpinnings are important, they don’t provide the kind of practical insights that engineers in industry find most useful.

To help embedded system design engineers understand what embedded vision technology is capable of, and how to implement vision capabilities in real systems, 17 companies recently formed the Embedded Vision Alliance.

Alliance members are sharing their excitement about the potential for vision technology to add compelling capabilities to many types of products – and in some cases, to catalyze the creation of entirely new kinds of products.

They’re also providing engineers with practical know-how about processors, sensors, tools, algorithms, software, and design and development techniques. I invite you to visit the Alliance’s web site to explore the possibilities and join the conversation with our growing community of engineers.

Jeff Bier is president of BDTI, a technology analysis and engineering services company specializing in embedded digital signal processing applications. He is also founder of the Embedded Vision Alliance, an industry association focused on inspiring and empowering design engineers to incorporate vision capabilities into their products.
