Embedded Vision Summit sparks design of machines that see

“Computer vision is the next big thing for embedded systems, making them safer, more responsive to people, more efficient and more perceptive,” said Jeff Bier, founder of the Embedded Vision Alliance and session chair of the upcoming Embedded Vision Summit.

In fact, nearly every category of consumer, automotive and industrial application is being enhanced today by embedded vision capabilities. Learn how to add the latest pattern-recognition capabilities into your embedded vision application at the Embedded Vision Summit on April 25, co-located with DESIGN West 2013.

[Click here to register for the Embedded Vision Summit, Thursday April 25th, at the San Jose McEnery Convention Center. See the day's agenda here. The Summit is co-located with DESIGN West.]

Embedded vision started out as an esoteric technology that was expensive to implement, requiring a team of domain experts with deep experience in the black magic of pattern recognition to get it right. “NASA pioneered embedded vision for space exploration and the military has been using it for target recognition for decades,” said Bier. “But until now it has been a niche technology in industry, such as for parts inspection. Now, the sensors and processors to perform the tens of billions of operations per second necessary to process millions of pixels are much more cost effective, enabling computer vision to be added to almost any embedded system.”

Today a wide variety of applications are adding embedded vision capabilities, from automotive systems that avoid collisions by warning drivers, to security systems that detect nervous people acting suspiciously, to smartphones that allow users to control video playback using their eye movements.


3-D sensors such as PrimeSense's Carmine (licensed and popularized by Microsoft as the Kinect for Xbox) have brought down the cost of embedded vision solutions by estimating the distance from the sensor to objects in the scene (pictured). At the Embedded Vision Summit, Texas Instruments' Goksel Dedeoglu will present techniques for low-cost implementation of stereoscopic 3-D vision. SOURCE: TI
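
Dedeoglu's TI techniques will be presented at the Summit itself; as a rough illustration of the underlying idea only, the sketch below computes a disparity map from a rectified stereo pair with open-source OpenCV and converts it to depth. The file names, focal length, and baseline are placeholder assumptions, not values from any Summit presentation.

```python
# Minimal stereo-depth sketch using OpenCV block matching (illustrative only;
# file names and calibration values below are placeholder assumptions).
import cv2
import numpy as np

# Rectified left/right views from a calibrated stereo pair (placeholder files).
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher: search 64 disparity levels with a 15x15 matching window.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Triangulate: depth Z = f * B / d, with focal length f (pixels) and baseline B (meters).
focal_px, baseline_m = 700.0, 0.06   # assumed calibration values
valid = disparity > 0
depth_m = np.zeros_like(disparity)
depth_m[valid] = focal_px * baseline_m / disparity[valid]
print("median scene depth: %.2f m" % np.median(depth_m[valid]))
```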

“My favorite embedded vision application comes from Affectiva, which uses webcams to detect the emotions of a user,” said Bier. “Imagine educational apps that pace learning by detecting frustration levels, or toys that stimulate a child's intellect when they detect boredom. The possibilities are endless, now that embedded vision technology is cheap enough for almost any app.”
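
Affectiva's emotion models are proprietary, but the usual first stage of such a pipeline, locating a face in a webcam frame, can be sketched with off-the-shelf open-source tools. The snippet below is only an illustration of that first step using OpenCV's bundled Haar cascade; it is not Affectiva's method.

```python
# Locate a face in a single webcam frame -- the typical first stage of an
# emotion-analysis pipeline (illustrative; not Affectiva's proprietary models).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                       # default webcam
ok, frame = cap.read()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    print("faces found:", len(faces))           # each entry is (x, y, w, h)
cap.release()
```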

In fact, as more and more competitors add vision-based pattern-recognition algorithms, a new class of applications is emerging that requires embedded vision in order to succeed. Unfortunately, many engineers do not realize how useful computer vision can be, nor are they aware of the easy-to-use open-source embedded-vision algorithms that are available to streamline the development process.
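
As one example of those open-source building blocks, the sketch below runs OpenCV's stock HOG pedestrian detector on a live camera feed, roughly the kind of component a driver-warning feature might start from. The camera index, frame size, and detector parameters are illustrative assumptions.

```python
# Pedestrian detection with OpenCV's bundled HOG + linear-SVM model, as one
# example of an off-the-shelf open-source vision building block.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)                       # placeholder: any camera or video file
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 360))       # smaller frames keep the detector fast
    boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("pedestrians", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```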

“The biggest problem is that engineers are not aware of how useful and relatively easy it is to add computer vision to their embedded systems,” said Bier.

At the Embedded Vision Summit, all these issues will be addressed in keynote addresses, seminars, technical presentations, and over 20 hands-on demonstrations. Designers will get a leg up on what types of pattern-recognition enhancements are available to embedded systems, as well as a comprehensive overview of how to add embedded vision capabilities to their applications.

New embedded-vision development tools will also be unveiled at the Summit, where experts from the 33 member companies of the Embedded Vision Alliance will be available for one-on-one discussions on how to add embedded vision to any application.

Professor Pieter Abbeel of the University of California, Berkeley's Department of Electrical Engineering and Computer Sciences delivers the keynote “Artificial Intelligence for Robotic Butlers and Surgeons” on April 25th. Abbeel has been at the forefront of designing perception into robots, including research into how robots can better handle uncertainty.

“Significant improvements in robotic perception are critical to advance robotic capabilities in unstructured environments,” writes Abbeel on his research page. “I believe that rather than following the current common practice of trying to build a system that can detect chairs, for example, from having seen a relatively small number of example images of chairs, it is more fruitful for robotic perception to work on instance recognition.” Click here for examples of his work — such as robots folding clothes (something at which you would want your personal robot valet to excel).

The rest of the first day consists of sessions from two tracks. When signing up, attendees may mix the tracks to build the program that best fits their needs. To see the embedded vision tracks, click here.

Embedded Vision Summit technical speakers include Jose Alvarez, Mario Bergeron, Ning Bi, Jeff Bier, Goksel Dedeoglu, Eric Gregori, Tim Jones, Gershom Kutliroff, Simon Morris, Michael Tusch, Markus Wloka, and Paul Zoratti. You can find their bios here.

Participants who want hands-on training may attend Friday's Blackfin Embedded Vision Starter Kit Hands-on Workshop, April 26, 8:30 am to 1:30 pm at the San Jose Convention Center. You can sign up for the workshop here. The hands-on session offers developers a chance to explore a wide range of embedded vision applications through a tutorial using the Avnet Embedded Vision Starter Kit. Experts from BDTI, Analog Devices, and Avnet will lead the training.
