Making mobile & embedded designs more visionary
In the late 1990s, just about the time the IPv6 upgrade of the Internet Protocol was made available, a set of software tools and algorithms for creating real-time embedded vision applications, called OpenCV, was developed by Intel Corp. and donated to the open source community.
Both are now broadly available and are changing the way we use computers. But each has traveled a different path to its now increasingly wide acceptance among both the developer community and the broader device-using public.
IPv6 was introduced to deal with the rapidly dwindling supply of IP addresses available under the previous IPv4. But it was resisted, and it took ten years to come into common use, because most organizations were reluctant to invest in the infrastructure needed. IPv4 addresses are now all used up, and there is no choice but to make the shift.
OpenCV, on the other hand, was quickly adopted by a small coterie of developers in particular embedded market segments, such as factory automation and military/aerospace, where machine vision was critically important.
Now, about ten years later, the pace of its acceptance has accelerated rapidly as a growing number of companies and developers see it as the tool set of choice for making embedded computing platforms more user friendly, not only in mobile devices but in many new embedded consumer applications: home automation, home networking, lighting, smart TVs, power grid metering, and smart appliances.
The charter of the Embedded Vision Alliance (spearheaded by companies such as AMD, Analog Devices, BDTI, CEVA, Freescale, Intel, Nvidia, MathWorks, National Instruments, Synopsys, Tensilica, and Texas Instruments, among others) is to move beyond the current touch-based interfaces.
While such MEMS and capacitive sensor-based interfaces are simpler than the mouse- and GUI-based PC interfaces that preceded them, they still require that the user learn how to operate the computing system. Vision-based designs take a completely different approach: the aim is to create software mechanisms by which the user does not have to learn how to use the computing device at all.
Instead, the strategy is to build the software infrastructure that will make it possible for computers to understand us, by means of vision algorithms that recognize and correctly interpret many common, innate human cues: gestures, facial expressions, eye movements, and other natural signals.
This week’s Embedded Tech Focus Newsletter on “Designing vision apps with OpenCV” contains some of the recent design articles, white papers, blogs, and webinars on OpenCV and vision app design to help you get started. To give you some insight into the challenges involved, and the opportunities they represent, I recommend that you also read:
In addition to the wealth of tools and algorithms available on OpenCV.org and the Embedded Vision Alliance, there are a number of useful technical papers and conference submissions I have found, of which my Editor’s Top Picks are:
Several other papers that I found informative, and revelatory of the role of OpenCV in next-generation embedded vision designs, include:
Using OpenCV for distance determination
Sophisticated Image Encryption Using OpenCV
Real-time computer vision with OpenCV
Facial Expression Analysis with OpenCV
An OpenCV Algorithm for using a face as a Pointing Device
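To illustrate the idea behind the last paper above, the core of a face-as-pointing-device scheme is simply mapping the center of a detected face rectangle in the camera frame to screen coordinates. The sketch below is my own hedged illustration of that mapping step, not the paper's actual algorithm; the function name and the linear scaling are assumptions.

```python
# Illustrative sketch: map the center of a detected face rectangle in the
# camera frame to a pointer position on the screen.
def face_to_pointer(face_rect, frame_size, screen_size):
    """face_rect: (x, y, w, h) in frame pixels; returns (sx, sy) in screen pixels."""
    x, y, w, h = face_rect
    fw, fh = frame_size
    sw, sh = screen_size
    cx = x + w / 2.0  # face center in the camera frame
    cy = y + h / 2.0
    # Linearly scale frame coordinates to screen coordinates.
    return (int(cx / fw * sw), int(cy / fh * sh))

# A face centered in a 320x240 frame lands at the center of a 1920x1080 screen.
print(face_to_pointer((80, 60, 160, 120), (320, 240), (1920, 1080)))  # → (960, 540)
```

A production version would add smoothing across frames and possibly mirror the horizontal axis to match webcam orientation, but the essential mapping is this simple.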
In this rapidly evolving segment of embedded systems development, new resources are constantly becoming available. As I come across them I will do what I can to make you aware of them. And if you come across resources in this area that you think are useful, let me know.
Also, let me know about your experiences in the form of blogs or design and development articles you might wish to share with the embedded developer community.
Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback. Send an email to firstname.lastname@example.org, or call 928-525-9087.