3D visualization and display technologies and algorithms are on the verge of transforming many aspects of embedded consumer, automotive, and industrial systems.
They make it possible to recreate the richness of our visual three-dimensional world and transform it into virtual 3D images for viewing on 2D display screens. They will also make it possible for machines to “see” in a wide range of consumer, mobile, and industrial robotics applications.
Many of these visionary technologies were on display at the Embedded Vision Summit at last week’s ESC DESIGN West, where the presentations included:
“Targeting Computer Vision Algorithms to Embedded Hardware” by Avnet’s Mario Bergeron
“Embedded 3D Stereo Vision: How it Works, How to Implement It, and How to Use It” by Goksel Dedeoglu of Texas Instruments, and
“Challenges and Opportunities in Accelerating OpenCV” by José Alvarez, Xilinx
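To give a flavor of what embedded stereo vision talks like Dedeoglu’s cover, here is a toy sketch of the core idea, block-matching along a rectified scanline: for each pixel in the left image, find the horizontal shift (disparity) into the right image that minimizes the sum of absolute differences. This is an illustrative sketch only, not code from the presentation; real pipelines add rectification, sub-pixel refinement, and smoothness constraints.

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length patches."""
    return sum(abs(x - y) for x, y in zip(a, b))

def disparity(left, right, window=3, max_disp=4):
    """Per-pixel disparity along one rectified scanline (lists of ints).

    For each left-image pixel, slide a small window leftward over the
    right scanline and keep the shift with the lowest matching cost.
    """
    half = window // 2
    out = []
    for x in range(half, len(left) - half):
        patch = left[x - half:x + half + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(max_disp + 1):
            if x - half - d < 0:      # shifted window would fall off the edge
                break
            cost = sad(patch, right[x - half - d:x + half + 1 - d])
            if cost < best_cost:
                best_d, best_cost = d, cost
        out.append(best_d)
    return out

# A bright feature shifted 2 pixels between the views should report
# disparity 2 in its neighborhood; disparity maps to depth via the
# camera baseline and focal length.
left = [0, 0, 0, 9, 9, 0, 0, 0, 0]
right = [0, 9, 9, 0, 0, 0, 0, 0, 0]
```

Larger disparities correspond to closer objects, which is how a stereo pair yields a depth map for the robotics and gesture applications discussed at the Summit.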
This week’s Embedded Tech Focus on “Using 3D in embedded systems design” includes a number of recent articles, webinars, and tech papers on tools, such as OpenCV, that will be helpful to developers who want to take advantage of these opportunities, including:
Developing OpenCV computer vision apps for the Android platform
Vision Guided Robotics
A GPU-Accelerated Face Annotation System for Smartphones
An OpenCV Algorithm for using a face as a Pointing Device
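The face-as-pointing-device idea in the last article above boils down to a simple mapping step once a detector (such as an OpenCV cascade classifier) has found a face: convert the bounding-box center into screen-cursor coordinates. The sketch below is hypothetical, with illustrative names and parameters that are not taken from the article’s implementation.

```python
def face_to_cursor(face, cam_size, screen_size, mirror=True):
    """Map a detected face box to a screen position.

    face        -- (x, y, w, h) bounding box in camera pixels
    cam_size    -- (width, height) of the camera frame
    screen_size -- (width, height) of the display
    mirror      -- flip horizontally, since webcam views are usually
                   mirrored relative to the user's motion
    """
    x, y, w, h = face
    cam_w, cam_h = cam_size
    scr_w, scr_h = screen_size
    cx = x + w / 2.0          # face center in camera coordinates
    cy = y + h / 2.0
    nx = cx / cam_w           # normalize to [0, 1]
    ny = cy / cam_h
    if mirror:
        nx = 1.0 - nx
    return (round(nx * scr_w), round(ny * scr_h))

# A face centered in a 640x480 frame points at the middle of a
# 1920x1080 screen.
center = face_to_cursor((280, 200, 80, 80), (640, 480), (1920, 1080))
```

In practice the mapping would also be smoothed over several frames to keep the cursor from jittering with small head movements.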
Combined with new graphics processing engines being developed by AMD, ARM, Fujitsu, Intel, and NVIDIA, and open graphics standards such as OpenGL and OpenVG, such techniques offer the possibility of bringing 3D graphics viewing capabilities to a range of industrial and automotive applications previously confined to 2D representations.
In consumer designs, driven by the new 3D display capabilities incorporated into the H.265 successor to the H.264 video compression standard, many next-generation televisions, and even some smartphones, are being designed to transform home entertainment, bringing realistic 3D content to the video screen nearest you. Below are links to a number of articles that will give you some idea of the new capabilities available:
Emerging Markets for H.264 Video Encoding
Implementing the right audio/video transcoding scheme in consumer SoC devices
Achieving Optimized DSP Encoding for Video Applications
A tutorial on the H.264/H.265 scalable video codec
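A core step in the encoders these articles discuss is motion estimation: finding, for each block of the current frame, the offset in a reference frame that matches it best, so only the motion vector and residual need be coded. The toy full-search sketch below illustrates the principle with a sum-of-absolute-differences cost; it is not production encoder code, which uses fast search patterns, sub-pixel interpolation, and rate-distortion costs.

```python
def block_sad(cur, ref, bx, by, dx, dy, n):
    """SAD between the n x n block at (bx, by) in cur and the block
    displaced by (dx, dy) in ref (frames are lists of row lists)."""
    return sum(
        abs(cur[by + j][bx + i] - ref[by + dy + j][bx + dx + i])
        for j in range(n) for i in range(n)
    )

def motion_search(cur, ref, bx, by, n=2, search=2):
    """Exhaustive search over all offsets within +/- search pixels;
    returns the best motion vector and its matching cost."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # skip candidate blocks that fall outside the reference frame
            if not (0 <= bx + dx and bx + dx + n <= w
                    and 0 <= by + dy and by + dy + n <= h):
                continue
            cost = block_sad(cur, ref, bx, by, dx, dy, n)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost

# A feature at (1, 1) in the current frame that sits at (3, 3) in the
# reference should yield motion vector (2, 2) with zero residual.
ref = [[0] * 5 for _ in range(5)]
ref[3][3] = 9
cur = [[0] * 5 for _ in range(5)]
cur[1][1] = 9
```

Because a perfect match means a zero residual, the encoder can represent such a block almost for free, which is where most of the compression in H.264-class codecs comes from.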
What’s next? For one thing, game designers I have talked to are already thinking about the potential of fusing these diverse 3D modes. Rather than interacting with graphical 2D and 3D representations of humans, I am told, game player-directed personas will be able to take actions based on feedback from machine vision algorithms.
Such feedback could be used to recognize the gestures, movements, and facial expressions of human actors in video images rather than graphical representations, allowing machines, and gamesters, to take actions based on cues from actual images.
Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback. Send an email to , or call 928-525-9087.