Taking embedded robotics to the next level
One of the things that has always impressed me about embedded systems hardware and software developers is the omnivorous interest you have in areas of technology that normally lie outside your immediate needs.
You are always on the prowl for good ideas. Even when you see a technique or tool that is clearly not optimized for the dedicated, resource-constrained environment in which you work, you will find a way to adapt it to your needs.
This characteristic was most recently apparent in the way various vision algorithms and tools such as OpenCV (developed originally for industrial automation and control) have been adapted and used to design the next generation of consumer and mobile devices.
Now, as the authors of “Giving robotic systems spatial sensing with visual intelligence” write in this week’s Tech Focus Newsletter, a whole new range of algorithms, such as MSER, SURF, the Shi-Tomasi technique, the Viola-Jones framework, and the Kanade-Lucas-Tomasi feature tracker, is now available.
Developed originally for industrial robotics and automation applications, they are now being adapted for use in a wide range of home, personal, and consumer designs. According to the authors, just as humans use their eyes and other senses to navigate through the world, robotic systems should be able to do the same thing. But until recently, the technology needed has been found only in complex, expensive industrial and military designs.
“However, cost, performance, and power consumption advances in digital integrated circuits are now paving the way for the proliferation of ‘vision’ into diverse and high-volume applications, including robot implementations,” they write. “Challenges remain, but they're more easily, rapidly, and cost-effectively solved than has been possible before.”
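To make one of those algorithms concrete, here is a minimal, pure-Python sketch of the Shi-Tomasi corner response: the score at a pixel is the smaller eigenvalue of the 2x2 structure tensor of image gradients summed over a small window, so corners score high while edges and flat regions score near zero. The tiny synthetic image and window size below are illustrative assumptions; a production design would more likely call an optimized routine such as OpenCV's cv2.goodFeaturesToTrack.

```python
import math

def shi_tomasi_score(img, cx, cy, win=1):
    """Shi-Tomasi corner response at (cx, cy): the smaller eigenvalue
    of the structure tensor built from central-difference gradients
    summed over a (2*win+1)-square window."""
    sxx = sxy = syy = 0.0
    for y in range(cy - win, cy + win + 1):
        for x in range(cx - win, cx + win + 1):
            ix = (img[y][x + 1] - img[y][x - 1]) / 2.0  # horizontal gradient
            iy = (img[y + 1][x] - img[y - 1][x]) / 2.0  # vertical gradient
            sxx += ix * ix
            sxy += ix * iy
            syy += iy * iy
    # smaller eigenvalue of [[sxx, sxy], [sxy, syy]]
    trace = sxx + syy
    disc = math.sqrt((sxx - syy) ** 2 + 4.0 * sxy * sxy)
    return (trace - disc) / 2.0

# synthetic 10x10 image: a bright square occupying x >= 5, y >= 5
img = [[255.0 if (x >= 5 and y >= 5) else 0.0 for x in range(10)]
       for y in range(10)]

corner = shi_tomasi_score(img, 5, 5)  # at the square's corner: high score
edge   = shi_tomasi_score(img, 7, 5)  # along the horizontal edge: ~0
flat   = shi_tomasi_score(img, 2, 2)  # in a uniform region: 0
```

The key property, and the reason the score works for tracking, is that only a true corner produces strong gradients in two independent directions; an edge has gradient energy along only one axis, so its smaller eigenvalue collapses to zero.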
If such efforts are successful, consumer robotics will be more than just Roomba-style vacuum cleaner novelties, and will be applied to a wide range of real human needs and desires. Among my Editor’s Top Picks of designs just now coming out of the laboratories that reflect this trend are:
“Object maps for robotic housework representation and use,” in which the authors make use of Semantic Object Maps (SOMs) as information resources for autonomous service robots performing everyday manipulation tasks in kitchen environments.
“Vision Processing on the Bunny Robot Humanoid,” where researchers describe the construction of a low-power-budget humanoid “Bunny Robot” running the Simultaneous Localization and Mapping (SLAM) algorithm for use as a platform for teaching robotics.
Other technical reports and conference papers that I think will be useful in expanding your knowledge of these new techniques include:
A Simulink library for model-based robot manipulators
Kalman-based method for motion coordination of groups of fast mobile robots
A framework for computer vision and cognitive robotics, and
Omnidirectional vision for soccer robots using object detection
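The Kalman-based coordination paper in the list above builds on the same estimation machinery used throughout mobile robotics. As a toy illustration of the core idea only (not the paper's method), here is a minimal one-dimensional Kalman filter that fuses noisy position readings into a smoothed estimate; the process-noise q, measurement-noise r, and sensor readings are made-up values.

```python
def kalman_1d(measurements, q=0.01, r=0.5):
    """Minimal 1-D Kalman filter: track a scalar position from noisy
    readings. q is process-noise variance, r is measurement-noise
    variance; returns the estimate after each measurement."""
    x, p = 0.0, 1.0            # state estimate and its variance
    estimates = []
    for z in measurements:
        p += q                 # predict: uncertainty grows by process noise
        k = p / (p + r)        # Kalman gain: how much to trust this reading
        x += k * (z - x)       # update: pull estimate toward the measurement
        p *= (1.0 - k)         # update: uncertainty shrinks after correction
        estimates.append(x)
    return estimates

# hypothetical noisy readings of a robot holding position at 5.0
readings = [5.3, 4.8, 5.1, 4.9, 5.2, 5.0, 4.7, 5.1]
est = kalman_1d(readings)
```

Because the gain k falls as the variance p shrinks, the filter trusts its own estimate more over time, which is what lets a group of robots maintain smooth, coordinated motion from individually jittery sensors.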
In addition to the resources included in this week’s newsletter, some other useful Embedded.com design articles you should add to your library of embedded robotics design resources are:
Let me know what you are working on, the problems you are having, the designs you are considering, and what tools and techniques you have found and are using. Leave a comment here, or contact me and we will work together on blogs or how-to design articles about what you have learned.
Embedded.com Site Editor Bernard Cole is also editor of the twice-a-week Embedded.com newsletters as well as a partner in the TechRite Associates editorial services consultancy. He welcomes your feedback. Send an email to firstname.lastname@example.org, or call 928-525-9087.