Machine vision is a well-established technology in industrial applications such as automated inspection, but the advent of relatively inexpensive 3D image sensors is opening doors for many new vision applications. From automating truck loading to enhancing factory safety, depth perception is empowering new vision opportunities in industrial automation. The upcoming Embedded Vision Summit will highlight some of these opportunities and the technologies behind them.
“The use of 3D in vision applications has been evolving for a while,” said Jeff Bier, founder of the Embedded Vision Alliance, in an interview with EE Times, “but in the last year or so it crossed a critical threshold. Multiple suppliers are now offering 3D vision modules at low cost, making it practical in industrial applications where it wasn't before.” Bier noted that five years ago 3D imaging required large, expensive equipment but now the cost is being driven down by consumer-oriented modules for smartphones and the like. He pointed to Intel's RealSense and SoftKinetic's DepthSense offerings as examples of 3D image sensors coming out of the consumer space.
Image processing software, including 3D processing, has also become increasingly available. On the open source front, OpenCV version 3.0, currently in beta, offers expanded support for 3D vision. Commercial middleware for 3D vision has also become available, such as the Starry Night object recognition and reconstruction package from VanGogh Imaging and the Triclops SDK from Point Grey for its stereo vision products. There are even specialized 3D image processors available from companies such as Inuitive.
There are several approaches to capturing 3D image data. The traditional approach uses two cameras to obtain a stereoscopic view. A newer option combines a 2D camera with a time-of-flight sensor and a scanned, pulsed illumination pattern. In this approach, a pulsed (typically infrared) light source scans the scene while a sensor measures the time between pulse emission and the returned reflection, yielding distance. Correlating the resulting depth map with the camera's pixels provides the third dimension.
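The geometry behind both approaches reduces to simple formulas: time-of-flight halves the round-trip distance traveled at the speed of light, while stereo triangulates depth from pixel disparity. A minimal sketch in Python (the focal length, baseline, and timing values below are illustrative assumptions, not tied to any particular sensor):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance(round_trip_s):
    """Time-of-flight: the pulse travels out and back, so halve the path."""
    return SPEED_OF_LIGHT * round_trip_s / 2.0

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereo: depth = focal length (pixels) * baseline (m) / disparity (pixels)."""
    return focal_px * baseline_m / disparity_px

# A reflection returning 10 ns after the pulse puts the surface ~1.5 m away.
print(tof_distance(10e-9))               # ~1.5 m
# A 35-pixel disparity with a 700 px focal length and 10 cm camera baseline:
print(stereo_depth(700.0, 0.10, 35.0))   # 2.0 m
```

In a real system the per-pixel timing (or disparity) map would come from the sensor; these functions only show the distance conversion applied to each measurement.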
More recently, camera motion has become a source of depth information. A moving camera capturing images of a relatively stationary scene can recover depth by analyzing successive frames. Companies such as VanGogh Imaging and vidantis are supporting this approach.
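The underlying math is the same triangulation used in stereo, with the camera's own translation between frames serving as the baseline. A toy sketch using a pinhole camera model (all numbers are illustrative assumptions):

```python
def project(focal_px, cam_x_m, point_x_m, point_z_m):
    """Pinhole projection of a world point onto the image plane, in pixels."""
    return focal_px * (point_x_m - cam_x_m) / point_z_m

focal = 600.0      # focal length in pixels (assumed)
baseline = 0.05    # camera translated 5 cm between frames (assumed)
px, pz = 0.3, 2.0  # stationary point: 0.3 m to the side, 2.0 m ahead

# Image position of the same point in two successive frames.
u1 = project(focal, 0.0, px, pz)
u2 = project(focal, baseline, px, pz)

shift = u1 - u2                     # pixel shift induced by the motion
depth = focal * baseline / shift    # recovered depth, same formula as stereo
print(depth)                        # 2.0 m, matching the true distance
```

A practical system must estimate the camera's motion (from odometry or the images themselves) and track many features at once, but each tracked point reduces to this disparity-to-depth calculation.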
Bier pointed out that this expanding availability of 3D vision technology will prove a boon for factory automation. For example, today's production lines might use a robot for assembly and a separate vision system further down the line to inspect the results. But if the vision system is mounted on the robot itself, problems can be detected (and potentially corrected) as they occur rather than propagating down the assembly line. With 3D vision the robot could, for example, readily identify individual parts, their shapes, and their relative positions, allowing it to correct for such things as misalignment and incorrect sizing. Likewise, a 3D vision system on a transport robot that moves parts around a warehouse would be able to navigate a relatively unstructured environment, bypassing obstacles and avoiding collisions, rather than requiring clear, pre-defined pathways.
The addition of 3D vision can also help improve factory safety and permit closer interaction between robots and human workers. Currently, many industrial robots must be surrounded by cages to prevent inadvertent contact with nearby humans. A 3D vision system would allow a robot to detect the presence of humans in hazardous areas and avoid them. This is one goal of the iMinds Claxon project, which aims to improve human-robot interaction in the factory.
In addition to increasing safety, such an ability to interact closely with humans would allow more flexibility in industrial robot design. Without the need for caging or careful isolation from human activity, robots can more easily be moved or reconfigured to perform different activities. Such reconfigurability would, in turn, make robots more cost-effective for smaller plants where the production line needs to adapt to a variety of products. Baxter and Sawyer from Rethink Robotics are examples of configurable robots designed to work in close proximity with humans.
Bier also pointed to a number of pilot projects underway that are applying 3D vision in novel applications.
The Embedded Systems Conference and Embedded.com are owned by UBM Canon.