Building 3D visual maps of interior space

For a mobile robot to figure out where it is and to determine how it should get to its destination, it needs some kind of model of the environment. In some cases, such models of space, usually called maps, can be constructed from prior knowledge of the layout of the environment, its geometry, its topology, the reflectance properties of its surfaces, and, as recently demonstrated, even functional properties.

But there are many important situations where this type of prior knowledge is not available, at least not at the level of detail needed by a robot trying to navigate the environment.

When that is the case, one option is to generate the maps manually. But doing so can be difficult and expensive. An easier alternative is to let the robot make its own maps of the interior. When done properly, such a process can produce maps that directly account for the properties of the sensors that will subsequently be used for navigation.

Incorporating sensor limitations into maps generated by other means, say manually or from CAD models of the interior, is highly non-trivial. Moreover, a robot engaged in map building must know its position in a world frame so that the spatial elements constructed from the current sensed data can be properly situated in that frame. This implies that map building cannot be separated from self-localization.

The joint problems of map building and localization are often referred to by the acronym SLAM, which stands for "Simultaneous Localization And Mapping". Since map building cannot be carried out without localization, when we mention "map building" in the rest of this paper, we are referring to SLAM.

It is now generally recognized that sensor fusion is the best approach to the accurate construction of environment maps by a sensor-equipped mobile robot. Typically, range data collected with a range sensor is combined with reflectance data obtained from one or more cameras mounted on the robot.

In much of the past work on hierarchical approaches to map construction, the fusion was carried out only at the lowest level of the hierarchy. As a result, in those approaches, only the fused data was made available to the higher levels in the hierarchy.

This meant that any errors introduced by sensor fusion would propagate upward into the higher-level representations of an interior map.

Our work, on the other hand, checks for consistency between the data elements produced by the different sensors at all levels of the hierarchy. This consistency checking is carried out with the help of an interval-based representation of uncertainties in the sensor data.
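The idea of interval-based consistency checking can be sketched as follows. This is a minimal illustration, not the paper's actual data structures: the `Interval` class and `fuse` helper are hypothetical, and the example values are made up. Two uncertainty intervals from different sensors are treated as consistent only if they intersect; the fused estimate is then the intersection.

```python
# Hypothetical sketch of interval-based consistency checking between
# two sensors; Interval and fuse() are illustrative names only.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Interval:
    lo: float
    hi: float

    def overlaps(self, other: "Interval") -> bool:
        # Two uncertainty intervals are consistent iff they intersect.
        return self.lo <= other.hi and other.lo <= self.hi

def fuse(a: Interval, b: Interval) -> Optional[Interval]:
    """Return the intersection of two consistent intervals, or None
    when the measurements disagree and fusion should be rejected."""
    if not a.overlaps(b):
        return None
    return Interval(max(a.lo, b.lo), min(a.hi, b.hi))

# A laser range estimate and a stereo estimate of the same wall distance:
laser = Interval(2.95, 3.05)            # metres (made-up values)
stereo = Interval(3.00, 3.20)
print(fuse(laser, stereo))              # Interval(lo=3.0, hi=3.05)
print(fuse(laser, Interval(3.5, 3.6)))  # None -> inconsistent, reject
```

Because the result of a failed check is an explicit `None` rather than a silently averaged value, an inconsistency detected at any level of the hierarchy can block the bad assertion from propagating upward.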

In addition to demonstrating that our approach to the fusion of range and image data results in dense 3D maps of the interior space, we also provide validation of our overall framework by presenting a set of loop closure results.

These results demonstrate that the overall error in our maps remains small (within 0.91% of the distance traveled during map construction) even when the robot must traverse large loops inside a building.

Our work is based on two premises: first, a hierarchical representation of space is necessary when maps are desired for large interior spaces; and second, sensor fusion must be carried out at all levels of the data-object hierarchy. It should also be possible for spatial assertions created by sensor fusion to be undone if the higher-level objects in the hierarchy cannot satisfy the geometric constraints placed on them.
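The notion of revocable fusion assertions can be sketched with a toy store of assertions that a higher level may prune. The `AssertionStore` class and the right-angle constraint below are hypothetical illustrations of the idea, not the paper's implementation.

```python
# Toy sketch of revocable sensor-fusion assertions: lower-level fusion
# results are recorded so they can be undone if a higher-level
# geometric constraint later fails. All names here are illustrative.

class AssertionStore:
    def __init__(self):
        self._assertions = []

    def assert_fact(self, fact):
        """Record a spatial assertion produced by low-level fusion."""
        self._assertions.append(fact)

    def check_and_prune(self, constraint):
        """Keep only assertions consistent with a higher-level
        geometric constraint; return how many were undone."""
        kept = [a for a in self._assertions if constraint(a)]
        undone = len(self._assertions) - len(kept)
        self._assertions = kept
        return undone

store = AssertionStore()
store.assert_fact({"wall": "A", "angle_deg": 90.2})
store.assert_fact({"wall": "B", "angle_deg": 47.0})
# A hypothetical higher level expects walls to meet at right angles:
undone = store.check_and_prune(lambda a: abs(a["angle_deg"] - 90.0) < 2.0)
print(undone)  # 1: wall B's assertion is retracted
```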

Using a hierarchical representation of space and an interval-based representation of uncertainty that allows sensor fusion to be carried out at all levels of the hierarchy, the work presented in this paper shows that a mobile robot can construct high-quality maps of interior space, meaning maps that are dense and that preserve the reflectance properties of surfaces.

Moreover, it can simultaneously maintain highly accurate localization by combining simple 2D range measurements, acquired quickly with a laser sensor, with 3D reconstructions from a stereo head.

At the lowest level of processing in the SLAM system presented in this paper, our system extracts line features from the range data produced by a laser scanner that scans the space horizontally in a plane close to the floor. These line features typically correspond to flat vertical surfaces, such as walls, in the environment.
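One standard way to fit a line feature to a cluster of 2D laser points is total least squares via the principal axis of the point covariance; the sketch below shows that technique under the assumption of a single pre-segmented cluster. The function name is illustrative, and the paper's own extraction pipeline is not reproduced here.

```python
# Sketch: fit a line to pre-segmented 2D laser points by total least
# squares (principal axis of the covariance). Illustrative only.

import math

def fit_line(points):
    """Fit a line to (x, y) points; return (centroid, unit direction)."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    # Scatter terms of the centred points.
    sxx = sum((p[0] - cx) ** 2 for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    # Angle of the dominant eigenvector of the 2x2 scatter matrix.
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (cx, cy), (math.cos(theta), math.sin(theta))

# Points sampled along a wall at y = 1.0:
pts = [(x * 0.1, 1.0) for x in range(20)]
(cx, cy), (dx, dy) = fit_line(pts)
# The recovered direction is horizontal: (dx, dy) ~ (1, 0)
```

In practice, a scan would first be segmented into clusters at jump discontinuities, with one such fit per cluster.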

The system then associates 2D camera images of these surfaces with the range line features. This process can be envisioned as associating a vertical surface with a range line and then texture-mapping the surface with the camera images. The locations of the vertical surfaces are further adjusted with the help of 3D vertical lines, extracted from stereo images, that correspond to jump discontinuities identified in the range data.
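The association between a range line and a camera image can be sketched as projecting the corners of the inferred wall surface through a pinhole model to find the image region to texture-map. The intrinsics, wall height, and endpoint coordinates below are made-up values for illustration, not parameters from the paper.

```python
# Sketch: find the image region for texture-mapping a wall surface by
# projecting its 3D corners through a pinhole camera model.
# Intrinsics and geometry are assumed values, not the paper's.

import numpy as np

K = np.array([[500.0,   0.0, 320.0],   # assumed focal length (px)
              [  0.0, 500.0, 240.0],   # and principal point
              [  0.0,   0.0,   1.0]])

def project(point_cam):
    """Pinhole projection of a 3D point given in camera coordinates
    (x right, y down, z forward)."""
    u = K @ point_cam
    return u[:2] / u[2]

# Range line endpoints (x, z) on the floor and an assumed 2.5 m wall:
p0, p1, height = (-1.0, 3.0), (1.0, 3.0), 2.5
corners = [np.array([x, y, z])
           for (x, z) in (p0, p1)
           for y in (0.0, -height)]    # floor and top of wall
pixels = [project(c) for c in corners]
# 'pixels' bounds the image patch to associate with the wall surface.
```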

The complete paper can be downloaded from the author archives online at Purdue University.
