What HD mapping brings to autonomous vehicles

ST. LOUIS PARK, Minn. — The world now knows that Tesla’s Elon Musk thinks that high-precision GPS maps for self-driving cars are a “really bad idea.”

During the company’s Autonomy Day in April, Musk made it abundantly clear that too much dependency on HD Maps can turn an autonomous vehicle (AV) into a “system that becomes extremely brittle,” making it more difficult to adapt.

Much of the rest of the automotive industry, however, believes that an AV could use an HD map at least as a backup system.

“HD Maps are all about adding intelligence to improve the performance and safety of automated vehicles,” said Phil Magney, founder and principal at VSI Labs.

As Matt Preyss, product marketing manager at HERE, explained, HD maps are not the familiar GPS helpers used by human drivers. The HD maps in question, loaded with geo-coded metadata, are designed specifically for machines.

The foremost purpose for AVs to use mapping data is, according to Magney, “to add confidence to the system. It takes a load off the computational problem of road parsing.” The layers in HD maps include “precision lane markings, boundaries, geometries, and 3D markers for localization,” he said.

A layered approach for robocar maps (Source: Lyft)
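To make the layered structure Magney describes more concrete, here is a minimal Python sketch of how such map data might be organized. The class and field names are hypothetical illustrations, not any vendor's actual format.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical layered HD-map record, loosely following the layers Magney
# describes: precise lane geometry, boundaries, and 3D landmarks for localization.

Point3D = Tuple[float, float, float]  # x, y, z in meters (local frame)

@dataclass
class LaneSegment:
    lane_id: str
    centerline: List[Point3D]          # precise lane geometry
    left_boundary: List[Point3D]       # lane-marking / boundary polylines
    right_boundary: List[Point3D]
    speed_limit_mps: float

@dataclass
class Landmark:
    landmark_id: str
    position: Point3D                  # 3D marker the vehicle can localize against
    kind: str                          # e.g. "sign", "pole", "barrier"

@dataclass
class HDMapTile:
    tile_id: str
    lanes: List[LaneSegment] = field(default_factory=list)
    landmarks: List[Landmark] = field(default_factory=list)

# Example: one tile with a single straight lane and a sign used as a 3D marker.
tile = HDMapTile(
    tile_id="demo-tile",
    lanes=[LaneSegment(
        lane_id="lane-1",
        centerline=[(0.0, 0.0, 0.0), (50.0, 0.0, 0.0)],
        left_boundary=[(0.0, 1.8, 0.0), (50.0, 1.8, 0.0)],
        right_boundary=[(0.0, -1.8, 0.0), (50.0, -1.8, 0.0)],
        speed_limit_mps=27.0,
    )],
    landmarks=[Landmark("sign-42", (25.0, 4.0, 2.5), "sign")],
)
```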

HD mapping for AVs, as it turns out, is becoming a highly competitive market, not just among traditional navigation map vendors but also tech companies, car OEMs, and startups. In one way or another, all of them are jockeying for data ownership, which is where the high-stakes game is being played today.

EE Times got some help from AV experts at VSI Labs to break down the pros and cons of HD mapping for the future of ADAS and AVs.

Why HD maps are needed

There are many reasons why AVs should leverage HD maps. Above all, HD maps offer the most straightforward way for either an advanced driver-assistance system (ADAS) or an AV to get better at “road parsing” – analyzing ground and aerial images for road segmentation.

Even for an ADAS-equipped car doing basic jobs like lane-keeping, HD maps are effective when lane markers have faded, are covered by snow, or are obscured by other weather conditions.

Combining data from different sensors (computer vision, radar, sonar) should also make it possible to parse the road, but those sensors have their limitations.

“A good example of sensor limitations comes from Tesla accidents,” explained Magney. “In the Mountain View accident, a likely contributing factor was road surface changes. The dark asphalt surface next to the light concrete surface may have caused the computer to interpret this as a lane line leading to the improper trajectory.”

Magney noted, “If Autopilot had a detailed lane model (an HD Map) this accident could have been prevented as the system knows where it should and should not be.”

In the Mountain View accident, a likely contributing factor was the road surface change. The dark surface is asphalt while the light surface is concrete. Autopilot may have misinterpreted the change of surfaces as a lane line, leading to the improper trajectory. If Autopilot had a lane model and was localizing against that lane model, this type of accident could be prevented. This is why Autopilot (and any other L2 system) requires constant driver attention and engagement. (Source: VSI Labs)
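As a hedged illustration of “knowing where it should and should not be,” the sketch below checks an estimated vehicle position against a mapped lane centerline and flags a trajectory that drifts outside the lane. The function names and the 1.8 m half-width are assumptions for illustration, not VSI Labs’ or Tesla’s implementation.

```python
import math
from typing import List, Tuple

Point2D = Tuple[float, float]

def lateral_offset(position: Point2D, centerline: List[Point2D]) -> float:
    """Coarse nearest-vertex distance from the vehicle to the mapped centerline."""
    return min(math.dist(position, p) for p in centerline)

def within_mapped_lane(position: Point2D,
                       centerline: List[Point2D],
                       half_width_m: float = 1.8) -> bool:
    # If the estimated position drifts farther from the mapped centerline than
    # the lane half-width, the trajectory is suspect even when the camera "sees"
    # a plausible lane line (e.g. an asphalt/concrete seam).
    return lateral_offset(position, centerline) <= half_width_m

# Straight 50 m lane sampled every 5 m.
centerline = [(float(x), 0.0) for x in range(0, 51, 5)]
print(within_mapped_lane((20.0, 0.5), centerline))   # True: inside the mapped lane
print(within_mapped_lane((20.0, 3.0), centerline))   # False: outside the mapped lane
```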

Cameras are arguably still the best, most cost-effective sensors, but they struggle with road parsing, Magney cautioned. The problem is that it is computationally expensive, he said. “HD mapping assets alleviate the computational load and add confidence to the environmental model.”

What about lidars, then? Shouldn’t the addition of lidars be able to improve parsing?

Not so fast, said Matthew Linder, senior AV software engineer at VSI Labs. Based on extensive experience with lidars at VSI, he acknowledged that lidar is best for 3D perception and environmental modeling, and that it reliably detects unknown objects and free space. “However, lidar offers limited performance in poor weather,” Linder warned. “For example, lidar fails to detect lane markings if they are completely covered by snow. Lidar is also still very expensive. Lidars (or any sensor for that matter) cannot see through occlusions, like other vehicles, trucks, hills, around a bend or building.” In his opinion, “The map can extend as far as you need, and does not have issues with occlusion.”
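Linder’s point can be made concrete: a map query has no sensor horizon, so the vehicle can ask for road geometry hundreds of meters ahead, beyond a hill or a truck that blocks line of sight. Below is a minimal sketch with hypothetical names, assuming the mapped centerline is stored alongside cumulative arc-length along the road.

```python
from typing import List, Tuple

# Hypothetical look-ahead query: return mapped centerline points within a window
# ahead of the vehicle's current arc-length position along the road. A lidar
# return stops at the first occluder; a map lookup does not.

def lookahead(centerline: List[Tuple[float, float]],
              arclengths: List[float],
              s_now: float,
              horizon_m: float = 300.0) -> List[Tuple[float, float]]:
    return [p for p, s in zip(centerline, arclengths)
            if s_now < s <= s_now + horizon_m]

# 1 km of straight road sampled every 10 m.
centerline = [(float(s), 0.0) for s in range(0, 1001, 10)]
arclengths = [float(s) for s in range(0, 1001, 10)]

# From s = 100 m, the map still reports geometry 300 m ahead, regardless of
# whether a truck or hill blocks the sensors' line of sight.
print(len(lookahead(centerline, arclengths, s_now=100.0)))  # 30 points
```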

When asked about HERE's HD Live Map, Preyss claimed, “We can support any localization solution developed by an OEM. We believe a fusion of GPS/IMU, HD maps and other sensors is the ideal solution for SAE L3 through 5 systems.”

Nonetheless, “OEMs will have different strategies with different price points, roll-outs and technologies,” he acknowledged. Noting that the automotive industry is “largely taking an incremental approach to automation,” Preyss said, “We need to make sure our solution can support their various strategies, not only for today, but for the long-term.”
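The fusion of GPS/IMU, HD maps and other sensors that Preyss describes can be sketched, very roughly, as weighting independent position estimates by their confidence. The toy example below uses inverse-variance weighting on a single lateral offset; the numbers and names are invented for illustration and do not represent HERE’s method.

```python
from typing import Tuple

# Toy 1-D fusion of two lateral-position estimates: one from GPS/IMU dead
# reckoning, one from matching perceived lane markings against the HD map.
# Inverse-variance weighting; all numbers are illustrative only.

def fuse(est_a: float, var_a: float, est_b: float, var_b: float) -> Tuple[float, float]:
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    return fused, 1.0 / (w_a + w_b)

gps_lateral, gps_var = 0.9, 1.0     # meters; GPS/IMU alone is relatively noisy
map_lateral, map_var = 0.3, 0.04    # map matching is tighter when markings are visible

pos, var = fuse(gps_lateral, gps_var, map_lateral, map_var)
print(f"fused lateral offset: {pos:.2f} m (variance {var:.3f})")
```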
