
Why FIR sensing technology is essential for achieving fully autonomous vehicles

Yakov Shaharabani - June 12, 2018

The automotive industry is experiencing an unprecedented influx of new technology. Automakers are promising to deploy fully autonomous vehicles on public roads within the next few years and are predicting that mass-market adoption will not be far behind. But while top-tier automakers and tech companies are eager to accelerate these autonomous innovations, achieving full vehicle autonomy will require a sensing technology that enables cars to “see” the world around them and react better than human drivers do.

Current sensing technologies, like LiDAR, radar, and cameras, have perception problems that require a human driver to be ready to take control of the car at any moment. For this reason, the demands on sensors have only intensified; to achieve Level 3 to Level 5 autonomous driving, vehicles need sensors both in greater quantity and of greater capability. This article explores the sensing capabilities of current solutions, such as radar and LiDAR (light detection and ranging), and explains why FIR (far-infrared) sensing in a fusion solution is ultimately the key to achieving Level 3, 4, and 5 autonomous driving.

Limitations of current sensing technologies

Radar sensors can detect objects that are far away but cannot identify the object. Cameras, on the other hand, can more effectively determine what an object is, but only at a closer range. For these reasons, radar sensors and cameras are used in conjunction to provide autonomous vehicles with more complete detection and coverage: A radar sensor detects an object down the road, and a camera provides a more detailed picture of the object as it gets closer.
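As a rough illustration of that division of labor, the Python sketch below labels a radar return only when a close-range camera detection corroborates it. The data types, ranges, and matching threshold are hypothetical; the article does not describe any specific fusion algorithm.

from dataclasses import dataclass

@dataclass
class RadarReturn:
    range_m: float       # distance to the reflecting object, meters
    bearing_deg: float   # direction relative to the vehicle's heading

@dataclass
class CameraDetection:
    label: str           # e.g., "pedestrian" or "vehicle"
    bearing_deg: float

CAMERA_RANGE_M = 80.0    # assumed range at which the camera can classify
BEARING_GATE_DEG = 2.0   # assumed tolerance for matching the two sensors

def fuse(radar, detections):
    """Attach a camera label to a radar return when the two line up."""
    if radar.range_m <= CAMERA_RANGE_M:
        for det in detections:
            if abs(det.bearing_deg - radar.bearing_deg) <= BEARING_GATE_DEG:
                return f"{det.label} at {radar.range_m:.0f} m"
    return f"unidentified object at {radar.range_m:.0f} m"

print(fuse(RadarReturn(150.0, 1.5), []))  # radar sees it; camera cannot yet
print(fuse(RadarReturn(45.0, 1.5), [CameraDetection("pedestrian", 1.2)]))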

LiDAR sensors have also become an essential component of autonomous vehicles’ detection and coverage capabilities. Like radar, LiDAR sends out signals and measures the distance to an object via a reflection of those signals, but it uses light waves or lasers as opposed to radar’s radio signals. While LiDAR sensors provide a wider field of view than the more directional view of radar and cameras, LiDAR is still cost-prohibitive for mass-market applications. Several companies are attempting to alleviate this issue by producing lower-cost LiDAR sensors, but these low-resolution sensors cannot effectively detect obstacles that are far away, which means reduced reaction time for autonomous vehicles. Unfortunately, simply waiting for the price of LiDAR sensors to fall could slow the mass deployment of autonomous vehicles.
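Both radar and LiDAR are time-of-flight sensors at heart: emit a pulse, time its reflection, and halve the round trip. A minimal Python sketch of that arithmetic follows; it is illustrative only, since real sensors must also cope with noise, multipath, and weak returns.

C_MPS = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s):
    """Distance to the reflector: half the round-trip time at light speed."""
    return C_MPS * round_trip_s / 2.0

# A reflection arriving about 667 ns after the pulse left
# corresponds to a target roughly 100 m away.
print(f"{tof_range_m(667e-9):.1f} m")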


Fig 1: Current sensing technologies suffer from perception problems. CMOS camera, radar, and LiDAR cannot function in dynamic lighting or harsh weather conditions. (Source: AdaSky)

Beyond their incomplete visibility, current sensing technologies also struggle with environmental factors. To achieve Level 5 autonomy, full functionality must be maintained no matter the weather; however, all of today’s sensors are compromised to some degree in adverse weather conditions. For example, while radar can still detect faraway objects in heavy fog, haze, or at night, most cameras have near-field sight limitations that constrain their ability to see in foul weather or darkness. Most sensors are also confounded by sudden changes in lighting. Consider, for instance, a vehicle entering or exiting a tunnel. A human driver’s eyes take a few seconds to adjust to the sudden darkness or bright light, and cameras and LiDAR are no better: they, too, are momentarily blinded by the lighting change.

Accurate image detection is another challenge for today’s sight and perception solutions. Although a camera can successfully detect a person or an animal, its image-processing software cannot always distinguish between a real person or animal and a picture of one on an advertisement, building, or bus.

Emerging requirements

Automakers and AV developers are eager to deploy fully autonomous vehicles on public roads in early 2020. But before any vehicle above Level 2 autonomy can be produced and a full fleet deployed, sensors must eliminate existing vision and perception weaknesses and guarantee complete detection and coverage of a vehicle’s surroundings 24/7, in any environment and condition. Today, one of the main reasons humans must still take control of AVs is that their sensors fail in adverse weather conditions.

Without improved sensor capability and accuracy to provide safe, reliable operation in all weather conditions, fully autonomous cars cannot be brought to mass market.

FIR technology

A sensor that employs FIR technology can overcome the many reliability gaps and perception problems that confound other sensors. FIR has been used for decades in defense, security, firefighting, and construction, making it a mature and proven technology, and it has now been adapted to automotive applications. FIR-based cameras use far-infrared light waves to detect differences in the heat (thermal radiation) naturally emitted by objects and convert this data into an image. Unlike the more common optical sensors used on cars, which capture images perceptible to the human eye, FIR cameras scan the far-infrared spectrum, at wavelengths well beyond visible light, and can thus detect objects that may not otherwise be perceptible to a camera, radar, or LiDAR.


Fig 2: FIR sensors generate a new layer of information, increasing performance for segmentation and providing an accurate analysis of the vehicle’s surroundings. (Source: AdaSky)
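As a toy illustration of turning heat readings into an image, the Python sketch below linearly stretches simulated raw sensor counts into an 8-bit grayscale frame. This is a generic technique; AdaSky’s actual image pipeline is not described here.

import numpy as np

def thermal_to_grayscale(raw):
    """Map raw thermal counts to 0-255 by stretching the observed range."""
    lo, hi = raw.min(), raw.max()
    if hi == lo:                       # flat scene: avoid division by zero
        return np.zeros_like(raw, dtype=np.uint8)
    scaled = (raw.astype(np.float64) - lo) / (hi - lo)
    return (scaled * 255.0).astype(np.uint8)

# Simulated VGA (480 x 640) frame of 14-bit sensor counts.
frame = np.random.randint(8000, 9000, size=(480, 640))
img = thermal_to_grayscale(frame)
print(img.shape, img.dtype, int(img.min()), int(img.max()))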

With a sensitivity of 0.05°C for high-contrast imaging, a VGA-resolution FIR thermal sensor can detect a pedestrian at up to 200 meters. The sensor can then track that pedestrian over time at 30 or 60 fps while also detecting the road ahead.
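A back-of-the-envelope check on that 200-meter figure: assuming a 17-degree horizontal field of view (an illustrative assumption; the optics are not specified here), a VGA sensor still places a useful patch of pixels on a pedestrian at that range.

import math

HFOV_RAD = math.radians(17.0)   # assumed horizontal field of view
H_PIXELS = 640                  # VGA columns
IFOV_RAD = HFOV_RAD / H_PIXELS  # angle covered by one pixel, ~0.46 mrad

def pixels_on_target(size_m, range_m):
    """Pixels subtended by an object of a given size at a given range."""
    return (size_m / range_m) / IFOV_RAD

# A 1.8 m tall, 0.5 m wide pedestrian at 200 m:
print(f"height: {pixels_on_target(1.8, 200.0):.1f} px")  # ~19 px
print(f"width:  {pixels_on_target(0.5, 200.0):.1f} px")  # ~5 px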

As well as capturing the temperature of an object or material, an FIR camera captures its emissivity, that is, how effectively it emits heat. Because every object has a different emissivity, an FIR camera can sense any object in its path. With this information, the camera can create a detailed visual mapping of the roadway that allows the vehicle to operate independently and safely.
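The physics behind that claim can be sketched with the gray-body form of the Stefan-Boltzmann law, M = emissivity * sigma * T^4: two surfaces at the same temperature radiate different amounts of power when their emissivities differ. The emissivity values below are typical textbook figures, not measurements from the article.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 * K^4)

def radiant_exitance(emissivity, temp_k):
    """Total power radiated per unit area of a gray body, in W/m^2."""
    return emissivity * SIGMA * temp_k ** 4

skin = radiant_exitance(0.98, 305.0)     # human skin, roughly 32 C
asphalt = radiant_exitance(0.93, 305.0)  # asphalt at the same temperature
print(f"skin:    {skin:.1f} W/m^2")
print(f"asphalt: {asphalt:.1f} W/m^2")   # ~5% dimmer at equal temperature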

Thermal FIR also detects lane markings and, in most cases, a pedestrian’s orientation (which direction he or she is facing). It can then determine that a pedestrian is stepping off the sidewalk and is about to cross the road from the opposite lane; thus, the FIR sensor can predict whether the vehicle is at risk of hitting the pedestrian.

In response, an autonomous vehicle with FIR sensors would slow down to ensure that there is enough time to brake should the pedestrian make an unpredictable move. This means that everyone arrives safely at their destination.
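To see why 200 meters of detection range leaves comfortable margin for that maneuver, consider a rough stopping-distance estimate; the reaction latency and deceleration figures below are illustrative assumptions, not numbers from the article.

def stopping_distance_m(speed_mps,
                        reaction_s=0.5,   # assumed system latency, seconds
                        decel_mps2=6.0):  # assumed braking deceleration
    """Reaction distance plus braking distance, v^2 / (2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

speed = 100.0 / 3.6  # 100 km/h expressed in m/s
print(f"stopping distance at 100 km/h: {stopping_distance_m(speed):.0f} m")
# ~78 m needed, versus a pedestrian detected at 200 m: ample margin.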
