The automotive industry is experiencing an unprecedented influx of new technology. Automakers are promising to deploy fully autonomous vehicles on public roads within the next few years and are predicting that mass-market adoption will not be far behind. But while top-tier automakers and tech companies are eager to accelerate these autonomous innovations, achieving full vehicle autonomy will require sensing technologies that enable cars to “see” the world around them and react better than human drivers.
Current sensing technologies, like LiDAR, radar, and cameras, have perception problems that require a human driver to be ready to take control of the car at any moment. For this reason, the role of sensors has only intensified: to achieve Level 3-5 autonomous driving, vehicles need sensors both in greater quantity and of greater capability. This article explores the sensing capabilities of current solutions, such as radar and LiDAR (light detection and ranging), and why FIR (far-infrared) cameras in a fusion solution are ultimately the key to achieving Level-3, 4, and 5 autonomous driving.
Limitations of current sensing technologies
Radar sensors can detect objects that are far away but cannot identify the object. Cameras, on the other hand, can more effectively determine what an object is, but only at a closer range. For these reasons, radar sensors and cameras are used in conjunction to provide autonomous vehicles with more complete detection and coverage: A radar sensor detects an object down the road, and a camera provides a more detailed picture of the object as it gets closer.
LiDAR sensors have also become an essential component of autonomous vehicles’ detection and coverage capabilities. Like radar, LiDAR sends out signals and measures the distance to an object via the reflection of those signals, but it uses light waves (lasers) as opposed to radar’s radio signals. While LiDAR sensors provide a wider field of view than the more directional view of radar and cameras, LiDAR is still cost-prohibitive for mass-market applications. Several companies are attempting to alleviate this issue by producing lower-cost LiDAR sensors, but these low-resolution sensors cannot effectively detect obstacles that are far away, which means reduced reaction time for autonomous vehicles. Unfortunately, simply waiting for the price of LiDAR sensors to fall could slow the mass deployment of autonomous vehicles.
Fig 1: Current sensing technologies suffer from perception problems. CMOS cameras, radar, and LiDAR cannot function in dynamic lighting or harsh weather conditions. (Source: AdaSky)
Beyond their lack of complete visibility, current sensing technologies face another obstacle: environmental factors. To achieve Level-5 autonomy, full functionality must be realized no matter the weather; however, all of today’s sensors are compromised to some degree in adverse conditions. For example, while radar can still detect faraway objects in heavy fog, haze, or at night, most cameras have near-field sight limitations that constrain their ability to see in foul weather or darkness. Most sensors are also confounded by sudden changes in lighting. For instance, consider a vehicle entering or exiting a tunnel. A human driver’s eyes take a few seconds to adjust to the sudden darkness or bright light, and cameras and LiDAR are no better; they, too, are momentarily blinded by the lighting change.
Accurate image detection is another challenge for today’s sight and perception solutions. Although a camera can successfully detect a person or an animal, its image-processing software is not always able to distinguish between a real person or animal and a picture of one on an advertisement, building, or bus.
Automakers and AV developers are eager to deploy fully autonomous vehicles on public roads in early 2020. But before any vehicle beyond Level-2 autonomy can be produced and a full fleet deployed, sensors must eliminate existing vision and perception weaknesses and guarantee complete detection and coverage of a vehicle’s surroundings 24/7, in any environment and condition. Today, one of the main reasons humans must still take control of AVs is that their sensors fail amid adverse weather conditions.
Without improved sensor capability and accuracy for providing safe and reliable operations in all weather conditions, fully autonomous cars cannot be brought to mass market.
A sensor that employs FIR technology can overcome the many reliability gaps and perception problems that confound other sensors. FIR has been used for decades in defense, security, firefighting, and construction, making it a mature and proven technology, and that proven technology has now been adapted to automotive applications. FIR-based cameras use far-infrared light waves to detect differences in heat (thermal radiation) naturally emitted by objects and convert this data into an image. Unlike the more common optical sensors used on cars that capture images perceptible to the human eye, FIR cameras scan the infrared spectrum just above visible light and can, thus, detect objects that may not otherwise be perceptible to a camera, radar, or LiDAR.
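To make the heat-to-image step concrete, here is a minimal Python sketch of how a grid of per-pixel temperature readings could be mapped to a grayscale frame. This is not AdaSky’s actual pipeline; the function name and the simple min/max normalization are purely illustrative.

```python
def thermal_to_image(frame_celsius):
    """Map a 2-D grid of per-pixel temperatures (deg C) to 8-bit
    grayscale by normalizing over the frame's min/max range."""
    flat = [t for row in frame_celsius for t in row]
    t_min, t_max = min(flat), max(flat)
    if t_max == t_min:                      # flat scene: avoid divide-by-zero
        return [[0] * len(row) for row in frame_celsius]
    scale = 255.0 / (t_max - t_min)
    return [[round((t - t_min) * scale) for t in row] for row in frame_celsius]

# A warm pedestrian (37 C) stands out sharply against a cool road (10 C).
scene = [[10.0, 10.0, 10.0],
         [10.0, 37.0, 10.0],
         [10.0, 10.0, 10.0]]
img = thermal_to_image(scene)
print(img[1][1], img[0][0])   # 255 0
```

Because the contrast comes from emitted heat rather than reflected light, the same mapping works identically at noon, at midnight, or in fog.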
Fig 2: FIR sensors generate a new layer of information, increasing performance for segmentation and providing an accurate analysis of the vehicle’s surroundings. (Source: AdaSky)
With a sensitivity of 0.05°C for high-contrast imaging, a VGA thermal sensor with FIR can detect a pedestrian at up to 200 meters. The FIR sensor can track the pedestrian over time at 30 or 60 fps while also detecting the road ahead.
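The 200-meter figure can be sanity-checked with simple geometry. The sketch below assumes a 24-degree horizontal field of view, a value chosen purely for illustration (no specific camera’s spec is implied), and estimates how many of the 640 horizontal VGA pixels a half-meter-wide pedestrian covers at that range.

```python
import math

# Back-of-envelope check: how many VGA pixels does a pedestrian
# subtend at 200 m? HFOV_DEG is an assumed illustrative value,
# not a published camera specification.
SENSOR_WIDTH_PX = 640          # VGA horizontal resolution
HFOV_DEG = 24.0                # assumed horizontal field of view
PEDESTRIAN_WIDTH_M = 0.5
RANGE_M = 200.0

angular_width_deg = math.degrees(2 * math.atan(PEDESTRIAN_WIDTH_M / 2 / RANGE_M))
pixels_on_target = angular_width_deg * SENSOR_WIDTH_PX / HFOV_DEG
print(f"{pixels_on_target:.1f} pixels")   # about 3.8 pixels with these assumptions
```

A few pixels of width is enough for detection and tracking, though classification typically needs the target to come closer; this is why the article pairs long-range detection with frame-to-frame tracking.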
As well as capturing the temperature of an object or material, a FIR camera captures an object’s emissivity (how effectively it emits heat). Since every object has a different emissivity, a FIR camera can sense any object in its path. With this information, the camera can create a visual representation of the roadway, allowing the vehicle to operate independently and safely.
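The physics behind this is the Stefan-Boltzmann law: the power a surface radiates grows with its emissivity and the fourth power of its absolute temperature, so surfaces differing in either quantity look different to a FIR camera. A short sketch, with emissivity and temperature values that are illustrative rather than measured:

```python
STEFAN_BOLTZMANN = 5.670374419e-8  # W / (m^2 K^4)

def radiated_power(temp_kelvin, emissivity):
    """Thermal power radiated per unit area (Stefan-Boltzmann law)."""
    return emissivity * STEFAN_BOLTZMANN * temp_kelvin ** 4

# Illustrative values: warm, high-emissivity skin vs cooler asphalt.
skin = radiated_power(310.0, 0.98)     # ~37 C, emissivity ~0.98
asphalt = radiated_power(300.0, 0.93)  # ~27 C, emissivity ~0.93
print(f"skin {skin:.0f} W/m^2 vs asphalt {asphalt:.0f} W/m^2")
```

The roughly 20 percent difference in radiated power here is what produces the high-contrast silhouette of a pedestrian against the road, with no illumination required.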
In most cases, thermal FIR also detects lane markings and the orientation of a pedestrian (which direction he or she is facing). It can then determine that the pedestrian is stepping off the sidewalk and is about to cross the road from the opposite lane; thus, the FIR sensor can predict whether the vehicle is at risk of hitting the pedestrian.
In response, an autonomous vehicle with FIR sensors would slow down to ensure that there is enough time to brake should the pedestrian make an unpredictable move. This means that everyone arrives safely at their destination.
FIR technology is necessary for the deployment and mass-market adoption of autonomous vehicles because it is the only sensor capable of providing the complete and reliable coverage needed to make AVs safe. BMW, a major automotive OEM, is using thermal-imaging cameras as part of the sensor suites on all of its self-driving prototypes.
While other optical sensors used on cars only capture images visible to the human eye, FIR cameras provide a more comprehensive layer of perception. By scanning the infrared spectrum just above visible light, FIR cameras detect objects that a camera, radar, or LiDAR may miss. Moreover, unlike radar and LiDAR sensors that must transmit and receive signals, a FIR camera only collects signals, making it a “passive” technology. With no moving parts, a FIR camera can provide complete coverage of an AV’s surroundings simply by sensing signals from objects radiating heat.
Currently, there are three leading FIR sensor companies: Autoliv, FLIR Systems, and AdaSky. AdaSky is an Israeli startup that recently developed Viper, a high-resolution thermal camera that passively collects FIR signals, converts them to high-resolution VGA video, and applies deep-learning computer-vision algorithms to sense and analyze its surroundings.
Fig 3: A side-by-side comparison of a state-of-the-art camera with low-light sensitivity and Viper shows that objects undetected by current sensing technology are visible with a FIR solution. (Source: AdaSky)
With this advanced technology, FIR cameras can overcome the obstacles presented by complicated weather and lighting conditions that confound other sensing technologies. Compared to FIR cameras, today’s sensors are limited and do not appear capable of delivering full autonomy. Even working together, current sensors cannot provide total or accurate coverage of a vehicle’s surroundings in every situation. Only FIR sensors can generate the needed, deeper layer of information, originating from a different band of the electromagnetic spectrum, to significantly increase performance for classification, identification, and detection of objects and of the vehicle’s surroundings, at both near and far range.
By creating a visual representation of the vehicle’s surroundings, FIR fills the gaps left by other sensors to produce total detection and coverage in any weather condition and in any environment, whether urban, rural, or highway driving, or a combination of all three. For example, on the highway, long-range sensing is crucial so that if an object is detected, there is ample time for the vehicle to decide to stop, even while traveling at high speed. In urban areas, a wider field of view takes priority so the vehicle can detect pedestrians and cyclists on the sidewalk and at crosswalks.
While FIR is needed for Level-3 autonomous solutions, it is an essential enabler of Level-4 and above. To achieve Level-5 autonomy and, ultimately, bring fully autonomous vehicles to the mass market, AV developers anticipate that each vehicle will need several FIR cameras to enable wide coverage and a comprehensive understanding of its surroundings. Automakers favor using multiple FIR sensors because doing so delivers the highest level of safety. In fact, the U.S. Department of Transportation’s Federal Automated Vehicles policy requires redundancy for certain critical AV systems, and most OEMs and Tier-1 suppliers are gearing up to use multiple sensors and other components as fail-safe measures.
Every sensor has scenarios in which it is weak; no sensor, including FIR, is 100 percent accurate 100 percent of the time. This is why it is so crucial to have redundancy through a sensor-fusion solution that combines LiDAR, radar, and FIR technology. As the diagram below shows, layering all of these technologies into a sensor-fusion solution provides coverage for all areas and every scenario. This is how FIR will enable fully autonomous driving.
Fig 4: Sensor modalities for autonomous vehicles: coverage from CMOS, radar, LiDAR, and FIR. (Source: AdaSky)
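One simple way to see why this layered redundancy helps: if each modality is treated as an independent witness that either reports a detection confidence or is blinded by current conditions, the chance that at least one sensor catches an obstacle stays high even when several degrade. The function and the confidence values below are purely illustrative, not drawn from any real fusion stack.

```python
def fused_confidence(readings):
    """Probability that at least one sensor detects the obstacle,
    treating available sensors as independent witnesses."""
    p_all_miss = 1.0
    for p in readings.values():
        if p is not None:          # skip sensors blinded by conditions
            p_all_miss *= (1.0 - p)
    return 1.0 - p_all_miss

# Dense fog (illustrative numbers): the camera is blinded, LiDAR is
# badly degraded, but radar and FIR carry the load.
fog = {"camera": None, "lidar": 0.2, "radar": 0.7, "fir": 0.9}
print(round(fused_confidence(fog), 3))   # 0.976
```

Under this independence assumption, no single sensor needs to be perfect; the fused system only fails when every available modality misses at once, which is the redundancy argument behind the policy requirements cited above.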
Automakers’ goal of deploying autonomous vehicles on public roads by the beginning of the next decade cannot be achieved with the limited capabilities of today’s sensing solutions. Their persistent perception problems mean that vehicles cannot operate safely and reliably without the monitored control of a human driver, making true Level-5 autonomy impossible. FIR cameras are the only technology that can deliver complete classification, identification, and detection of a vehicle’s surroundings in any environment or weather condition and are, thus, the only sensing technology that can make the mass market adoption of fully autonomous vehicles a reality.
Yakov Shaharabani, CEO and board member of AdaSky, is a strategic thinker with extensive experience leading in complex environments and highly tense situations. Much of Yakov’s leadership and strategic experience was gained during 34 years of service in the Israeli Air Force, rising from a young pilot through the positions of squadron leader and base commander to the most senior roles, as one of the very few generals in the Israeli Air Force. Yakov is the founder of SNH Strategies LTD, a company focusing on strategic consulting and strategic-leadership education. He earned his B.S. with honors in economics and computer science, and an M.A. in National Resource Strategy (cum laude) from The National Defense University (NDU), Washington, D.C.