
Vision system fuses multiple sensor data sources

January 15, 2019


LONDON — Israeli startup Vayavision has launched a software-based autonomous vehicle environmental perception engine which upscales raw data from camera, lidar and radar sensors to provide what it claims is a more accurate 3D model than object fusion-based platforms.

The company’s CEO, Ronny Cohen, told EE Times that today’s object-led fusion of sensor data is not reliable and can lead to objects being missed. Roads are full of unexpected objects that are absent from training data sets, even when those sets are captured over millions of kilometers of driving.

Cohen said most current-generation autonomous driving solutions are based on object fusion, in which each sensor independently detects objects and the system then reconciles which detections are correct. This can produce inaccurate detections and a high rate of false alarms, and ultimately accidents.


Vayavision’s raw data fusion with upsampling. (Source: Vayavision)

It’s thought that more advanced perception solutions like raw data fusion could help better model the 3D environment. Cohen said cameras don’t see depth, and distance sensors such as lidars and radars are usually low resolution. Vayavision's VAYADrive 2.0 takes the raw data, upsamples the sparse samples from the distance sensors, and assigns distance information to every pixel in the high-resolution camera image. This, the company said, allows autonomous vehicles to receive crucial information on an object’s size and shape, to separate out every small obstacle on the road, and to accurately define the shapes of vehicles, humans, and other objects on the road.
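The per-pixel depth assignment described above can be illustrated with a minimal nearest-neighbor sketch: each camera pixel inherits the depth of the closest sparse lidar/radar return. This is only a toy stand-in, since Vayavision's actual patented upsampling algorithms are not public; the function name and data layout here are illustrative assumptions.

```python
import numpy as np

def upsample_depth(height, width, samples):
    """Assign every pixel in a (height x width) camera image the depth of
    the nearest sparse distance sample.

    samples: list of (row, col, depth_m) tuples from a distance sensor.
    Returns a dense (height x width) depth map in meters.
    """
    rows = np.array([s[0] for s in samples], dtype=float)
    cols = np.array([s[1] for s in samples], dtype=float)
    depths = np.array([s[2] for s in samples], dtype=float)

    dense = np.empty((height, width))
    for y in range(height):
        for x in range(width):
            # Squared pixel distance to every sparse sample;
            # take the depth of the closest one.
            i = np.argmin((rows - y) ** 2 + (cols - x) ** 2)
            dense[y, x] = depths[i]
    return dense

# Toy example: a 4x6 image with three sparse returns.
depth = upsample_depth(4, 6, [(0, 0, 10.0), (3, 5, 2.5), (1, 3, 7.0)])
print(depth.shape)   # (4, 6): one depth value per camera pixel
print(depth[3, 5])   # 2.5: a pixel at a sample location keeps its depth
```

A production system would use a more sophisticated interpolation that respects object boundaries detected in the camera image, rather than plain nearest-neighbor, but the output shape is the same: a depth value for every pixel.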

The VAYADrive 2.0 software solution combines artificial intelligence (AI), analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware, and is compatible with a wide range of cameras, lidars, and radars. The result is an accurate 3D environmental model of the area around the self-driving vehicle. The company said the product breaks new ground in several categories of AV environmental perception, including raw data fusion, object detection, classification, SLAM, and movement tracking, providing crucial information about dynamic driving environments.

Cohen emphasized that the key to the company’s solution is its unique set of patented algorithms, which upscale and generate high-resolution images from sparse data.

>> Continue reading this article on our sister site, EE Times: "Startup Upscales Raw Sensor Data for Accurate 3D Models."
