Vision system fuses multiple sensor data sources

LONDON — Israeli startup Vayavision has launched a software-based autonomous vehicle environmental perception engine that upscales raw data from camera, lidar, and radar sensors to provide what it claims is a more accurate 3D model than object-fusion-based platforms.

The company’s CEO, Ronny Cohen, told EETimes that today’s object-led fusion of sensor data is not reliable and can lead to objects being missed. Roads are full of unexpected objects that are absent from training data sets, even when those sets are captured while travelling millions of kilometers.

Cohen said most current-generation autonomous driving solutions are based on object fusion, in which each sensor detects objects independently and the system then reconciles which detections are correct. This can produce inaccurate detections and a high rate of false alarms, and ultimately accidents.
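The failure mode Cohen describes can be sketched in a few lines. The voting scheme below is a hypothetical simplification, not Vayavision's or any competitor's actual algorithm: each sensor emits its own object list, and a detection survives only if enough sensors agree, so a real obstacle seen by a single sensor gets voted away.

```python
def object_fusion(detections_per_sensor, min_votes=2):
    """Toy object-level fusion: keep objects that enough sensors agree on.

    detections_per_sensor: list of sets of object labels, one set per sensor.
    """
    votes = {}
    for detections in detections_per_sensor:
        for obj in detections:
            votes[obj] = votes.get(obj, 0) + 1
    return {obj for obj, v in votes.items() if v >= min_votes}

# Hypothetical scene: only the camera picks up a small piece of debris.
camera = {"car", "pedestrian", "tire_debris"}
lidar = {"car", "pedestrian"}
radar = {"car"}

fused = object_fusion([camera, lidar, radar])
# "tire_debris" is discarded despite being a real obstacle -- a missed detection
```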

Vayavision’s raw data fusion with upsampling. (Source: Vayavision)

More advanced perception approaches such as raw data fusion could help build a better model of the 3D environment. Cohen said cameras don’t see depth, and distance sensors — such as lidars and radars — are usually low resolution. Vayavision's VAYADrive 2.0 takes the raw data, upsamples the sparse samples from the distance sensors, and assigns distance information to every pixel in the high-resolution camera image. This, the company said, allows autonomous vehicles to receive crucial information about an object’s size and shape, to separate out every small obstacle on the road, and to accurately define the shapes of vehicles, humans, and other objects on the road.
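The upsampling step above can be illustrated with a minimal sketch. Vayavision's patented algorithms are not public, so this uses plain nearest-neighbor interpolation as an assumed stand-in: each camera pixel borrows the depth of the closest projected lidar return, producing a dense depth map from sparse samples.

```python
import numpy as np

def upsample_depth(sample_pixels, sample_depths, height, width):
    """Assign a depth to every camera pixel from sparse distance samples.

    sample_pixels: (N, 2) array of (row, col) pixel coordinates of lidar hits
                   already projected into the camera image.
    sample_depths: (N,) array of measured depths in meters.
    Returns an (height, width) dense depth map via nearest-neighbor lookup --
    a simplified stand-in for Vayavision's proprietary upsampling.
    """
    rows, cols = np.mgrid[0:height, 0:width]
    pixels = np.stack([rows.ravel(), cols.ravel()], axis=1)  # (H*W, 2)
    # Squared distance from every pixel to every sample (fine for small demos)
    d2 = ((pixels[:, None, :] - sample_pixels[None, :, :]) ** 2).sum(axis=2)
    nearest = d2.argmin(axis=1)  # index of the closest lidar sample per pixel
    return sample_depths[nearest].reshape(height, width)

# Toy example: 3 lidar returns projected into a 4x4 camera image
points = np.array([[0, 0], [3, 3], [0, 3]])
depths = np.array([10.0, 2.5, 7.0])
dense = upsample_depth(points, depths, 4, 4)  # every pixel now carries a depth
```

A production system would interpolate more carefully (respecting image edges and object boundaries), but the input/output shape of the problem is the same: sparse depth in, per-pixel depth out.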

The VAYADrive 2.0 software solution combines artificial intelligence (AI), analytics, and computer vision technologies with computational efficiency to scale up the performance of AV sensor hardware, and is compatible with a wide range of cameras, lidars, and radars. This provides an accurate 3D environmental model of the area around the self-driving vehicle. The company said it breaks new ground in several categories of AV environmental perception: raw data fusion, object detection, classification, SLAM (simultaneous localization and mapping), and motion tracking, providing crucial information about dynamic driving environments.

Cohen emphasized that the key to the company’s solution is its unique set of patented algorithms, which upscale and generate high-resolution images from sparse data.
