Giving AVs a better sense of hearing

If drivers can hear a siren, why can’t autonomous vehicles do the same?

In emergency situations such as severe traffic accidents, every second counts, and a sufficiently wide rescue lane can make the difference between life and death. If drivers can hear a siren, why can’t autonomous vehicles do the same? Cairo, Egypt-based Avelabs has developed a sensor solution that gives vehicles the sense of hearing to complement vision and improve autonomous driving systems. 

(Image: ambulance)

“Vision is our most important sense when evaluating the environment,” said Amr Abdelsabour, director of Product Management at Avelabs, in a panel session at this year’s AutoSens Brussels. “However, as human drivers, we don’t just depend on vision. When we are driving, we depend on our hearing as well. There is a lot of information that we can hear but not see, like a siren coming from the back. Or, if we are driving into a blind intersection and a car is coming, we can’t really see it, but we can hear it.”

At AutoSens, Avelabs introduced AutoHears, an acoustic sensing system that detects, classifies, and localizes sounds to help understand the complex environment around the vehicle. AutoHears, which includes the hardware, the mechanical enclosure, and the software that runs the sensing features, aims to perform emergency-vehicle, obscured-field, natural-disaster (e.g. rockslides), and safety-event (e.g. nearby collisions, gunshots, explosions) detection, as well as vehicle self-diagnosis and speech recognition.
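As a rough illustration of what output such a system might expose to the rest of the vehicle, the Python sketch below defines a hypothetical detection-event structure mirroring the feature list above. The class names and fields are illustrative assumptions, not AutoHears’ actual interface.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical event taxonomy mirroring the feature list above;
# names are illustrative, not AutoHears' actual API.
class SoundClass(Enum):
    EMERGENCY_VEHICLE = auto()
    OBSCURED_ROAD_USER = auto()
    NATURAL_DISASTER = auto()       # e.g. rockslide
    SAFETY_EVENT = auto()           # e.g. nearby collision, gunshot, explosion
    VEHICLE_SELF_DIAGNOSIS = auto()
    SPEECH = auto()

@dataclass
class AcousticDetection:
    sound_class: SoundClass
    bearing_deg: float              # estimated direction of arrival
    confidence: float               # 0..1

print(AcousticDetection(SoundClass.EMERGENCY_VEHICLE, bearing_deg=178.0, confidence=0.92))
```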

In a follow-up discussion with EE Times Europe, Abdelsabour explained what it takes to give vehicles the sense of hearing, how software and hardware depend on each other, where and how the data fusion process is performed, and when we can expect AutoHears to hit the road.

EE Times Europe: Could you describe the types of sounds that AutoHears can and can’t detect? 

Amr Abdelsabour: We started with running vehicle sounds (e.g. tires, engine, brakes, and aerodynamic sounds), as well as horns and sirens of different standards worldwide. These are the classes that have been tested and demonstrated so far. We are currently working on adding new classes such as natural disasters and collision detection, but they are still in the feature development phase. A roadmap for feature development is under construction.

EE Times Europe: AutoHears detects sounds from all angles. Are there any physical limitations?

Abdelsabour: AutoHears can detect sounds from all angles, and not only that, but also sounds coming from behind walls and other obstructions. There are, of course, physical limitations. Sound measurement is a relative sensing process: sound is sensed relative to its environment. This means that if the environment is quiet, AutoHears can detect faint sounds such as bicycles and even footsteps. However, if the environment is noisy, AutoHears can only detect the most significant sounds. For example, if a loud siren were active nearby, we would not be able to detect the motor sounds of other vehicles, since the loud sound would mask the quiet one. Nevertheless, we are working on quantifying our exact physical limits in objective numbers so we can communicate reliable specifications to our customers.
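A minimal sketch of the masking effect Abdelsabour describes, assuming a simple signal-to-noise threshold rule; the numbers and the rule itself are illustrative assumptions, not Avelabs’ actual detection criteria.

```python
def detectable(target_spl_db: float, ambient_spl_db: float, min_snr_db: float = 6.0) -> bool:
    """Hypothetical rule of thumb: a sound is detectable only if it exceeds
    the ambient level by some signal-to-noise margin (values are illustrative)."""
    return (target_spl_db - ambient_spl_db) >= min_snr_db

# Quiet street: footsteps (~50 dB SPL) against ~40 dB ambient -> detectable
print(detectable(50, 40))   # True
# Loud siren nearby (~100 dB ambient): a passing car's motor (~70 dB) is masked
print(detectable(70, 100))  # False
```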

EE Times Europe: What about the classification of sounds?

Abdelsabour: Classification of sounds is a complex process, especially when it comes to non-standardized sounds. For standardized sounds such as sirens, the classification process is fairly straightforward and can be done using model-based algorithms. However, detecting a running vehicle is more complex, because it is a non-standard combination of sounds from different physical components that together create the final sound our ears or sensors hear. This is where artificial intelligence comes into play: machine-learning models trained on collected data detect and classify these sounds according to what they have learned. We are proud to say that, in AutoHears, we have deployed a combination of model-based algorithms and machine learning to classify sounds, depending on the target sounds to be detected.
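To make the hybrid approach concrete, here is a small sketch that first checks an audio frame against assumed standardized siren frequency bands (model-based) and only then falls back to a placeholder learned classifier. The band values and the `ml_model` interface are hypothetical, not Avelabs’ implementation.

```python
import numpy as np

SIREN_BANDS_HZ = [(650, 750), (950, 1050)]  # illustrative two-tone bands, not a real standard

def dominant_freq(frame: np.ndarray, sample_rate: int) -> float:
    """Return the frequency with the most energy in one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

def classify(frame: np.ndarray, sample_rate: int, ml_model=None) -> str:
    """Model-based check for standardized tones first; fall back to a
    learned classifier (placeholder) for non-standard vehicle sounds."""
    f0 = dominant_freq(frame, sample_rate)
    if any(lo <= f0 <= hi for lo, hi in SIREN_BANDS_HZ):
        return "siren"
    if ml_model is not None:
        return ml_model.predict(frame)          # hypothetical trained model
    return "unknown"

# A synthetic 700 Hz tone stands in for one phase of a two-tone siren
sr = 16000
t = np.arange(sr) / sr
print(classify(np.sin(2 * np.pi * 700 * t), sr))  # "siren"
```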

EE Times Europe: How does audio data fuse with image data from cameras or other sensors embedded in the car?

Abdelsabour: As is the case with human drivers, sound complements vision. This is how we see AutoHears, and we have developed it accordingly. Because we are concerned mainly with the acoustic sensing part, we deliver the raw acoustic sensing information, which can be fused with other sensors such as cameras and radars to classify and localize objects, using the strengths of each sensor and overcoming their weaknesses. For example, combining a radar, a camera, and AutoHears can lead to the following detection of a vehicle in our blind spot: the radar detects the obstacle and accurately estimates its distance (radars are highly reliable in that respect); the camera classifies the object (if a camera is looking in the direction where the target vehicle resides); and AutoHears confirms the detection with its own classification and localization of the vehicle, adding whether that vehicle is making any sounds, such as honking a horn or emitting a siren. Together, the sensors make the fusion a highly conclusive reconstruction of the surrounding environment.
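A minimal sketch of that fusion pattern, with hypothetical detection types (the names and fields are assumptions, not Avelabs’ interfaces): radar contributes range, the camera contributes the class label, and the acoustic sensor confirms the bearing and adds the sound event.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RadarDet:
    range_m: float
    bearing_deg: float

@dataclass
class CameraDet:
    label: str
    confidence: float

@dataclass
class AcousticDet:
    bearing_deg: float
    sound_event: Optional[str]    # e.g. "horn", "siren", or None

@dataclass
class FusedObject:
    range_m: float                # trust radar for distance
    bearing_deg: float            # average of radar and acoustic bearings
    label: str                    # camera class, confirmed acoustically
    sound_event: Optional[str]

def fuse(radar: RadarDet, camera: Optional[CameraDet], acoustic: AcousticDet) -> FusedObject:
    """Illustrative fusion rule: use each sensor for what it does best."""
    label = camera.label if camera else "vehicle (acoustic only)"
    bearing = (radar.bearing_deg + acoustic.bearing_deg) / 2.0
    return FusedObject(radar.range_m, bearing, label, acoustic.sound_event)

# Blind-spot example: radar sees an object at 12 m, camera says "car", AutoHears hears a horn
print(fuse(RadarDet(12.0, 95.0), CameraDet("car", 0.9), AcousticDet(93.0, "horn")))
```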

EE Times Europe: Why did you decide to build a complete system? Why was it essential to tackle all software and hardware aspects?


(Image: Amr Abdelsabour, Avelabs)

Abdelsabour: AutoHears is one of the first detection systems of its kind: an acoustic detection system for vehicles. Since Avelabs is a software company, we initially wanted to focus only on the sensing features from a software perspective and not on the hardware. However, without sensing hardware, there can be no sensing features. The hardware is the main enabler of the sensing features: the sensor is not as simple as putting a microphone in the vehicle; the hardware must be designed carefully to enable accurate localization of the surroundings. To localize an object, the localization algorithms rely on physical factors such as the sound’s time difference of arrival, which can only be measured when the hardware is designed to support it. Several hardware factors are involved, such as the number of microphones, the distance between them, and their placement on the vehicle. All of these requirements forced us to design and build the hardware ourselves to enable the acoustic sensing features we offer. Simply put, no company offers external acoustic detection hardware for vehicles, which is why we had to develop it ourselves.
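The dependence of time-difference-of-arrival localization on microphone spacing can be illustrated with the textbook far-field formula below. This is a generic two-microphone example, not Avelabs’ localization algorithm.

```python
import math

SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 °C

def bearing_from_tdoa(tdoa_s: float, mic_spacing_m: float) -> float:
    """Far-field angle of arrival (degrees from broadside) for a two-microphone
    pair, from the time difference of arrival. A standard textbook formula used
    only to show why microphone spacing matters; not Avelabs' algorithm."""
    ratio = SPEED_OF_SOUND_M_S * tdoa_s / mic_spacing_m
    ratio = max(-1.0, min(1.0, ratio))  # clamp numerical noise
    return math.degrees(math.asin(ratio))

# A 0.3 ms delay across microphones 0.2 m apart -> roughly 31 degrees off broadside
print(round(bearing_from_tdoa(0.3e-3, 0.2), 1))
```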

EE Times Europe: Can you give me some details about the acoustic sensor itself? And on the CPU where the algorithm is running?

Abdelsabour: We decided on a centralized architecture for the sensor and processor system. This follows the trend automotive companies are currently taking, which is to rely on sensors that capture raw data (cameras, radars, …). The raw data is then sent to a centralized domain controller where the sensor fusion takes place. That’s why we built the acoustic sensor as a raw-data sensor, capturing all the acoustic information and sending it to the centralized domain controller where the sensing algorithms run. As mentioned, we designed the acoustic sensor ourselves, but we use off-the-shelf automotive domain controllers such as Xilinx FPGAs and the TI ADAS TDA SoC as the processors that run our algorithms. However, because each customer uses their own domain controller, we treat these processors only as reference hardware, since our software can be deployed on any type of domain controller given the necessary customizations.
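A schematic sketch of the raw-data/centralized-processing split described here, with invented class names standing in for the sensor and domain-controller roles; it is not the actual AutoHears software.

```python
import numpy as np

class RawAcousticSensor:
    """Hypothetical raw-data sensor: captures multi-channel audio frames and
    forwards them unprocessed, mirroring the camera/radar raw-data pattern."""
    def __init__(self, channels: int = 4, sample_rate: int = 16000, frame_ms: int = 20):
        self.channels = channels
        self.samples_per_frame = sample_rate * frame_ms // 1000

    def read_frame(self) -> np.ndarray:
        # Placeholder: real hardware would deliver microphone samples here.
        return np.zeros((self.channels, self.samples_per_frame), dtype=np.int16)

class DomainController:
    """All sensing algorithms run centrally on the domain controller."""
    def process(self, frame: np.ndarray) -> dict:
        energy = float(np.mean(frame.astype(np.float32) ** 2))
        return {"channels": frame.shape[0], "frame_energy": energy}

sensor, controller = RawAcousticSensor(), DomainController()
print(controller.process(sensor.read_frame()))
```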

EE Times Europe: Why do you say AutoHears is “hardware-dependent”?

Abdelsabour: AutoHears, as a sensor and as sensing algorithms, has generic components and hardware-specific components, depending on the features desired by the customer and the processing controller the customer uses. For example, if the customer only wants the direction of a sound event (without the distance to the object emitting the sound), it is only necessary to use one sensor. But if the customer would also like to know the object’s distance, it is necessary to use multiple sensors to triangulate it. That is one example of a hardware-dependent feature.
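A toy 2-D example of why a single sensor yields only direction while two sensors allow distance by triangulation; the geometry is generic and the function is an illustration, not Avelabs’ method.

```python
import math

def triangulate(p1, bearing1_deg, p2, bearing2_deg):
    """Intersect two bearing rays from two sensor positions to estimate the
    sound source location (2-D, illustrative geometry only).
    Bearings are measured from the +x axis."""
    x1, y1 = p1
    x2, y2 = p2
    d1x, d1y = math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg))
    d2x, d2y = math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg))
    denom = d1x * d2y - d1y * d2x
    if abs(denom) < 1e-9:
        return None  # parallel rays: one bearing gives direction but no range
    t = ((x2 - x1) * d2y - (y2 - y1) * d2x) / denom
    return (x1 + t * d1x, y1 + t * d1y)

# Two sensors 2 m apart both hear a source; crossing their bearings yields its position
print(triangulate((0.0, 0.0), 45.0, (2.0, 0.0), 135.0))  # approximately (1.0, 1.0)
```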

The other side of the hardware dependency is the domain controller used to process the sensing features. The performance of our features depends on the processors that run them and their capabilities. There is a trade-off between performance and the processing requirements on the hardware. For example, if we want AutoHears to detect with a one-degree resolution, this requires more processing resources; if we decrease the desired performance, the processing requirements decrease as well. Additionally, each new hardware platform comes with some hardware-specific customizations for the microcontroller abstraction layer, such as the AutoHears sensor drivers, which would be implemented into the customer’s basic software environment.
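As a back-of-the-envelope illustration of the resolution-versus-compute trade-off, assume a scan that evaluates one hypothesis per candidate direction (an assumption about the algorithm class, not a statement about AutoHears): finer angular resolution means proportionally more work per frame.

```python
# Illustrative only: real cost depends on the actual algorithms and hardware.
def directions_per_frame(resolution_deg: float, field_of_view_deg: float = 360.0) -> int:
    """Number of direction hypotheses evaluated per frame for a full scan."""
    return int(field_of_view_deg / resolution_deg)

for res in (5.0, 2.0, 1.0):
    print(f"{res:>4} deg resolution -> {directions_per_frame(res)} direction hypotheses per frame")
```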

EE Times Europe: Where are you in terms of development? When do you plan to test AutoHears on public roads? When do you expect AutoHears to be in production? 

Abdelsabour: AutoHears can be considered to be in the product development phase. We have already proven the concept from a technical and financial perspective and performed demonstrations and tests to prove feasibility, and we are currently working on “productizing” the development. This includes public-road validation as well as acquiring automotive certifications, the two steps needed to go from product development to commercialization and be production-ready.

EE Times Europe: Do you have early customers testing the solution?

Abdelsabour: Although we only started announcing the product at AutoSens this September, we are already in discussions with customers about testing the solution. As we work to launch a new product into the automotive market, we hope to rely on our customers and partners to learn more about market expectations and requirements, as well as to integrate our sensors into data-collection vehicle fleets to gather more data for training and validation.

