How sensor technology enables context awareness in hearables - Embedded.com


One of the fastest-growing verticals in consumer electronics is the hearables market. These on-ear devices, ranging from wireless earbuds to hearing aids, are more than just a tool for listening – they offer a brand new way for us to engage with our technology and the wider world around us.

The hearables market is expected to reach $93.9 billion by 2026, growing at a CAGR of 17.2% from 2019 to 2026. Other data shows that consumers are interested in a specific functionality within these devices. According to a 2019 Qualcomm survey, 55% of survey respondents rated themselves as interested in context-aware hearables. They named background noise reduction and dynamic volume adjustment as the most useful capabilities.

Hearables continue to gain interest from users expecting next-generation features. (Source: CEVA)

It’s clear that end-users are interested in these features for next-gen hearables for a better, more immersive listening experience – but what components do you need to actually configure these capabilities?

For a truly immersive listening experience, hearables need to overcome several common UX challenges and pitfalls. Here are four of the most common challenges we see:

1. The traditional user interface is not convenient for hearables.

If you’re using wireless headphones when you’re out for a run or working out at the gym, the chances are slim that you’re going to be staring at your phone the entire time. This makes it inconvenient for users to rely on their phones to control their hearables. Buttons placed directly on the hearable device itself tend to be small and aren’t visible while in the user’s ear, making them difficult to locate and press.

A more convenient UI is gesture control. With motion tracking, simple gestures can provide instruction for specific controls and actions. For instance, your device could sense a simple “tap” on the earbud to increase the volume. It’s much easier to find and tap the entire hearable compared to pressing a specific button on it.
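One simple way to sense such a tap is to watch for a short spike in accelerometer magnitude. The sketch below shows the idea; the threshold and refractory window are illustrative tuning values, not figures from any particular device.

```python
import math

TAP_THRESHOLD_G = 2.5    # spike magnitude (in g) treated as a tap -- illustrative tuning value
REFRACTORY_SAMPLES = 20  # ignore samples right after a tap to avoid double counting

def detect_taps(samples):
    """Return indices of tap events in a stream of (ax, ay, az) readings in g."""
    taps = []
    cooldown = 0
    for i, (ax, ay, az) in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1
            continue
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > TAP_THRESHOLD_G:
            taps.append(i)
            cooldown = REFRACTORY_SAMPLES
    return taps
```

A production gesture engine would also look at the spike's shape and duration to reject bumps from running or adjusting the earbud, but the threshold-plus-refractory pattern is the core of it.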

In-ear detection is a related capability that can automatically pause the audio when the user takes an earbud out. Think of how much easier that would be when you run into a friend in between sets at the gym; the audio just stops automatically as you politely take out an earbud and resumes the moment you put it back in your ear.

2. Simple, accurate tracking is needed to meet expectations for fitness and activity tracking.

Hearables are well suited to fitness tracking applications. Tracking from the head has robustness built in, since the head (and ears) move through a far more consistent range of motion than a wrist or a pocket.

Still, it’s possible to fool fitness algorithms, with many false positives and negatives affecting the output data if the motion tracking is not precise. If your hearable can detect and classify activities automatically, it can track full body movement and gain context – are you running? Biking? Standing in line at a cafe? Accurate classification can be integrated with a software library to convert step counts to calorie counts as well, giving a more complete picture of your day.
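The step-to-calorie conversion mentioned above can be sketched as a per-activity lookup: once the classifier labels each stretch of steps, each label maps to a different energy cost. The per-step figures below are purely illustrative, not clinical values.

```python
# Rough per-step energy costs (kcal) by activity -- illustrative figures only
KCAL_PER_STEP = {"walking": 0.04, "running": 0.06}

def estimate_calories(step_counts):
    """step_counts: dict mapping activity label -> steps taken in that activity."""
    return sum(KCAL_PER_STEP.get(activity, 0.0) * steps
               for activity, steps in step_counts.items())
```

This is why accurate classification matters: 1,000 steps misclassified as walking instead of running understates the energy estimate, and those errors accumulate over a day.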

3. Typically in hearables, the sound is not truly immersive.

Conventional hearables don’t offer a truly immersive experience; you’re listening in on the sound, but not engaging with it in a meaningful way.

Pair accurate head tracking with hearable technology, however, and listening becomes an immersive experience. With spatial audio, the sound field shifts as you turn your head, putting you right in the middle of the music, as if you were there.

This life-like experience requires high-accuracy head tracking with low latency, so the sound field moves with you without perceptible delay. Spatial audio elevates the user experience for gaming and XR applications as well.
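The core geometric step behind keeping a sound "anchored" in the room is small: subtract the head's yaw from the source's world azimuth, so that when the head turns right, the source appears to move left. A minimal sketch (angles in degrees, names hypothetical):

```python
def source_azimuth_relative(world_azimuth_deg, head_yaw_deg):
    """Azimuth of a world-fixed sound source relative to the listener's head.
    When the head turns right (+yaw), the source appears to move left."""
    return (world_azimuth_deg - head_yaw_deg) % 360.0
```

A real spatial-audio renderer then feeds this relative angle into head-related transfer functions to produce the binaural output; the latency budget for the whole chain is what makes low-latency head tracking essential.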

4. Hearables, on their own, don’t react to changing environments.

Making hearables context aware is one of the toughest challenges to overcome in their design.

Today’s users often need to manually change settings, such as volume, or remove their earbuds when they want to listen to the outside world. By their very design, hearables block out external sounds so that listeners can just focus on whatever they’re hearing through the earpiece; however, in some contexts, they may inadvertently block out critical information. In a worst case scenario, missing a sound cue like a car horn when you’re about to cross a busy street could result in serious injury.

However, with the help of context awareness, hearables can analyze information from their sensors to determine user activity, like walking, jogging, bicycling and more. Combining that information with other known information—like GPS from a phone or an AI algorithm to detect important cues—allows the hearable to determine if external audio should be blocked out or passed through. The right sensors and sensor fusion software can also separate the user’s voice from background voices to improve call quality and voice command accuracy for virtual assistants.
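The decision logic described above can be sketched as a simple policy that combines the classified activity with other cues. Everything here is illustrative: the activity labels, the `near_traffic` flag (which might come from phone GPS), and the audio-cue flag (which might come from an AI sound classifier) are assumptions, not any vendor's API.

```python
def should_pass_through(activity, critical_cue_detected, near_traffic):
    """Decide whether external audio should be passed through the hearable.
    activity: classified user activity, e.g. "walking", "cycling", "stationary"."""
    if critical_cue_detected:   # e.g. a horn or siren flagged by an audio classifier
        return True
    if near_traffic and activity in ("walking", "cycling"):
        return True             # moving through traffic: err on the side of awareness
    return False
```

A shipping implementation would weigh many more inputs and hysteresis so pass-through doesn't flicker on and off, but the shape of the decision is the same.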

How Are Sensors Used in Hearables?

The hearables market includes a wide array of devices including true wireless stereo earbuds, audio headsets, hearing aids, and AR glasses. To maximize their functionality and effectiveness, you need the right mix of sensors.

At the most basic level, an accelerometer is necessary for activity tracking. Data from this sensor can provide a basic step count and, with the proper processing, more complex activity classification, such as walking vs. running.
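A basic step count can be derived from the accelerometer alone by counting upward threshold crossings of the signal magnitude, with a minimum gap between counts to reject jitter. The threshold and gap below are illustrative tuning values:

```python
def count_steps(magnitudes, threshold=1.2, min_gap=10):
    """Count steps as upward threshold crossings of accelerometer
    magnitude (in g), with a minimum sample gap between steps."""
    steps = 0
    last_step = -min_gap
    for i in range(1, len(magnitudes)):
        if magnitudes[i - 1] <= threshold < magnitudes[i] and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps
```

Production pedometers add adaptive thresholds and frequency analysis, which is also where the walking-vs-running distinction comes from: running produces stronger peaks at a higher cadence.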

A more advanced device can also use a 6-axis IMU, made up of an accelerometer and a gyroscope, to track orientation. With the additional data from the gyroscope, the hearable device can find the user’s relative head orientation. After ensuring the proper sensor rates and latency, this enables the accurate head tracking necessary for immersive 3D audio and XR applications.
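At its simplest, relative orientation comes from integrating the gyroscope's angular rates over time. The sketch below shows only that integration step for yaw; a real head tracker fuses the gyroscope with the accelerometer (and sensor calibration) to correct the drift that pure integration accumulates.

```python
def integrate_yaw(gyro_z_dps, dt):
    """Integrate gyroscope z-axis rates (deg/s), sampled every dt seconds,
    into a relative yaw angle. Pure integration drifts; real trackers
    fuse in accelerometer data to correct it."""
    yaw = 0.0
    for rate in gyro_z_dps:
        yaw += rate * dt
    return yaw % 360.0
```

This drift is why the article stresses proper sensor rates and latency: the faster and more regularly the rates are sampled and fused, the smaller the orientation error the 3D-audio renderer has to live with.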

Pairing either the accelerometer or the 6-axis IMU with a proximity sensor increases the robustness of features like in-ear detection. The more information an algorithm has, the better its result, and hearables are no exception. Advanced technologies such as CEVA’s Hillcrest Labs MotionEngine™ Hear let designers incorporate context awareness and frictionless UI into hearable devices ranging from true wireless stereo earbuds to hearing aids to wireless headphones.
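The robustness gain from pairing sensors can be seen in a toy version of in-ear detection: a proximity reading alone is fooled by a device lying against any surface, but requiring the accelerometer to also show the slight motion of a worn device filters that case out. All names and thresholds here are illustrative, not any product's logic.

```python
def in_ear(proximity_near, accel_variance, variance_floor=0.001):
    """Crude in-ear check: the proximity sensor must see something close AND
    the accelerometer must show the small motion of a worn device
    (a device sitting on a table is nearly still). Thresholds are illustrative."""
    return proximity_near and accel_variance > variance_floor
```

This is the pattern sensor-fusion software generalizes: each added sensor vetoes a class of false positives the others can't catch on their own.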


Charles Pao is Sr. Marketing Specialist in the Sensor Fusion Business Unit at CEVA. He started at Hillcrest Labs after graduating from Johns Hopkins University with a Master of Science degree in electrical engineering. He started work in software development, creating a black box system for evaluating motion characteristics. With a passion for media and communications, Charles started producing demo and product videos for Hillcrest Labs. This passion led to an official position transfer into Marketing. Currently, he is Hillcrest’s first point of contact for information and support and manages their marketing efforts. He’s also held various account and project management roles. Charles also earned Bachelor of Science degrees in electrical engineering and computer engineering from Johns Hopkins University.

