In today’s market, consumers want technology that is smaller, thinner, and sleeker yet still delivers great sound. As devices have shrunk, so has the space available for audio hardware. Cost pressures compound the problem, pushing toward a lower-cost, thinner audio solution. These constraints tend to degrade audio quality. This is where optimized signal chains come into play, building on scalable audio tuning methods that can improve the audio performance of a device.
What is an audio signal chain and how does it work?
An audio signal chain is the path an audio signal takes through the electronics and processing software from input to output. A television is a good example. The television hardware receives the program source and separates the video and audio signals into two paths. The audio is first processed by a decoder, typically a large digital signal processor (DSP), which converts the audio from the format it was recorded in into an audio stream the specific product can use. If the sound will only be rendered (output) through the TV’s two speakers, the DSP must downmix any multichannel signals to stereo. If the audio is to be rendered by a soundbar connected to the TV, the signal must be downmixed to the correct number of channels for that soundbar (soundbars range from two- to ten-channel devices).
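The stereo case of this downmix step can be sketched in a few lines. The function below folds a 5.1 stream down to two channels using common ITU-style coefficients (roughly -3 dB on the center and surround channels); the channel names, coefficients, and LFE handling are illustrative assumptions, not a description of any particular product’s decoder.

```python
import numpy as np

def downmix_5_1_to_stereo(ch):
    """Fold a 5.1 signal down to stereo using common ITU-style
    coefficients (center and surrounds attenuated by ~3 dB).
    `ch` is a dict of equal-length numpy arrays, one per channel."""
    a = 0.7071  # -3 dB
    left  = ch["L"] + a * ch["C"] + a * ch["Ls"]
    right = ch["R"] + a * ch["C"] + a * ch["Rs"]
    # The LFE channel is often discarded on small stereo speakers
    # that cannot reproduce it; some products instead mix it in
    # at reduced gain.
    return np.stack([left, right])
```

The same idea generalizes to soundbar targets: the decoder applies a different coefficient matrix for each output channel count.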
Audio signal chains can range from simple to very complex depending on the number of changes or effects that are needed. A powerful audio processing tool allows audio and system engineers to develop a signal path that optimizes audio quality within the hardware constraints. The goal of the audio signal chain is to deliver audio that is the best possible representation of the original recording.
What are the challenges in optimizing signal chains?
Each audio system is unique to a specific product, and each architecture has its own issues. Issues that audio system architects deal with include overall output level, voice clarity, bass level and extension, and high-frequency directionality.
- Overall output level, or the overall “loudness” of the product, must provide a comfortable listening level above environmental noise. Loudness is limited by the loudspeaker driver’s size and power rating, the system gain structure, interfaces such as wire gauge, and the audio power amplifier. Maximizing output level requires a dynamic multi-band approach, because the hardware has different limitations in different frequency ranges.
- Voice clarity is another issue with many products. Human hearing is sensitive to the frequency range of the human voice, and those frequencies can get “muddied” by the other sound types in an audio stream. The ability of the audio processing to “pull out” the voices can make a big difference in product satisfaction.
- Bass level and depth are two characteristics most listeners always want more of. Several techniques are used to produce a balanced level of bass and reproduce the lowest bass possible. One is “Smart Bass,” which sets a bass level goal and attempts to reach it until hardware limitations restrict it. Most systems can produce bass above that level, but only with high distortion. Balancing bass output against distortion is best done with a software processing tool such as Harman AudioEFX.
- High-frequency directionality becomes an issue when the speakers are not pointed directly at the listeners. Higher frequencies are very directional, and a listener who is “off axis” will not hear that energy at the intended level. To compensate, the audio architect uses a processing block that boosts the higher frequencies so off-axis listeners hear a more balanced level.

Power and processing budgets are also important on portable devices because of the limits on battery power and processor size. All the processing mentioned above must be done in the most efficient manner possible to maximize playback time.
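The dynamic multi-band approach mentioned for output level can be illustrated with a minimal two-band sketch: split the signal at a crossover, then constrain each band to its own ceiling. The crossover frequency, the per-band ceilings, and the use of hard clipping in place of a real attack/release limiter are all simplifying assumptions for illustration.

```python
import numpy as np

def one_pole_lowpass(x, fc, fs):
    """First-order IIR low-pass (for illustration only)."""
    a = np.exp(-2 * np.pi * fc / fs)
    y = np.empty_like(x)
    acc = 0.0
    for n, s in enumerate(x):
        acc = (1 - a) * s + a * acc
        y[n] = acc
    return y

def two_band_limit(x, fs, xover=200.0, low_ceiling=0.5, high_ceiling=0.9):
    """Split into low/high bands and clamp each to its own ceiling,
    mimicking the idea that a small driver tolerates less low-frequency
    drive than high-frequency drive.  A production limiter would use
    smoothed gain reduction rather than hard clipping."""
    low = one_pole_lowpass(x, xover, fs)
    high = x - low            # complementary high band
    low = np.clip(low, -low_ceiling, low_ceiling)
    high = np.clip(high, -high_ceiling, high_ceiling)
    return low + high
```

A real product would use more bands and steeper crossovers, but the principle is the same: each frequency range is held to what its part of the hardware can safely reproduce.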
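The “Smart Bass” goal-seeking idea described above can be sketched as a gain computation: boost the bass toward a target level, capped at what the hardware tolerates before distorting. The function name, the 12 dB cap, and the RMS framing are assumptions for illustration, not Harman AudioEFX’s actual algorithm.

```python
import numpy as np

def smart_bass_gain(bass_rms, target_rms, max_gain_db=12.0):
    """Return the bass boost in dB needed to reach a target level,
    capped at a hardware-tolerance ceiling.  A real implementation
    would adapt this per block with attack/release smoothing."""
    if bass_rms <= 0:
        return 0.0
    needed_db = 20 * np.log10(target_rms / bass_rms)
    # Never cut (0 dB floor); never boost past the distortion limit.
    return float(np.clip(needed_db, 0.0, max_gain_db))
```

When the content’s bass is already at or above the target, the boost collapses to 0 dB; when the deficit exceeds the cap, the cap wins and distortion stays bounded.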
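The off-axis treble compensation can likewise be sketched as a crude shelving boost: split off the high band with a first-order low-pass and re-sum it at higher gain. The 4 kHz corner and 4 dB boost are made-up values; a production design would use a proper biquad shelf tuned to the driver’s measured off-axis response.

```python
import numpy as np

def treble_shelf(x, fs, fc=4000.0, boost_db=4.0):
    """Boost content above `fc` by splitting with a first-order
    low-pass and re-summing the high band at higher gain -- a crude
    high-shelf EQ for off-axis high-frequency compensation."""
    a = np.exp(-2 * np.pi * fc / fs)
    low = np.empty_like(x)
    acc = 0.0
    for n, s in enumerate(x):
        acc = (1 - a) * s + a * acc
        low[n] = acc
    high = x - low                   # complementary high band
    g = 10 ** (boost_db / 20)        # dB to linear gain
    return low + g * high
```

Content well below the corner passes essentially unchanged, while content above it comes out roughly `boost_db` louder.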
What is the solution?
With the rise of small portable products that produce booming bass and of ever-slimmer televisions, the need for advanced digital signal processing has increased. Signal chain optimization tools like Harman AudioEFX provide many audio algorithms, including smart bass, smart treble, voice enhancement, parametric equalizers, compressors, and limiters. Using this type of tool, audio system architects can quickly design and test audio signal paths to produce the correct product audio “signature” in their DSP-based audio systems.
Bruce Ryan started at Harman in 1999 and is an original member of the Home and Multi Media business unit, which developed the first Harman desktop, docking, and portable Bluetooth speakers. Currently leading the Harman Embedded Audio Engineering group, Bruce and his team develop audio solutions for internal Harman products and external partners. He has a B.S. and M.S. in Mechanical Engineering from C.S.U.N. and an MBA from Pepperdine University.

Nikhil Rathod, Product Leader at Harman Embedded Audio, is a hardware and software innovator committed to helping people realize the full potential of technology. He has over 12 years’ experience envisioning, developing, and managing mass-market technology solutions, with a passion for defining viable solutions and exceptional user experiences. He has a master’s degree in Electrical Engineering from CSUN.