A new approach to improving system performance - Embedded.com


Speed is a key element in almost every electronic design. Whether engineers are creating complex image-processing applications or designing systems that extend battery life by working swiftly before returning to sleep mode, speed is a critical factor in a product's success.

Though hardware usually gets first consideration when design teams look for ways to improve speed, it's usually not the most effective path. It's often fairly straightforward to make the features and functions of a product run faster without any hardware changes at all.

Streamlining software so it runs at optimal rates can bring significant improvements, and it's so easy to implement that even units already in the field can be enhanced. That's far more cost effective than redesigning hardware.

Three of the four basic components in system speed are in software: the operating system, the compiler and the application software. Hardware is the critical fourth component, but altering processors, memories, bus architectures and data channels is difficult.

Altering the operating system is also difficult once the OS has been selected. That leaves optimizing the software that runs above the operating system as the most straightforward way to increase speed. Applications packages, middleware and drivers take center stage when development teams focus on the features and functions that attract customers. But this software is typically overlooked when the focus shifts to performance.

That's a mistake. Significant performance increases can be achieved when acceleration techniques are applied to software that resides above the operating system. It's rare that speed can't be boosted by 20 percent, and it can sometimes be doubled or quadrupled, especially with help from an outside firm that specializes in streamlining programs.

Changing focus
Over the past several years, the focus for many software projects has shifted away from speed. That's largely because CPUs and other hardware have continuously gotten faster, masking the slow speed of inefficient software. At the same time, customers clamor for new features, shifting the software spotlight to additional functions.

Optimizing software lets design teams boost performance without the time and expense of upgrading CPUs and other hardware. When companies scrutinize software to find bottlenecks, loops and other problems that slow performance, they can often tout improvements beyond increased speed. Battery life can be improved, for example. These improvements can be marketed as field upgrades, giving companies a way to add revenue and help their existing users enhance their performance.

Improving software efficiency often leads to smaller code sizes. If that gain occurs early enough in the design process, companies may be able to trim memory requirements. In very high volume applications like autos or consumer products, saving a few hundred bytes on each chip can lead to solid savings. Other times, freeing up memory may provide space for an additional feature.

Enhancing software
The operating system, the compiler and the software above the OS all offer room for improvement. Altering the software above the OS offers the most room for speed enhancements, and it may be the most feasible change to make.

Operating system changes usually aren't viable because switching is a major undertaking. Engineers are rightly concerned about compatibility issues and about losing the knowledge accumulated while working with the existing operating system.

At the same time, there can be an almost religious adherence to this base component of the software stack. This brand loyalty often extends to compilers. However, there's far less technical reason to stay with a compiler that shows shortcomings. In most instances, compilers can be changed without much impact on the operating system or the software that runs above it.

Switching compilers takes a minimum of effort, yet it has a system-wide impact on performance. Some software companies routinely turn out compilers that generate code that runs 15-20% faster than code from competing compilers. Switching to a faster compiler can yield a significant speedup, though the overall system gain won't match that compiler-to-compiler difference, because the compiled code accounts for only part of the system's total execution time.

Typically, changing a compiler provides 2-5% faster processing at the system level with the benefit sometimes as great as 10%. These are big benefits given the small amount of effort needed to make the change.

Accelerating Applications
The biggest gains come when higher level software is examined and enhanced. Accelerating these program components can make a system run 20% to several hundred percent faster without any hardware alterations.

One of the most intriguing aspects of software acceleration is that it's easy to implement at any stage of a product life cycle. Programs can be enhanced during development or after the product is released. Installing an accelerated upgrade is no more difficult than installing patches or feature upgrades.

Those downloadable upgrades give companies a fairly easy and inexpensive way to enhance systems already in the field. The cost of distributing a software upgrade is far below the exorbitant price of upgrading hardware in the field.

Analyzing software performance during initial product development or redesigns can also bring solid benefits, particularly when hardware and software are upgraded at the same time. When software is enhanced during the hardware design cycle, engineers may be able to reduce clock speed and memory size.

If accelerated software reduces the CPU's load, engineers no longer have to over-specify CPU performance. The extra clock cycles are justified when software runs at less than optimal rates, but that added headroom pushes up prices over the entire production run.

Software acceleration can also help slash non-recurring engineering costs. Hardware costs don't stop at component prices. There's substantial cost when engineering teams spend hundreds of man hours to upgrade a product. When a group of engineers work on a hardware upgrade, costs quickly add up to hundreds of thousands of dollars. Typical hardware redesign projects can cost $500,000 or more.

In battery-powered systems, software acceleration can bring an unexpected benefit: battery lifetimes can be improved with more efficient software. When programs run faster, tasks can be completed in less time. That means that processors run for shorter periods before they return to battery-saving sleep modes. That's a big benefit in this era of green engineering.

Making a Change
Figuring out how to make these upgrades without altering the features and functions of the software is not a simple task. Speed enhancements come only after significant analysis by engineers who have studied performance issues.

The techniques for creating fast software are not commonly understood. Software speed lost importance as marketers focused on adding bells and whistles while faster processors and larger memory sizes provided more speed.

That makes it necessary for most companies to turn to specialists who have focused on speedy software. For example, the makers of real time operating systems have focused on writing fast and compact code. Many engineers in industrial and instrumentation industries have also stressed fast actions.

Specialized companies provide consulting services, just as design contractors provide support for specialized aspects of hardware such as wireless or ruggedization. These companies work with a range of programs, spanning many different industries.

That diversity is possible because when engineers work to accelerate software, they are looking for bottlenecks and other problems that reduce speed. They don't need to know much about what the program does. That is a good thing given the complexity of programs and the diversity of fields where enhanced computing speed is desirable.

How it's Done
The task of enhancing software performance starts with instrumenting the target device and recording the hundreds of thousands of events it generates. At a high level, a resource analyzer provides insight into memory and CPU usage. A range of other tools, such as event analyzers and profilers, is then brought into play. The events they find are categorized for further analysis.

This examination first requires storing huge amounts of data, then running sections of code over and over to see where they slow down. To find the most deeply hidden bottlenecks, the several factors that govern overall system performance must be viewed simultaneously and in real time.

To accomplish this task, think about viewing each factor of overall system performance in a separate view or window. One window will provide path analysis, another will be used for event analysis, while yet another will let engineers monitor individual function calls. Analysts watch these events occur in real time, then move back and forth in time to examine the interactions and look for hot spots.

In one program, analysts found that 30% of a program's time was spent on seeks. These seeks were called from 10 different locations, often causing conflicts. Tweaking these calls provided a dramatic speedup.

Another example of the gains brought by these steps came after an analysis of a Linux PDF display program. This open source program had been extensively examined before it came into widespread use. But when a Green Hills engineer got tired of waiting to view graphics pages, he analyzed the program. He found an intensive loop that was called too many times when it accessed memory in buffers.

That occurred because some parameters were not pulled in at the right time, forcing an extra call on every pass. Fixing the timing of those parameter fetches, along with a few minor tweaks, brought a stunning 1,200% improvement in speed.

It's extremely rare to find a system that can't gain 20% or more from improving the efficiency of the software that resides above the operating system. Device drivers, middleware, protocol stacks and application packages consume a large share of CPU cycles, so improving their efficiency is the best way to make systems run faster.

The programmers who created the application software may be concerned that alterations will add bugs. That's rare, although it's always possible. Bugs are typically introduced when features are added. When code is tweaked purely for speed, the way the application responds doesn't change, so there's little chance that bugs will be created during the enhancement.

Conclusion
Software acceleration provides plenty of ways for product developers to increase performance without spending a lot. Companies that specialize in software acceleration often negotiate pricing based on the percentage of improvement. When experienced teams examine software, they routinely boost performance by 20-30 percent, and sometimes trim processing time by 50 percent or more.

When project managers get their software back, it's pretty simple to see what's been changed. Simply comparing the old and new versions of code will show managers where changes were made. Often, their reaction will be that the alterations were quite simple. That's true.

The hard part is knowing what to look for, where to find it and how to make the changes without altering the program. Then comes the easy part, shipping a faster system.

Terry Costlow has covered the technology industry since the days of the Apple I, writing about both the products and their impact on society. He's written for EE Times, Automotive Engineering International, Design News, The Christian Science Monitor, Los Angeles Magazine and the Portland Oregonian.
