
It’s not just the processor!

Embedded systems are becoming ever more complex in order to satisfy the growing demand for performance and features, but the requirements for faster product introductions and lower product price tags run counter to this. The pressure on engineers is increasing as the constraints of project schedules and development budgets start to bite.

To produce competitive and profitable designs, engineers need to optimize the feature set, bill of materials costs, code size, performance specs and power consumption. Software is flexible, so it is seen as the aspect of the design in which tradeoffs can be made to achieve (or get as close as possible to) the project's original goals.

As we will see, the way engineers perform optimizations is surprisingly unrefined considering the advanced nature of the embedded systems to which they are applied. A series of critical hardware/software bottlenecks, with the potential to blight modern embedded designs, has started to emerge. Valuable time is being wasted dealing with these issues when it could be better spent being creative and getting the design to market sooner.

Optimization is currently a highly manual process, often depending on trial and error. The difficulties of satisfying numerous constraints will generally result in functionality or performance being sacrificed in order to meet project deadlines.

There is a key technical bottleneck: the interactions between the processor(s) and the memory system, which are pivotal to the efficiency and performance of embedded systems. As more sophisticated memory systems with varying sizes and access latencies are incorporated into designs, these effects are becoming increasingly convoluted.
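A minimal C sketch illustrates the point (the array sizes are illustrative). The two loops below perform identical arithmetic over the same data, yet the strided version can run many times slower on a cached memory system, purely because of how its accesses interact with the memory hierarchy:

#include <stdio.h>
#include <stddef.h>

#define ROWS 1024
#define COLS 1024

static int grid[ROWS][COLS];

/* Row-major walk: consecutive addresses, so each cache line fetched
 * from memory is fully used before the next one is needed. */
static long sum_row_major(void)
{
    long total = 0;
    for (size_t r = 0; r < ROWS; r++)
        for (size_t c = 0; c < COLS; c++)
            total += grid[r][c];
    return total;
}

/* Column-major walk: a 4 KB stride between accesses, so almost every
 * iteration misses the cache and pays the full outer-memory latency. */
static long sum_column_major(void)
{
    long total = 0;
    for (size_t c = 0; c < COLS; c++)
        for (size_t r = 0; r < ROWS; r++)
            total += grid[r][c];
    return total;
}

int main(void)
{
    printf("%ld %ld\n", sum_row_major(), sum_column_major());
    return 0;
}

Nothing in the instruction sequences distinguishes the two functions; the cost difference lives entirely in the processor/memory interaction.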

In addition, multi-core processor implementations are becoming progressively more prevalent as the demand for performance, and the quantity of data that must be handled, grows ever larger. Their possible behaviors are getting harder to predict, which can prove a serious problem for real-time systems. Both of these trends need to be properly addressed.
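To make the multi-core unpredictability concrete, consider this sketch using C11 threads (where supported; the counters and iteration count are illustrative). The two counters look completely independent in the source, but because they will usually share a cache line, the two cores invalidate each other's caches on every increment, a "false sharing" effect that is invisible in the code and awkward to budget for in a real-time system:

#include <stdio.h>
#include <threads.h>

#define ITERATIONS 10000000L

/* Two logically independent counters that will usually land in the
 * same cache line. Padding, or _Alignas(64) on 'b', would separate
 * them on a typical part with 64-byte cache lines. */
static struct {
    long a;   /* written only by thread 1 */
    long b;   /* written only by thread 2 */
} shared;

static int bump_a(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        shared.a++;   /* each write invalidates the other core's copy */
    return 0;
}

static int bump_b(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++)
        shared.b++;
    return 0;
}

int main(void)
{
    thrd_t t1, t2;
    thrd_create(&t1, bump_a, NULL);
    thrd_create(&t2, bump_b, NULL);
    thrd_join(t1, NULL);
    thrd_join(t2, NULL);
    printf("%ld %ld\n", shared.a, shared.b);
    return 0;
}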

The software development tools (Figure 1) engineers are using haven't evolved to match the scale of these problems. Compilers generate code and data sequences one source file at a time, using only minimal information about the target device to produce instructions for a given CPU instruction set architecture. When available, profiling tools can uncover some detail about where system resources are being spent, but they tell engineers very little about the root causes.

Figure 1: Conventional approach to optimization – resulting in an output which ignores interactions between separate software sequences & the underlying hardware used to execute them.
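A small illustration of the per-file limitation (the file names and trivial function are hypothetical). Two translation units are compiled separately, so the compiler must emit a general, out-of-line callee and a real call, even though whole-program knowledge would let it fold the result to a constant:

/* scale.c - compiled in isolation, the compiler cannot know how
 * scale() is used elsewhere, so it must emit a general, callable
 * version of the function. */
int scale(int x)
{
    return x * 3;
}

/* main.c - also compiled in isolation: without link-time
 * (whole-program) optimization, the call below cannot be inlined or
 * folded to the constant 42, leaving code size and cycles behind. */
extern int scale(int x);

int main(void)
{
    return scale(14);
}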

The inadequacies of traditional software development tools are now becoming apparent and can have a profound effect on engineers’ design projects, especially for systems with complex memory hierarchies or multiple processor elements. They often lead to:

1) Sub-optimal instruction sequences being generated – with both their size and their power consumption being higher than necessary.

2) Ineffectual positioning of memory resources and poor overall layout – with constituent parts of the program not being located in the memory regions best suited to their instructions or data structures (see the sketch after this list).

3) Misdirected human optimization efforts – with considerable amounts of time being spent on experimental tweaks that may offer little or no improvement to the system design.
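The second of these problems is what engineers try to patch over by hand today. The sketch below uses GCC-style section attributes; the section names and memory regions (.tcm_code, .ext_ram) are hypothetical, and must be wired up to physical memories in a hand-maintained linker script, which is exactly the brittle, manual layout work at issue:

/* Time-critical interrupt path pinned into fast tightly coupled
 * memory; the linker script must map ".tcm_code" onto the TCM. */
__attribute__((section(".tcm_code")))
void isr_hot_path(void)
{
    /* latency-sensitive work */
}

/* Large, rarely touched buffer pushed out to slower external RAM so
 * it does not crowd hot code and data out of on-chip SRAM. */
__attribute__((section(".ext_ram")))
unsigned char frame_buffer[320 * 240 * 2];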

Effective optimization requires information about the whole system, including the memory hierarchy, and about the entire program. Software/hardware interactions in today’s processor systems are so complex that it is just too difficult for engineers to control many of these effects directly from their source code. Modifying the source frequently has unintended effects, because engineers currently lack the visibility and control they need to make their designs fully effective. Without information on the cause and effect of different optimization procedures, it is virtually impossible to predict the outcome of a change, so attempts to optimize by hand are effectively ‘flying blind’.
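One hedged example of such an unintended effect (types and sizes are illustrative, assuming a typical ABI where double is 8-byte aligned): adding a single field somewhere in a project triples the size of every instance through padding, so a hot loop over an array of these structures slows down even though the loop itself never changed.

#include <stdio.h>

struct sample_v1 {
    char tag;      /* 1 byte + 3 bytes padding          */
    int  value;    /* 4 bytes: 8 bytes per instance     */
};

struct sample_v2 {
    char   tag;    /* 1 byte + 7 bytes padding          */
    double extra;  /* new field added later in the project */
    int    value;  /* + trailing padding: 24 bytes per  */
};                 /* instance, 3x the cache footprint  */

int main(void)
{
    printf("v1: %zu bytes, v2: %zu bytes\n",
           sizeof(struct sample_v1), sizeof(struct sample_v2));
    return 0;
}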

It must now be recognized that effective control and optimization cannot be achieved using the traditional tools or methods, as there is a major divergence between what can be achieved in hardware and what it is possible to control using software development tools.

Optimization must no longer focus solely on the processor. Instead, the system as a whole must be addressed, with the memory, processing and connectivity elements, plus the entire program, all taken into consideration. Furthermore, the human element needs to be removed from the optimization process: the underlying relationships at work here are frequently beyond what even the most highly experienced engineers can comprehend. All this has grave implications, with the vast majority of design projects failing to attain the objectives that were initially set.

The advent of next generation, ‘device aware’ software development solutions (Figure 2), which consider the entire system architecture, including all processor elements and the memory hierarchy, is set to change all this. By altering how code generation and data sequencing are carried out, it is possible to significantly enhance system efficiency and to shorten the time required to complete embedded design projects.

This means that the need for manual intervention can be dispensed with, and the prospect of being forced to carry out frustrating “human in the loop” optimization iterations is eliminated. Information gleaned about the target device’s processors and memory architecture can be used to intelligently re-sequence the program, so that code size, performance and execution efficiency are optimized to best fit the original design constraints. This improves both average- and best-case execution time, and the more deterministic behavior it provides also bounds worst-case execution time.

Figure 2: Optimization via device aware software development tools (with fully automated code & data sequencing mechanisms) yields an output which respects the structure of the program & its interactions with the hardware
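Today the closest manual approximation of such re-sequencing is per-function annotation, as in the GCC-style sketch below (function names are hypothetical): functions marked hot are grouped together (in .text.hot) and cold ones pushed aside (.text.unlikely) to improve instruction-cache locality. A device aware tool aims to make this kind of placement decision automatically, across the whole program, from real knowledge of the target's memory map.

/* Called every frame: GCC groups functions marked 'hot' together,
 * improving instruction-cache (or flash-accelerator) locality. */
__attribute__((hot))
void decode_block(void)
{
    /* per-frame decoding work */
}

/* Almost never executed: 'cold' moves it out of the hot region so it
 * does not dilute the cache lines used by the main loop. */
__attribute__((cold))
void report_fatal_error(void)
{
    /* error reporting */
}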

It is clear that the shortcomings of current software development tools stem directly from their inability to consider or control low-level hardware/software interactions, and from the excessive level of human involvement on which they still depend.

All this is leading to serious doubts as to whether engineers have access to development tools capable of tackling the challenges they now face. New, more advanced software tools now entering the market will enable the automatic optimization of embedded systems in their entirety, and thereby allow substantial operational benefits to be obtained.

By ditching the conventional (and increasingly outdated) manual approach to design optimization and replacing it with a highly automated, deterministic methodology, significant boosts in system performance, as well as marked reductions in the time and costs associated with new design implementations, can be obtained.

Dave Edwards is founder and CEO/CTO of SOMNIUM.   With over 25 years of experience in the semiconductor industry, he has led processor architecture and software tools development teams for the Inmos transputer, the STMicroelectronics/Hitachi 64-bit SH-5 processor and the Icera/Nvidia DXP adaptive wireless processor used in leading 3G/LTE modems. Dave was also the technical lead of IEEE 5001 Nexus debug architecture software API activities and has over 35 granted patents in the embedded systems space.
