Software performance engineering for embedded systems

Robert Oshana, Freescale Semiconductor

September 08, 2012

Embedded systems often have one or more performance-related requirements. The complexity of modern embedded software systems requires a systematic approach for achieving these performance targets. An ad hoc process can lead to missed deadlines, poor-performing systems, and cancelled projects. A mature process is required to define, manage, and deliver on multiple real-time performance requirements.

Doddavula et al [1] define a maturity scale for performance engineering. Performance process maturity can be measured on a scale similar to the Capability Maturity Model Integration (CMMI) and related scales; Figure 1 shows such a scale for performance engineering.



Figure 1: Performance Engineering Maturity Model

These maturity levels can be described as follows:

Maturity Level 0: Ad Hoc Fire-Fighting

At this level, little attention is given to the operational aspects of performance. Any performance requirements that do exist specify only the basic capabilities needed and may not be quantitative. Performance issues are typically not found early in the development process, during prototyping or early iterations; when they do surface, they are addressed by “tuning” the application, that is, by optimizing the code. This approach provides only incremental improvement.

Maturity Level 1: Systematic Performance Resolution
At maturity level 1, software teams may have a more systematic performance resolution process that addresses performance bottlenecks using the classic approach of:

  • Discover
  • Detect
  • Isolate
  • Resolve

This approach focuses on performance resolution by identifying bottlenecks and then tuning appropriately, and requires domain experts to help resolve the issues. At this level there is still no process for early identification of performance problems.
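
As a simple illustration of the isolate step (not from the book), a suspect function can be bracketed with a timer to confirm whether it really is the bottleneck. In this sketch, timer_us() and process_frame() are hypothetical placeholders for a platform timer routine and the code under suspicion:

#include <stdint.h>
#include <stdio.h>

extern uint32_t timer_us(void);          /* hypothetical platform timer     */
extern void process_frame(void *frame);  /* the suspected bottleneck        */

/* Wrapper used during the "isolate" step: time the suspect function and
 * report its worst-case execution time observed so far. */
void process_frame_timed(void *frame)
{
    uint32_t t0 = timer_us();
    process_frame(frame);
    uint32_t dt = timer_us() - t0;

    static uint32_t worst;
    if (dt > worst) {
        worst = dt;
        printf("process_frame worst case so far: %u us\n", (unsigned)dt);
    }
}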

Maturity Level 2: Performance Testing
At level 2, the software team may have some level of automation to collect performance data for the embedded system. There is generally a proactive effort to deal systematically with critical resource measurements such as CPU utilization, I/O, memory, and power, but not until the system is well into development. Most of the efforts to fix performance defects at this maturity level are limited to operating system or other hardware configuration adjustments.
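
A minimal sketch of this kind of automated measurement, assuming a bare-metal target with a free-running cycle counter, might look like the following; read_cycle_counter() is a hypothetical platform hook and the 800 MHz clock is an assumption:

#include <stdint.h>
#include <stdio.h>

#define CPU_HZ        800000000u     /* assumed core clock: 800 MHz         */
#define WINDOW_CYCLES CPU_HZ         /* report utilization once per second  */

extern uint64_t read_cycle_counter(void);   /* hypothetical platform hook   */

static uint64_t idle_cycles;   /* cycles spent in the idle loop this window */
static uint64_t window_start;  /* cycle count at the start of the window    */

/* Called from the idle loop with the cycle counts at entry and exit. */
void idle_hook(uint64_t t_enter, uint64_t t_leave)
{
    idle_cycles += t_leave - t_enter;
}

/* Called periodically (e.g., from a timer ISR) to log CPU utilization. */
void sample_cpu_load(void)
{
    uint64_t now     = read_cycle_counter();
    uint64_t elapsed = now - window_start;

    if (elapsed >= WINDOW_CYCLES) {
        unsigned load_pct = (unsigned)(100u - (idle_cycles * 100u) / elapsed);
        printf("CPU load: %u%%\n", load_pct);  /* or push to a trace buffer  */
        idle_cycles  = 0;
        window_start = now;
    }
}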

Maturity Level 3: Early Performance Validation
At this level, performance evaluation and planning is an integral part of the development process. Performance requirements are more aggressively managed using modeling approaches and profiling tools. Performance response time budgets are allocated across the application and managed appropriately.
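
One way to make such budgets concrete is a simple table of per-stage allocations checked against measured values. The stage names and microsecond figures below are illustrative assumptions, not numbers from the book:

#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *stage;        /* pipeline stage owning part of the budget   */
    uint32_t    budget_us;    /* allocated share of the end-to-end budget   */
    uint32_t    measured_us;  /* filled in by profiling/instrumentation     */
} perf_budget_t;

static perf_budget_t budgets[] = {
    { "rx driver",        50, 0 },
    { "protocol decode", 120, 0 },
    { "application",     250, 0 },
    { "tx driver",        80, 0 },
};  /* end-to-end response time budget: 500 us */

/* Compare measured times against allocations; returns the number of misses. */
int check_budgets(void)
{
    int violations = 0;
    for (unsigned i = 0; i < sizeof budgets / sizeof budgets[0]; i++) {
        if (budgets[i].measured_us > budgets[i].budget_us) {
            printf("BUDGET MISS: %s used %u us (budget %u us)\n",
                   budgets[i].stage,
                   (unsigned)budgets[i].measured_us,
                   (unsigned)budgets[i].budget_us);
            violations++;
        }
    }
    return violations;
}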

Maturity Level 4: Performance Engineering
At level 4, the fundamental practices of software performance engineering are practiced and managed throughout the lifecycle.

Maturity Level 5: Continuous Performance Optimization
At this level of process maturity, proposed changes to the system are evaluated for their impact on the end user, and an assessment is made of their impact on the utilization of the relevant resources. Tradeoffs are well understood and rationalized. Because performance goals are well understood, excessive and premature optimization are avoided. The complete cost of the system is well understood in terms of overall system performance, and the team has the discipline to weigh the benefits of key performance optimizations against the cost of achieving them in terms of return on investment.

In "Software Performance Engineering" [3], Connie Smith and Lloyd Williams define SPE as a discipline within the broader systems engineering area that can improve the maturity of the performance engineering process. It is a systematic, quantitative approach to constructing software systems that meet performance objectives. It is a software-oriented approach that focuses on architecture, design, and implementation choices. It also focuses on the activities, techniques, and deliverables that are applied at every phase of the embedded software development lifecycle, especially responsiveness and scalability, to ensure software is being architected and implemented to meet the performance related requirements for the system.

Responsiveness is the ability of a system to meet its objectives for response time or throughput. Defined from a user perspective, responsiveness might be the time to complete a task, the number of transactions processed per unit of time, or how quickly the system responds to an event. An example is an embedded networking application that is expected to sustain packet throughput at roughly the “line rate” of a peripheral interface such as an Ethernet port.
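
As a rough illustration of what a line-rate requirement implies, the following back-of-the-envelope calculation converts line rate into a per-packet cycle budget; the 1 Gbps port and 1.2 GHz core clock are assumptions chosen for the example:

#include <stdio.h>

int main(void)
{
    const double link_bps    = 1e9;          /* assumed 1 Gbps Ethernet port */
    const double core_hz     = 1.2e9;        /* assumed 1.2 GHz core clock   */
    const double frame_bytes = 64.0 + 20.0;  /* min frame + preamble/IFG     */

    double pps            = link_bps / (frame_bytes * 8.0);  /* ~1.49 Mpps   */
    double cycles_per_pkt = core_hz / pps;                    /* ~806 cycles  */

    printf("packets/s at line rate : %.0f\n", pps);
    printf("cycle budget per packet: %.0f\n", cycles_per_pkt);
    return 0;
}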

Scalability is the system's ability to continue to meet its response time or throughput objectives as demand for the software's functions increases. For example, as the number of cell phone calls handled by a Femto basestation increases, the software must scale to meet the processing requirements of the larger number of users.
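
A minimal sketch of that kind of scalability check estimates core utilization as the number of active calls grows; the per-call cycle cost and core clock below are illustrative assumptions:

#include <stdio.h>

int main(void)
{
    const double core_hz         = 1.0e9;   /* assumed 1 GHz core            */
    const double cycles_per_call = 18.0e6;  /* assumed cycles/s for one call */
    const double budget_pct      = 70.0;    /* keep 30% headroom             */

    for (int calls = 4; calls <= 64; calls *= 2) {
        double util = (calls * cycles_per_call) / core_hz * 100.0;
        printf("%2d calls -> %5.1f%% core utilization%s\n",
               calls, util, util > budget_pct ? "  (exceeds budget)" : "");
    }
    return 0;
}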

Performance failures in systems like these are most often due to fundamental hardware/software architecture or software design factors rather than to inefficient coding and implementation. Whether or not a system will be able to exhibit its desired (or required) performance attributes is largely determined by the time the architecture is chosen. Ignoring performance-related factors during the early part of the development cycle and then tuning performance once the program is running correctly is a “fix it later” approach that is a primary cause of embedded systems failing to deliver on time and within budget.

Some of the primary objectives of SPE include [3]:
  • Eliminating delayed embedded system deployment due to performance issues
  • Eliminating avoidable system rework due to performance issues
  • Eliminating avoidable system tuning and optimization efforts
  • Avoiding additional and unnecessary hardware costs necessary to meet performance objectives
  • Reducing increased software maintenance costs due to performance problems in production
  • Reducing increased software maintenance costs due to software impacted by ad hoc performance fixes

The SPE process includes the following steps [3] (Figure 2):

  1. Assess performance risk
  2. Identify critical use cases
  3. Select key performance scenarios
  4. Establish performance objectives
  5. Construct performance models
  6. Determine SW resource requirements
  7. Add computer resource requirements
  8. Evaluate models
  9. Verify and validate models




Figure 2: SPE Modeling Process
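
The sketch below illustrates steps 5 through 8 in miniature: per-step software resource demands for one scenario are summed (step 6), converted to time on an assumed processor (step 7), and fed into a simple single-queue model to estimate utilization and response time (steps 5 and 8). All demand values, the clock rate, and the arrival rate are illustrative assumptions; a real SPE model, such as an execution graph or queueing network, would carry far more detail:

#include <stdio.h>

typedef struct {
    const char *step;
    double      kcycles;   /* estimated CPU demand per execution, in kcycles */
} sw_demand_t;

int main(void)
{
    /* Step 6: software resource requirements for one key scenario. */
    const sw_demand_t scenario[] = {
        { "parse request",   40.0 },
        { "table lookup",    25.0 },
        { "process payload", 90.0 },
        { "build response",  35.0 },
    };

    /* Step 7: computer resource requirements (assumed 600 MHz core). */
    const double core_hz      = 600e6;
    const double arrival_rate = 2000.0;   /* scenario executions per second  */

    double total_kcycles = 0.0;
    for (unsigned i = 0; i < sizeof scenario / sizeof scenario[0]; i++)
        total_kcycles += scenario[i].kcycles;

    /* Step 8: evaluate the model (single open queue, exponential service). */
    double service_time = (total_kcycles * 1e3) / core_hz;    /* seconds     */
    double utilization  = arrival_rate * service_time;

    printf("demand per scenario : %.0f kcycles (%.1f us)\n",
           total_kcycles, service_time * 1e6);
    printf("CPU utilization     : %.0f%%\n", utilization * 100.0);

    if (utilization >= 1.0) {
        printf("CPU saturated at this arrival rate\n");
        return 1;
    }
    printf("est. response time  : %.1f us\n",
           service_time / (1.0 - utilization) * 1e6);
    return 0;
}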

The SPE process can be tailored based on the embedded project and organizational goals. Figure 3 shows one such tailored process for a performance engineering activity. In this process, a performance calculator is used to model the important performance use cases for the application. The SoC architecture is an input to this process, as is data from benchmarking on existing hardware (P2020). Performance targets are used to create a performance report that is captured in a software statement of work (SOW); the SOW guides internal development and also serves as a requirements document for the third-party vendors contributing to the system. Software implementation is focused on meeting the performance targets from initial architecture design through the implementation phase, with performance analysis conducted formally at each major phase of the project to ensure that goals are being met. The software architecture for this application is shown in Figure 4. The software components within the dotted-line boxes are those developed by the third-party vendor; they represent key performance use cases that must be managed to meet the performance requirements.


Figure 3: An example process flow for using the performance calculator to manage performance metrics




Figure 4: A software architecture partitioning based on results from the performance calculator

SPE includes best practices in the areas of project management, modeling, and measurement. Project management best practices include:

  • Performing early estimates of performance risk
  • Tracking the costs and benefits of performance engineering
  • Matching the level of effort for SPE to the overall system performance risk
  • Integrating SPE into the embedded software development process
  • Establishing quantitative performance objectives and managing the development process to meet them
  • Identifying the critical performance-related use cases that focus on the scenarios driving worst-case performance

Modeling is a significant aspect of SPE. Performance modeling best practices include using performance scenarios to evaluate software architecture and design alternatives before the coding and implementation phase begins. SPE starts with the development and analysis of the simplest model that can identify problems with the system architecture, design, or implementation plans; detail is added to the model as more of the software becomes apparent.

Figure 5 shows an example of a performance use case used to model expected performance goals for a Femto basestation application. Configuration management practices can be leveraged to create baseline performance models that remain synchronized with changes made to the software.




Figure 5: Performance scenarios and use cases used in Software Performance Engineering modeling

Figure 6 shows the configuration management (CM) branching practice established for managing the performance program for a Femto basestation software project. Best- and worst-case estimates of resource requirements are used to establish bounds on the expected performance.
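
A minimal sketch of such bounding simply sums per-component minimum and maximum cycle estimates to bracket the expected end-to-end cost; the component names and values are hypothetical:

#include <stdio.h>

typedef struct {
    const char *component;
    unsigned    best_cycles;    /* best-case  cost per frame */
    unsigned    worst_cycles;   /* worst-case cost per frame */
} bound_t;

int main(void)
{
    const bound_t components[] = {
        { "PHY interface",   900, 1400 },
        { "MAC scheduler",  2200, 4100 },
        { "crypto offload",  600, 1900 },
    };

    unsigned best = 0, worst = 0;
    for (unsigned i = 0; i < sizeof components / sizeof components[0]; i++) {
        best  += components[i].best_cycles;
        worst += components[i].worst_cycles;
    }
    printf("expected cost per frame: %u..%u cycles\n", best, worst);
    return 0;
}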




Figure 6: Configuration Management streams used to help manage performance improvements of a Femto basestation project

Read Part 2: Software performance engineering for embedded systems – The importance of performance measurements

Rob Oshana, author of the soon-to-be-published “Software engineering for embedded systems” (Elsevier), is director of Software R&D, Networking Systems Group, Freescale Semiconductor.

References

[1] Shyam Kumar Doddavula, Nidhi Timari, and Amit Gawande, "A Maturity Model for Application Performance Management Process Evolution: A Model for Evolving an Organization's Application Performance Management Process."

[2] Lloyd G. Williams and Connie U. Smith, "Five Steps to Solving Software Performance Problems," June 2002.

[3] Connie U. Smith and Lloyd G. Williams, "Software Performance Engineering," in UML for Real: Design of Embedded Real-Time Systems, Luciano Lavagno, Grant Martin, and Bran Selic, eds., Kluwer, 2003.

[4] Lloyd G. Williams and Connie U. Smith, Performance Solutions: A Practical Guide to Creating Responsive, Scalable Software.

[5] Bart Smaalders, Sun Microsystems, "Performance Anti-Patterns: Want your apps to run faster? Here's what not to do."


Used with permission from Morgan Kaufmann, a division of Elsevier, Copyright 2012. For more information about “Software engineering for embedded systems,” and other similar books, visit www.elsevierdirect.com.
