Single core to multicore: Addressing the system design paradigm shift with project management and software instrumentation

You are a software/systems development lead on a complex embedded development project. There are many requirements to be met in order to satisfy the project specifications as well as an aggressive delivery timeline. The project is entering the integration phase. The functionality seems to be working well and you’re feeling pretty good about things.

But then it happens: initial tests show your system exceeding its performance budget by 1000%! Or, as you integrate the disparate components and begin to apply stress tests, resets occur so frequently that your system starts to look like a reboot test.

More functionality into faster, more powerful devices
With the exponential growth in the complexity of embedded systems, the above scenario is becoming all too common. Consider current mobile devices such as smartphones and tablets now hitting the market with four processor cores (plus an additional GPU core), such as Qualcomm’s Snapdragon, while other suppliers such as Samsung advertise eight-core (heterogeneous) devices for next-generation mobile products.

Then there are higher-end devices such as the LSI Axxia Communication Processors (supporting 16 ARM Cortex-A15 cores) for use in networking/telecom applications. It’s safe to assume this trend toward more functionality will not slow down any time soon.

Figure 1 shows an example of such a system and its possible components. This example could be a tablet, a mobile device, or even an automotive infotainment system. The demands from handheld to high-end devices are converging, and these systems are being asked to play Flash video, stream applications over Bluetooth, perform on-the-fly security tasks, be ready to take incoming calls, and more – while in many cases carrying the expectation that the user interface (UI) will not lose any responsiveness to touch gestures.

Figure 1: The inbound and outbound flow of data among devices continues to converge at alarming rates; system functionality needs to keep up with the increased demand for more and more data.

Multi-threaded, multicore, and even multi-OS hardware/software embedded systems lead to extremely difficult-to-diagnose interdependent issues such as non-optimized use of shared resources, including the processors themselves! In some cases, problems may not arise until integration starts, and some of these may have the potential to kill a project.

But in this article we suggest another option: pairing process with technology. That is, leverage a new technical solution for solving these problems, then merge that solution into the project’s software development processes to maximize the benefits.

Mitigating risk
Sound project management includes up-front risk mitigation plans. Thus, if you agree that what has been shared so far is an inherent risk in your upcoming projects, read on.

What has the mitigation for such risks been historically? Answer: people. Mitigation plans often add developers during the later phases of development to fix the issues. But bringing in a team at that point has its own set of risks: there is the required ramp-up time for new team members, as well as the need for the current team to set aside cycles to train the new developers.

Couple that with the fact that some of the issues will be very difficult to root-cause, demanding the attention of the experts on the team, and adding developers can actually cause a project to lose ground. To give the senior developers every advantage in solving these complex issues, traditional software debugging techniques are no longer adequate on their own. The experts on the team need new methods to resolve these problems efficiently.

Leveraging instrumentation
The new method proposed herein is the incorporation of software instrumentation to analyze the behavior of the system and help debug complex issues in a way that complements traditional software debug. Instrumentation, in this case, is defined as the insertion of code that generates trace data, which in turn reveals important information about the state and flow of a software application.

Though instrumentation has been used informally for many years, it has matured greatly from the days of “printf”, and its inclusion in formal software development processes for complex system analysis is long overdue. It is, however, an investment that must be designed in from inception if a project is to maximize the benefits. Instead of a developer putting everything learned into a debug session that is lost when the target is powered off, a project that invests in instrumentation puts much of what the team learns into the code itself, where it can be leveraged over the life of the program.
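To make the idea concrete, the sketch below shows the simplest form instrumentation can take: a call that appends a compact, timestamped binary record to a trace file rather than formatting text at runtime. All names here are hypothetical, and a production tracer would use lock-free per-thread buffers rather than buffered file I/O; this is only a minimal illustration of the concept.

```c
#include <stdint.h>
#include <stdio.h>
#include <time.h>

/* Hypothetical event IDs marking points of interest in the application. */
enum trace_event {
    EV_FRAME_START = 1,
    EV_FRAME_DONE  = 2,
    EV_LOCK_WAIT   = 3,
};

/* Append one fixed-size record (nanosecond timestamp, event ID, argument)
 * to a binary trace file for later offline analysis. */
static void trace_emit(enum trace_event ev, uint32_t arg)
{
    static FILE *trace_fp;
    struct timespec ts;
    uint64_t rec[2];

    if (!trace_fp)
        trace_fp = fopen("app.trace", "wb");

    clock_gettime(CLOCK_MONOTONIC, &ts);
    rec[0] = (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
    rec[1] = ((uint64_t)ev << 32) | arg;
    fwrite(rec, sizeof(rec), 1, trace_fp);
}

void process_frame(uint32_t frame_id)
{
    trace_emit(EV_FRAME_START, frame_id);
    /* ... actual frame processing work ... */
    trace_emit(EV_FRAME_DONE, frame_id);
}
```

Unlike a printf, such a record costs little at runtime and preserves precise timing, so the resulting trace can be replayed and visualized long after the debug session ends.
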
Establishing a process
To incorporate instrumentation into a project, an implementation process could be outlined as follows:

  1. Expect and plan up front for integration challenges in complex systems. Also, envision the long-term use, leveraging, and ROI that instrumentation can provide on your current project and any future projects.
  2. Define up front the complementary technical solution that will be most able to help the development teams to analyze, understand, and characterize system behaviors. This understanding will aid in debugging complex interdependent system issues. This solution is to instrument the code in preparation for use in future analysis. The solution should also define the Analysis Tool that will be used for the project.
  3. Modify up front the software development processes that every developer on the project must adhere to and enforce these throughout the development efforts.
  4. Resolve the issues as they occur – aided by capturing and visualizing the software flow in the selected Analysis Tool.

Instrumentation and software application analysis
In complex embedded systems, standard debug is no longer enough. The sooner project teams realize that analysis of software system behavior is just as critical as traditional debug, the sooner projects will begin to benefit.

When coupled with Analysis Tools, or even “trace viewers,” instrumentation can provide insights into the system’s behavior that are nearly impossible to obtain any other way. Some trace viewer tools are available as open source and can be helpful. Commercially available tools take this to the next level, providing insight into interdependent behaviors of the system as well as tuning views to analyze a developer’s specific use case.

Figure 2 provides a snapshot from Mentor’s Sourcery Analyzer (http://www.mentor.com/embedded-software/sourcery-tools/sourcery-analyzer/) as an example. The tool extends beyond a typical trace viewer by obtaining complex measurements, performing mathematical transforms on waveforms, and creating custom analysis agents, as well as supporting multiple input trace data formats.

Such advanced features contribute to the goal of understanding the in-depth behaviors of the software/hardware system. Perhaps most important of all, a commercial tool works out of the box, so the development team stays focused on debugging and optimizing its product.

Figure 2: Sample waveform view using Mentor’s Sourcery Analyzer.

LTTng and LTTng-UST
When defining a standardized approach to instrumenting a software application, a program should consider leveraging existing standards. For example, LTTng and LTTng-UST (User Space Tracing) in the Linux environment meet the requirements for a developer to visualize and analyze much of the system behavior.

LTTng is focused on Linux kernel behavior. LTTng-UST could be considered more of a framework standard for Linux applications, allowing a developer to add custom instrumentation to internally developed applications. Complying with the formatting defined by LTTng provides a pre-defined output format, and therefore allows the developer to use available analysis tools and trace viewers for behavioral analysis and debug.
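As a sketch of what LTTng-UST instrumentation looks like in practice, the fragment below defines a tracepoint provider and fires one event from application code. The provider name my_app and the event frame_processed are hypothetical examples; consult the LTTng-UST documentation for the full build recipe (the application links against liblttng-ust).

```c
/* ---- my_app_tp.h: tracepoint provider header (LTTng-UST) ---- */
#undef TRACEPOINT_PROVIDER
#define TRACEPOINT_PROVIDER my_app

#undef TRACEPOINT_INCLUDE
#define TRACEPOINT_INCLUDE "./my_app_tp.h"

#if !defined(_MY_APP_TP_H) || defined(TRACEPOINT_HEADER_MULTI_READ)
#define _MY_APP_TP_H

#include <lttng/tracepoint.h>

/* One custom event carrying a frame ID and its processing time. */
TRACEPOINT_EVENT(
    my_app,
    frame_processed,
    TP_ARGS(int, frame_id, long, duration_us),
    TP_FIELDS(
        ctf_integer(int, frame_id, frame_id)
        ctf_integer(long, duration_us, duration_us)
    )
)

#endif /* _MY_APP_TP_H */

#include <lttng/tracepoint-event.h>

/* ---- main.c: firing the event from application code ---- */
#define TRACEPOINT_DEFINE
#include "my_app_tp.h"

void process_frame(int frame_id, long duration_us)
{
    /* ... frame processing work ... */
    tracepoint(my_app, frame_processed, frame_id, duration_us);
}
```

With the application built, a capture session follows the usual LTTng command-line flow (lttng create, lttng enable-event --userspace 'my_app:*', lttng start, run the application, lttng stop, lttng destroy), and the resulting CTF trace can then be opened in any CTF-aware viewer.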

In some cases, it may be necessary to develop a project’s own trace data format. Such a format should be well thought out, and it makes sense for a project to do what it can to keep its instrumentation compatible with viewers that are already available. For example, CTF (Common Trace Format) is the trace format standard for LTTng. Using CTF provides support in open source trace viewers as well as in commercially available, higher-powered analysis tools.

Conclusion
The fact is that in many cases, single-core, single-threaded, single-OS solutions are history. There will continue to be use cases where a simple system is adequate; the need for a multicore SoC in a garage door opener, for example, doesn’t appear to be looming on the horizon. But to remain competitive in today’s complex embedded systems, it is critical not only to consider instrumentation technologies, but to plan for and implement them in the design.

Distinct software processes must be included to support these technologies. Don’t wait for the results to roll in at integration. It is far more prudent to plan for these tasks up front in the design process than to pay a steep price during the later phases of product development, when the inability to analyze critical system behaviors can have an outsized effect on the success of your embedded design project.

Don Harbin is the Product Engineering Manager for Sourcery Analyzer in the Embedded Systems Division at Mentor Graphics Corporation. Don has over 20 years of experience in the embedded industry, spanning hardware/software product development as well as leadership positions on large-scale embedded systems services contracts. He has been involved in embedded Linux solutions since 2001, holding positions at Intel, MontaVista Software, and currently Mentor Graphics.
