The basics of being Agile in a real-time embedded systems environment: Part 3

Agile methods differ from traditional industrial processes in several ways. Agile planning differs from traditional planning because an agile plan is – to use the words of Captain Barbossa [in Pirates of the Caribbean: The Curse of the Black Pearl (Walt Disney Pictures, 2003)] – “more what you'd call a guideline.”

Agile development tends to follow a depth-first approach rather than the breadth-first approach of traditional methods. Another key agile practice is test-driven development (TDD), which pushes testing as far up front in the process as possible. Finally, agile embraces change rather than fearing it.

Planning
It is a well-known problem in numerical analysis that the precision of a computational result cannot be better than that of the elements used within the computation (assuming certain stochastic properties of the error distribution, of course).

I have seen schedules for complex system development projects that stretch on for years yet identify the completion time to the minute. Clearly, the level of knowledge doesn't support such a precise conclusion. In addition (pun intended), errors accumulate during computations; that is, a long computation compounds the errors of its individual terms.
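
To make the arithmetic concrete, here is a minimal sketch, in C, of what happens when individual task estimates, each carrying its own uncertainty, are summed into a project schedule. The task names and numbers are purely illustrative, and the sketch assumes the estimation errors are independent, so their uncertainties combine as a root-sum-square.

#include <math.h>
#include <stdio.h>

/* One work item: a nominal effort estimate and its uncertainty, in days.
   The names and numbers are purely illustrative. */
typedef struct {
    const char *name;
    double estimate_days;
    double uncertainty_days;   /* one standard deviation */
} Task;

int main(void)
{
    Task plan[] = {
        { "requirements analysis", 20.0,  5.0 },
        { "architecture",          15.0,  4.0 },
        { "detailed design",       30.0,  8.0 },
        { "coding",                40.0, 10.0 },
        { "integration and test",  35.0, 12.0 },
    };
    size_t n = sizeof plan / sizeof plan[0];

    double total = 0.0, var = 0.0;
    for (size_t i = 0; i < n; ++i) {
        total += plan[i].estimate_days;
        var   += plan[i].uncertainty_days * plan[i].uncertainty_days;
    }

    /* Assuming independent errors, the uncertainties combine as a
       root-sum-square; the total is no more precise than its inputs. */
    printf("Planned effort: %.0f days, +/- %.0f days (one sigma)\n",
           total, sqrt(var));
    return 0;
}

Run against these sample numbers, the sketch reports roughly 140 +/- 19 days; quoting such a schedule to the minute would be claiming precision that the inputs simply cannot support.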

If you are used to working in a traditional plan-based approach, agile methods may seem chaotic and intimidating. The problem with the standard waterfall style is that although plans may be highly detailed and ostensibly more complete, that detail is wrong and the computed costs and end dates are in error.

Further, not only is the information you have about estimates fuzzy at best, it is also usually systematically biased toward the low end. This is often a result of management pressure for a lower number, with the misguided intention of providing a “sense of urgency” to the developers. Sometimes this comes from engineers with an overdeveloped sense of optimism.

Maybe it comes from the marketing staff who require a systematic reduction of the schedule by 20%, regardless of the facts of the matter. In any event, a systematic but uncorrected bias in the estimates doesn't do anything but further degrade the accuracy of the plan.

Beyond the lack of precision in the estimates and the systematic bias, there is also the problem of stuff you don't know and don't know that you don't know. Things go wrong on projects—all projects. Not all things. Not even most things. But you can bet money that something unexpected will go wrong. Perhaps a team member will leave to work for a competitor.

Perhaps a supplier will stop producing a crucial part and you'll have to search for a replacement. Maybe as-yet-unknown errors in your compiler itself will cause you to waste precious weeks trying to find the problem. Perhaps the office assistant is really a KGB (excuse me, that should be FSB now) agent carefully placed to bring down the Western economy by single-handedly intercepting and losing your office memo.

It is important to understand, deep within your hindbrain, that planning the unknown entails inherent inaccuracy. This doesn't mean that you shouldn't plan software development or that the plans you come up with shouldn't be as accurate as is needed. But it does mean that you need to be aware that they contain errors.

Because software plans contain errors that cannot be entirely removed, schedules need to be tracked and maintained frequently to take into account the “facts on the ground.” This is what we mean by the term dynamic planning—it is planning to track and replan when and as necessary.
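
As a rough illustration of what “track and replan” can look like, the sketch below projects the remaining work from the velocity actually observed in completed iterations rather than from the original guess. The function and the numbers are hypothetical; the point is only that the forecast is recomputed from the facts on the ground as each increment finishes.

#include <stdio.h>

/* Replan from observed velocity: a minimal sketch, not a real scheduler.
   All numbers below are hypothetical. */
static double observed_velocity(const double *done, int iterations)
{
    double sum = 0.0;
    for (int i = 0; i < iterations; ++i)
        sum += done[i];
    return iterations > 0 ? sum / iterations : 0.0;
}

int main(void)
{
    double work_remaining  = 180.0;                 /* story points still to do */
    double done_per_iter[] = { 22.0, 18.0, 20.0 };  /* completed so far         */
    int    iterations_done = 3;

    double v = observed_velocity(done_per_iter, iterations_done);

    /* Replan: project the remaining iterations from measured velocity,
       not from the original estimate. */
    double iterations_left = work_remaining / v;
    printf("Observed velocity: %.1f points/iteration\n", v);
    printf("Projected iterations remaining: %.1f\n", iterations_left);
    return 0;
}

With the sample data the observed velocity is 20 points per iteration, so nine iterations remain; if the next increment comes in slower or faster, the projection moves with it.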

Depth-First Development
If you look at a traditional waterfall approach, such as is shown in Figure 1.10 below, the process can be viewed as a sequential movement through a set of layers. In the traditional view, each layer (or “phase”) is worked to completion before moving on. This is a “breadth-first” approach. It has the advantage that the phase and the artifacts that it creates are complete before moving on.

Figure 1.10. Waterfall lifecycle.

It has the significant disadvantage that the basic assumption of the waterfall approach—that the work within a single phase can be completed without significant error—has been shown to be incorrect. Most projects are late and/or over budget, and at least part of the fault can be laid at the feet of the waterfall lifecycle.

An incremental approach, as shown in Figure 1.11 below, is more “depth-first” (also known as spiral development) because only a small part of the overall requirements is dealt with at a time; these requirements are detailed, analyzed, designed, and validated before the next set is examined in detail. (The astute reader will notice that the “implementation” phase has gone away. This is because code is produced throughout the analysis and design activities.)

Figure 1.11. Incremental spiral lifecycle.

The result of this approach is that any defects in the requirements, through their initial examination or their subsequent implementation, are uncovered at a much earlier stage. Requirements can be selected on the basis of risk (high-risk first), thus leading to an earlier reduction in project risk.

In essence, a large, complex project is sequenced into a series of small, simple projects. The resulting incremental prototypes (also known as builds) are validated and provide a robust starting point for the next set of requirements.

Put another way, we can “unroll” the spiral approach and show its progress over linear time. The resulting figure is a sawtooth curve (Figure 1.12 below) that shows the flow of the phases within each spiral and the delivery at the end of each iteration.

Figure 1.12. Unrolling the spiral.

Each such release contains “real code” that will be shipped to the customer. The prototype becomes increasingly complete over time as more requirements and functionality are added to it during each microcycle. This means not only that some requirements, presumably the high-risk or most critical ones, are tested first, but also that they are tested more often than low-risk or less crucial requirements.

Test-Driven Development
In agile approaches, testing is the “stuff of life.” Testing is not something done at the end of the project to mark a check box, but an integral part of daily work. In the best case, requirements are delivered as a set of executable test cases, so it is clear whether or not the requirements are met.

As development proceeds, it is common for the developer to write the test cases before writing the software. Certainly, before a function or class is complete, the test cases exist and have been executed. As much as possible, we want to automate this testing and use tools that can assist in creating coverage tests.
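
As a small, hedged example of writing the test before the code, the fragment below expresses an invented requirement, a saturating 16-bit add, as a set of executable assertions. The function name and the requirement itself are illustrative, not taken from the text; the idea is simply that the test cases exist and run automatically in the build before the function is considered complete.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Requirement (illustrative): add two unsigned 16-bit values, saturating
   at 65535 instead of wrapping around. */
static uint16_t sat_add_u16(uint16_t a, uint16_t b)
{
    uint32_t sum = (uint32_t)a + (uint32_t)b;
    return (sum > UINT16_MAX) ? UINT16_MAX : (uint16_t)sum;
}

/* The test cases are written first and run on every build; the function
   above is only "done" once they all pass. */
static void test_sat_add_u16(void)
{
    assert(sat_add_u16(0, 0) == 0);             /* identity            */
    assert(sat_add_u16(1000, 2000) == 3000);    /* ordinary sum        */
    assert(sat_add_u16(65535, 1) == 65535);     /* saturates, no wrap  */
    assert(sat_add_u16(40000, 40000) == 65535); /* saturates from both */
}

int main(void)
{
    test_sat_add_u16();
    puts("all tests passed");
    return 0;
}

In practice the assertions would be written first, the build would fail until sat_add_u16 existed, and the same tests would then run on every subsequent build as a regression safety net.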

Embracing Change
Unplanned change in a project can occur either because of the imprecision of knowledge early in the project or because something, well, changed. Market conditions change. Technology changes. Competitors' products change. Development tools change. We live in a churning sea of chaotic change, yet we cope.

Remember when real estate was a fantastic investment that could double your money in a few months? If you counted on that being true forever and built long-range inflexible plans based on that assumption, then you're probably reading this while pushing your shopping cart down Market Street in San Francisco looking for sandwiches left on the curb.

We cope in our daily lives because we know that things will change and we adapt. This doesn't mean that we don't have goals and plans but that we adjust those goals and plans to take change into account.

Embracing change isn't just a slogan or a mantra. Specific practices enable it: plans that specify a range of successful outcomes; means by which changing conditions can be identified, analyzed, and adapted to; and methods for adapting what we do and how we do it so that we become as nimble and, well, agile as possible.

In the final analysis, if you can adapt to change better than your competitors, then evolution favors you.

Conclusion
Agile approaches are important because of the increasing burden of complexity and quality and the formidable constraint of a decreasing time to market. Understanding such real-time concepts as execution time, deadline, blocking time, concurrency unit, criticality, and urgency is a prerequisite to understanding how agile methods can be applied to the development of such systems.

Model-driven development (MDD) is also important in implementing the agile approach. Although not normally considered “agile,” MDD provides very real benefits in terms of conceptualizing, developing, and validating systems. MDD and agile methods work synergistically to create a state-of-the-art development environment far more powerful than either is alone.

To read Part 1, go to What is agile development and why use it?
To read Part 2, go to Benefits of Agile Methods.

Used with the permission of the publisher, Addison-Wesley, an imprint of Pearson Higher Education, this series of three articles is based on material from “Real-Time Agility” by Bruce Powel Douglass.

Bruce Powel Douglass has worked as a software developer in real-time systems for over 25 years and is a well-known speaker, author, and consultant in the area of real-time embedded systems. He is on the Advisory Board of the Embedded Systems Conference, where he has taught courses in software estimation and scheduling, project management, object-oriented analysis and design, communications protocols, finite state machines, design patterns, and safety-critical systems design. He develops and teaches courses and consults in real-time object-oriented analysis and design and project management and has done so for many years. He has authored articles for many journals and periodicals, especially in the real-time domain.

He is the chief evangelist for Rational/IBM, a leading producer of tools for software and systems development. Bruce worked with various UML partners on the specification of the UML, both versions 1 and 2. He is a former co-chair of the Object Management Group's Real-Time Analysis and Design Working Group. He is the author of several other books on software, including Doing Hard Time: Developing Real-Time Systems with UML, Objects, Frameworks and Patterns (Addison-Wesley, 1999), Real-Time Design Patterns: Robust Scalable Architecture for Real-Time Systems (Addison-Wesley, 2002), Real-Time UML, Third Edition: Advances in the UML for Real-Time Systems (Addison-Wesley, 2004), Real-Time UML Workshop for Embedded Systems (Elsevier Press, 2006), and several others, including a short textbook on table tennis.
