Using many processors is a valid solution to the growing complexity of embedded design problems companies face: assign specific tasks to many small “smart devices” (these can be microcontrollers that cost between $0.50 and $2 depending on features) and make these smart devices do their intended function flawlessly (becoming a trusted component that meets its specification with margin).
Since a company typically solves the same or a similar problem over and over (whatever their specialty is), these smart devices will be reusable and upgradeable over time, allowing the engineers to focus their attention on solving the new parts of the problem.
In “Escape the software development paradigm trap,” Mark Bereit called for a new software development paradigm. He asserted that “…software development should involve trustworthy components, specifications and margins;…it should be something that can be generally dependable and trustworthy.” He proposed a development model that assigns each task in a design to a different processor: stop trying to make ever more complicated single devices correct, and instead assign one or a few tasks to many devices.
Jack Ganssle earlier wrote in support of this approach in his article “Subtract software costs by adding CPUs,” as a method to recover the software productivity lost to larger, more complex projects, which data show take longer to complete and have higher defect rates.
But simply using lots of processors to solve smaller problems still leads designers down a familiar road when a new “smart device” is required: the do-it-yourself, start-from-scratch, build-a-custom-solution-every-time highway. What if you could build the “smart devices” up out of trusted software components, and add to your catalog of software components as you go, creating or acquiring what you need?
Then each new project could build upon and benefit from work that has already been completed but, more importantly, tested and proven. What if an entire community of engineers were using these same software components in their designs? Then each team of engineers benefits from the work of many unknown and unrelated engineering teams that put the same software components through their paces, reporting and fixing bugs and sharing ideas and methods.
Build A Bridge To Reality
If this sounds like the usual ivory-tower rantings of a Utopian madman, fret not. The bridge from Utopian vision to everyday reality is straightforward, provided we have an embedded design tool whose architecture incorporates these key ingredients:
* Portability: so that catalog items and completed designs can be moved from one embedded processor to another, or from one architecture to another.
* Seamless multiprocessing support: so that a widely distributed design does not become a (system) management nightmare; ensure that masters and slaves can be independently developed and verified.
* Design visualization: so that entire embedded designs are visible and reviewable by any and every member of the team, not just the low-level implementer; a full design should be viewed, manipulated, and even simulated easily.
* Rich content library: so that ready-to-use, verified components are widely available and, most importantly, the published architecture provides anyone the power to add new elements to the catalogs and to share these as widely as they choose.
The Blueprints: The Bridge From The Inside Out
Delving deeper into the four key ingredients of our Utopian design tool, let's see how each helps us attain our goal of developing a reliable embedded system in less time.
Portability, first and foremost, supports the most basic reliability-building method: iteration. With programmatic support for portability, a working solution can not only be taken from design to design but also improved along the way. And there is no faster path to greater success than “standing on the shoulders of giants,” building upon the successful work of those who came before us.
This notion of portability is not new, but it is rarely achieved without artificial aid. While portability is the raison d'être of programming languages and entire computing systems (e.g., Linux, HTML, WinTel), portability in embedded systems is used mostly to sell aggressive schedules and overly optimistic budgets.
To qualify as a key strut of our bridge, portability must deliver support for a new design (iteration) and flexibility to adapt to changing requirements (progress). The most common form of portability is to use a certain technology over and over, be it a specific CPU core (like the 8051) or a specific vendor's microcontroller.
This kind of portability is flawed in almost every respect that a progressive design requires. For a tool to foster a better form of portability, it must bridge the gaps of a specific technology by providing a hardware abstraction layer.
When a design description calls out its hardware needs in general terms, and specific processors and specialized hardware can describe their resource offerings in similar terms, a design tool can search out matching hardware from a universe of possibilities and couple it to the specific design. Then a design can easily move from fewer to more requirements, or from old to new hardware or technology, gaining benefits with each new step, giant-sized or dwarf-sized.
Seamless multiprocessing, the second ingredient, is more than just a mouthful of jargon. Along with portability, it spans ever larger and more diverse embedded problems without the complexity (and lack of reliability) typically associated with large or expanding projects.
This is the fundamental approach Ganssle and Bereit pose as the solution to the current paradigm trap. Unfortunately, design problems rarely separate cleanly into smaller pieces, and if the new task of communicating with or synchronizing the parts is harder to get right than the one large design, there is no net gain.
This is where the design tool helps form a bridge, by providing built-in communication mechanisms that allow any device to add an interface to the outside world (for control input or monitoring), and by incorporating remote elements in a design as “seamlessly” as if they were “wired in”. In keeping with portability, these remote elements of the design need to be swappable with real “wired-in” elements if the next (or current) design calls for it.
Imagine a network switch or router that employs an embedded system to maintain the enclosure temperature, with fans for cooling and various temperature sensors to sense the heat. The first generation was developed with all components on one board: hard-wired thermistors sensed temperature, and the same device controlled the speed of the fans. A depiction of this system is shown in Figure 1 below.
Figure 1: Single Controller Senses Five Thermistors and Drives Two Fans
Now suppose the next generation model is defined and requires its hardware components to be carefully spread across three circuit boards, without the option of hard-wiring all the thermistors needed to monitor every corner of the enclosure to one MCU (because daughter boards are plugged into a backplane with numerous signaling and routing constraints).
A tool that supports seamless multiprocessing would let the designer start with the first-generation design and then replace each thermistor with, for instance, an I2C slave device which senses temperatures locally on each separate card, automatically adding the communications code to handle the accessing of temperature readings from the remote devices (which must adhere to an agreed-upon protocol).
In this case, the fan-controlling device design changes minimally, even though the system is radically different (and all of the underlying generated code has to change).
For remote temperature sensing, suppose it was just as easy to create one or more small devices that took the thermistor-sensing approach from the original design and exposed the sensed temperatures as an I2C slave device (using the same agreed-upon protocol). Voila! Two (or three) embedded devices seamlessly operating as one, yet totally independent. A depiction of the new system is shown in Figure 2, below.
Figure 2: MCU1 Senses Three Thermistors, MCU2 Senses Two Thermistors, MCU3 Reads Five Temperatures Via I2C Bus And Drives Two Fans
Design visualization provides two necessary benefits that reduce both the time to complete a design and the stress: 1) all aspects of a design can be reviewed easily, and 2) with simulation, the design can be demonstrated for all to see. For many decades software pundits have extolled the benefits of compact and visual design models and described (often theoretically) such tools.
In many fields today this has been realized, though often defined around a particular domain need (for example, drag-and-drop-style HTML tools and LabVIEW data acquisition and instrument control). The problem is that these either do not address embedded design problems, or when they do, it is often with a new and equally complex graphical language. While LabVIEW brings instrumentation programming up to the level of circuit schematics, not everyone can understand schematics.
Embedded design visualization needs to be “as simple as possible, but not simpler.” Show what is important to the understanding of the problem being solved, and hide the rest. For instance, if input data is used by two different portions of the control logic, the design tool's display screen or report should clearly provide visual cues as to which parts do and which don't use the data.
The simulation needs to show how changes in the input data flow through the control logic to the outputs, while remaining at the highest level of representation relevant. The low-level details (for instance, the communications protocol or how a 12-bit analog-to-digital converter (ADC) value is converted to a temperature) are not relevant to the understanding of the system logic, and therefore are hidden.
Since these are often hardware-implementation specific, we already agreed when discussing portability that the design does not (even must not) hinge on these details. Of course, the definition of “as simple as possible” will lie in both the eye of the beholder and the nature of the problem solved, which requires an architecture that is flexible and adjustable.
Now that we have come to our last ingredient, it is clear that no single ingredient is sufficient and all are necessary. This is underscored by the need for a rich content library of trusted components or building blocks. Without portability, seamless multiprocessing and design visualization, a rich content library is of limited value. The baseline of what constitutes “richness” in content, as in life, depends upon what you have today.
In essence, if you can solve your problems, the library is sufficiently rich; yet since we always seek better solutions, the library will always lack something. By necessity our tool must allow content to be added easily and by anyone. Therefore the tool must be architected to discover new content, and the methods for creating and distributing the content (so that it can be discovered) must be published and accessible.
We truly have reached utopia when the tool itself enables the solutions created with it to be published as new content, hiding the implementation details from the next user (until or unless he or she needs them).
What about trust in the components? This is not automatic, nor is it an architectural element; it is built upon policies and community (and sometimes the policing of a community). Sound methods for identifying verified content and sound policies for what constitutes trusted content are both required.
In the beginning, the community may be the tool's author or vendor, and the policies are those created by the author or vendor to ensure trust and limit liability. In time, the community can supersede the influence of the vendor as adoption grows (if methods are published and the architecture is sufficiently open to allow this). If this is not the goal of the author or vendor, the community will struggle to expand and flourish.
Are we there yet?
What will it take to achieve the kind of flexible, reliable and efficient programming and development environment essential in today's increasingly complex, multiple-processor embedded environment?
Quite a lot, if we stick to traditional techniques for programming microcontrollers, such as run-time programming, where some microcontroller devices include provisions for programming the microcontroller memory while the unit remains installed in the system.
Run-time programming schemes generally provide special circuits that allow user application software to modify the memory contents. This modification, or programming, is usually done by invoking a particular subroutine during the normal course of software execution.
One disadvantage of run-time programming is that it usually requires the user to devote a portion of the microcontroller's available memory space to support the programming function. This memory space is generally used for a software subroutine that serves as an interface between the user's application software and the microcontroller's programming mechanism.
A second disadvantage of run-time programming is that it is not well suited to programming a completely new (un-programmed) microcontroller. Since most run-time programming mechanisms depend on the above-mentioned interface subroutine, that subroutine must be installed by means of special-purpose programming equipment before the device can program any of the remaining user memory space.
In addition to the drawbacks described above, the user-interface portions of many conventional software applications for programming microcontrollers are very difficult to use. Many of the user-interface windows in such software tend to pop up as the user is attempting to program the microcontroller. Windows in the design software pop up based on a “flat-organized” drop-down menu system with little or no cues as to the overall design process.
Each window tends to correspond to a discrete function of the microcontroller, and many functions may be required to do simple programming tasks. Importantly, the windows give no information as to which ones should be used first and the subsequent order in which they should be used.
It is also difficult to transition from one window to another when sharing resources, because the programmer cannot remember which window contained the source of the required data and which window needed the data. Having many windows open on a computer screen can often confuse the programmer, who is unable to keep track of which window represents which function of the microcontroller.
Another drawback of conventional microcontroller programming methods is the inability to track hardware resources, such as memory, power, programmable logic, etc., used as the programmer adds or deletes components in a system that is to be implemented on a target device.
What is needed is a new approach: a microcontroller programming application with extensibility capabilities that allows programmers to dynamically program a microcontroller, with datasheets incorporated in the programming software.
A need also exists for “out-of-the-box” microcontroller programming system solutions that allow programmers to efficiently organize the design components necessary for the complete programming of a microcontroller, without unduly tasking the programmer, while tracking system resource usage by the target microcontroller.
These issues and a possible development framework that overcomes such limitations will be the topic of Part 2: “The right development framework and how to use it.”
Jon Pearson is the product manager for PSoC development tools at Cypress Semiconductor Corporation. Jon has been with the PSoC product line since its first offering in 2000 and led the definition of PSoC Express. Prior to joining Cypress, Jon developed embedded systems using controllers of various size and complexity for projects as diverse as satellite modem controls and aircraft electric power generation. You can reach Jon with questions and comments by email at firstname.lastname@example.org.