managing editor

Susan Rambo is the executive editor of EE Times. Prior to EE Times, she was the managing editor of and its print publication, Embedded Systems Design (formerly Embedded Systems Programming), from 2002. You may reach her at


Susan Rambo's contributions
    • Here's the obligatory holiday tech gift list for people who don't know what to buy an engineer for that special end-of-year celebration: Christmas, Hanukkah, Kwanzaa, winter solstice, or what have you.

    • The ACE Awards ceremony at the 2014 ESC/EE Live! included winners in the usual categories (companies, products, execs, and design teams) but also awards for marketing, journalists, and contributors. View the slide show to see who won!

    • Ever have your iPhone run over by a Humvee or two? For Memorial Day, iFixit, the repair site for electronic gadgets, is sharing some of the best of repair stories they've collected from U.S. military personnel.

    • The Electronics West show will be held February 12 through 14, 2013, at the Anaheim Convention Center, Anaheim, CA.

    • TI's new RTOS for its microcontroller platforms combines a real-time multitasking kernel with TCP/IP, USB stacks, and other middleware components.

    • The Embedded Market Survey has been tracking trends in embedded systems every year since the early 1990s. Here are the archived results of the surveys by year, plus links to articles based on the results.

    • U.C. Berkeley's EECS department sponsors "DREAM Seminar: Sensor fusion in dynamical systems--applications and research challenges," to be held Dec. 11, 2012, in the Wozniak Lounge, Soda Hall, on the U.C. Berkeley campus. Thomas Schön, Linköping University, Sweden, is the speaker.

    • Globecom 2012 (IEEE Global Communications Conference) will be held Dec. 3 through 7, 2012, at the Disneyland Hotel in Anaheim, California.

    • MENS 2012: The 4th IEEE International Workshop on Management of Emerging Networks and Services will be held Dec. 3 through 7, 2012, in Anaheim, California. Part of Globecom 2012.

    • The Maker Faire will be held December 1 and 2, 2012 at the National Museum of Emerging Science and Innovation in Tokyo, Japan.

    • The 9th International Conference on Machine Learning and Data Mining (MLDM 2013) will be held July 19 through 25, 2013, in New York, New York.

    • Interservice/Industry Training, Simulation and Education Conference (I/ITSEC) will be held in Orlando, Florida, Dec. 3 through 6, 2012.

    • Tech & Food Hackathon to tackle meat industry challenges. The hackathon runs all weekend, starting on the 7th. Final project pitches will be on Sunday afternoon (the 9th), which will be the most interesting part.

    • It's a stamp act at the January 2013 DesignCon: dollars, high-tech swag, and ESC all-access passes are at stake. Plus energy-harvesting and automotive tagline contests.

    • It's in the bag. Giveaways, swag, free stuff, contests, prizes, drawings. Here's a listing of upcoming opportunities for engineers to earn, compete for, or just snatch and grab free stuff (mostly from trade vendors).

    • Nikola Tesla's last remaining lab is for sale. Here's what others are doing to help save it and how you can help.

    • Welcome to a new space where embedded systems developers and engineers can exchange coding tips and tricks, tools, and techniques for integrating embedded systems into networks on the Internet (the cloud).

    • As editor-in-chief of Embedded Systems Design magazine, Colin Holland takes a regular bird's eye view of the embedded industry.

    • 1999 Embedded Market Survey, produced by Embedded Systems Programming magazine and the Embedded Systems Conference.

    • Embedded Systems Programming magazine's subscriber survey from 1997 is available here for history buffs.

    • Interesting comment, Cdhmanning. Complexity has exploded, you say. Complexity in hardware, software, systems? All of the above? Isn't computing power and memory almost free now (compared with the early days), so we can get more complex? Perhaps complexity can be seen as a sign of progress. But the temptation is then to make something complex just because we can. (Or because we're too lazy to make it simple.) Maybe that's the people problem you refer to.

    • Hey, Steam Kid, thanks for catching that. I fixed it. Now Figure 4 looks like Figure 4.

    • With the "Cloud" and Internet of Things, more and more corporate IT and embedded systems folks are going to have to work together. Both have knowledge to share with each other that is valuable, with all the embedded systems ending up on the networks.

    • Hello Rwehri. Thank you for your interest in this report from over 10 years ago. The document was not created in 2012 -- it was just uploaded to this website in 2012. Thank you for at least registering to access the free report. Unfortunately there's not much profit in making a report from 1999 look good by 2013 standards. Frankly, we're lucky it exists at all. But I'll take a look at it in my spare time, unless you know any experts from the preservation society. I'm sorry you were disappointed.

    • Here's a job for embedded systems programmers looking for a slight career change: Administrative Patent Judge - Electrical, Computer, Mechanical "Yearning for a job that harnesses the power of innovation, yet provides work/life balance? Now you can experience this opportunity as an Administrative Patent Judge with the Patent Trial and Appeal Board."

    • Thanks for your comment. Yes, I am still uploading the archived articles from Embedded Systems Programming magazine and the Embedded Systems Conference. I'll write a blog about it soon, because I know people care about having this content available. (I just uploaded some old Embedded Systems Programming articles from 2000 on Friday and will update the Tables of Contents so you can see what content is being uploaded. I still have probably 1,000 more articles I can upload. However, most of those will be PDFs, so they'll be easier to upload than an HTML version.) Maybe I should crowdsource the upload and get some engineers to upload articles so they're available for everyone to read.

    • At least your engineer friend found an owner's manual in his rent-a-car. When I rent a car, the owner's manual always seems to be missing, which is very annoying because not all cars have the same controls, as Bernard Cole points out.

    • Thanks Esteban, I removed the link to the webinar. We only keep webinars for two years, so this one had expired and will be deleted from our system. --Susan Rambo

    • Thanks. We are fixing this now. Sometimes our articles collide with each other; the text in this article is obviously from another article. Our web-based content management system is a bit touchy, so every so often something like this happens. --Susan Rambo

    • RogerC and Studleylee, thanks for noticing the filename capitalization inconsistencies. We've corrected them in the article above. --Susan Rambo Managing editor

    • Thank you, Antiquus. We've corrected the error. The key in Figure 1 was incorrect, but I have replaced the figures with the originals Dan Saks submitted. He used different colors for the languages in Figures 1 and 2, but within each figure the color keys are now correct.

    • The Computer History Museum has a good timeline with pictures, here:

    • Thanks Peralta_Mike and all of you who caught some errors. I fixed the superscripts (106 should be 10^6, and 109 should be 10^9). Jack Ganssle's original text in Word was correct, but because we have to restyle the text for print, the superscripts had to be reapplied, and I missed them as I was laying out the article. (These articles need several steps of copyediting and proofreading, but we don't have the staff for that anymore, which means sometimes this kind of thing slips through.) Sorry about that. Contact me at

    • RamSan, The code is now here: The link is live on page 2 of the article above. You do have to log in to download the code. Thanks for letting us know about the broken link. --ESD managing editor

    • Thanks Nihil Obstat, I updated the link to the code. You can find it here:

    • Daedalus, I fixed the code you mentioned above (on page 2 of this article): the ="" was a mistake that occurred during the HTML typesetting but the angle brackets are in Dan Saks' original text. --Susan Rambo, managing editor, ESD magazine

    • Ddaly, I fixed the images. It had nothing to do with scaling the images up or down. I simply replaced a GIF image with JPG, and that did the trick. (In the GIF image, the underscores were too light.) Thanks for noticing. BTW, if you click on the image, you can see the full-sized graphic. ---S. Rambo, managing editor, ESD magazine

    • I reposted the Motor.c code in our new source code library. You can find it here. (The link in this article is also now up to date.) I'll be updating all the ESD/ESP magazine source code archive links in the next month to make sure all source code from Embedded Systems Programming/ESD magazine is connected to an article and vice versa. --Susan Rambo Managing editor, ESD magazine

    • I just spoke with Barbel French, the press rep for Green Hills Software, who was describing an ESC Boston presentation led by Ron Wilson (editorial director of ESD, ESC, and She said she was surprised how few people raised their hands when he asked how many were using static-code analysis tools--only three people. (I don't know how big the audience was...I assume more than three people.) She said static code analyzers are expensive (which may be partly why they aren't used as much as they should be?), so GHS added its static-code analyzer to MULTI (the GHS IDE). Users won't have to buy the static analyzer separately. Seems like a good plan and a painless way to try out a code analyzer if you're already using MULTI. (She also mentioned that some developers use several static-code analyzers, because different analyzers find different bugs.) Here's an interesting column written earlier this year by Jack Ganssle about static analyzers. Any thoughts on static analyzers? Are you using them? Why or why not? --Susan, ESD managing editor

    • cmhicks, Sorry about that: these forums aren't ideal for those of you who want to post code. Both angle brackets and square brackets disappear. You have to use HTML codes to get them to appear (and sometimes even that doesn't work). Here's the code you were trying to post:

      One approach I sometimes use is to template a class like this on the base address of the peripheral. This removes unnecessary indirections, and gets the cast (or its equivalent) inside the class header file. You then instantiate it like this:
      timer_registers<0xFFFF8000> the_timer;
      and use it like this:
      the_timer.set_period(1000); // etc.
      If you've got more than one peripheral of the same type, you can do:
      uart_registers<0xFFFF9000> uart0;
      uart_registers<0xFFFF9800> uart1;
      the penalty in this case being duplicated code for the two UARTs. CH ==
      (The forum ate CH's angle brackets a second time; the template arguments and the method call shown above are reconstructed placeholders, not his original values. -- sr)
      --Susan Rambo Managing editor, ESD magazine
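      For anyone who wants to try CH's approach, here is a minimal, self-contained sketch of the technique; the register layout, method names, and base address below are illustrative placeholders, not from his post. -- sr

      #include <cstdint>

      // The peripheral's base address is a template parameter, so no pointer
      // member is stored in the object and the cast lives in the header file.
      template <uint32_t BASE>
      class timer_registers {
          struct regs {                   // hypothetical timer register layout
              volatile uint32_t load;     // reload value
              volatile uint32_t value;    // current count
              volatile uint32_t control;  // bit 0 = enable
          };
          static regs* p() { return reinterpret_cast<regs*>(BASE); }
      public:
          void set_period(uint32_t t) { p()->load = t; }
          void enable()               { p()->control |= 1u; }
          uint32_t count() const      { return p()->value; }
      };

      timer_registers<0xFFFF8000> the_timer;  // placeholder address

      int main() {
          the_timer.set_period(1000);
          the_timer.enable();
          return static_cast<int>(the_timer.count());
      }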

    • KarlS, Thank you for pointing out this forum's Kafkaesque instructions about the preview tab (there is no preview tab as far as I know). Just today, after reading your comment, I reminded the editors-in-chief and the manager of our web development that the preview tab needs to be added. And once again, thanks to everyone for your patience in using this forum. Although the user experience can be improved, the automatic posting still beats having an editor post every reader's response by hand, as we had to do a few years ago. (For one, we'd never have any discussions if you had to wait for an editor to post or approve your comment.) If, however, you have posted something that you'd really like to have fixed, contact me by e-mail with the word "Forum" somewhere in your subject line. I can edit or remove your comments. --Susan Rambo Managing Editor, Embedded Systems Design magazine Site editor, Industrial Control Designline EE Times Group Email:

    • More reader comments (originally posted on other sites) Great article; you described my style to a "T". Loved your comments about punched tape - remember the times when the tape missed the bucket and you would have to scoop it off the floor and refold it? On those occasions, I was really glad my coding had to fit into a 2K EPROM! I, too, write those one- or two-line general utility functions. Guess I'm lazy, 'cause a lot of them are out there in libraries. But trying to find exactly the right one for the task at hand can be more work than rolling your own! Maybe programming style really has a lot to do with your early applications. Like you, mine were low-level hardware control programs, so I nearly always started with the drivers and I/O. Hearing relays clank away and watching LEDs flash during testing really gave me the warm fuzzies. Ah, the good old days. Now I'm so far away from the hardware that it could crash and burn and my program wouldn't know it for days. --Brian (Bakhruddin) I suspect that most people of Jack's vintage and experience do the same. It is all a question of what one has been exposed to. I certainly could have written a very similar story, and I still work in the way he describes. I learnt my programming more than 20 years ago on an oil rig, where we designed the system as we went along and testing was running the code written in the last hour for real. After we knew how it worked, we wrote flow diagrams and subroutine call trees by hand on bits of paper in order to convince ourselves we knew what we were doing. I still use this type of code/design iteration technique today. Most certainly those of us who remember having to boot a system by entering the boot code on toggle switches really appreciate using an IDE. Why make programming harder? Long live KISS. --slabs Really great article, thanks. I admit that I don't like the idea of having everything "as modular as possible"; it depends on the situation. When you write PLC-like state programs, fragmenting something into blocks just to have it divided into blocks is possible, but the readability of such code suffers. Whatever is functional is beautiful. --pteryx

    • More reader comments (originally posted on other sites) Hi Jack, I read your paper with a smile: what a surprise, the way I write software is exactly the same as you. Possibly this is because we have one thing in common: I'm a practitioner who has had his hands dirty for 20 years, starting from 8051 assembly and later moving on to C51, PIC, and ARM. Maybe our way of writing software is a bit old fashioned. But customers won't care how we write; all they want is a final product that's reliable and bug-free. I'm happy I still get hired, which partly proves this "old" fashion is a good fashion. Enjoy writing, --Zhe (ukzw) Jack, You are a guy after my own heart. Starting with assembly code for 8080s, 8085s, 6802s, and 6809s and later some C and C++ was the way I did embedded programming. I relate well to your comment on Turbo Pascal--it was wonderful. I did modules, not as small as yours but generally for specific functions, even when doing assembly language programs. There was a similarity between my modules and the units in Turbo Pascal. I did the top down, with some bottom up, and maybe your external-internal approaches. Each module was tested extensively, and therefore integrated well at the end. Maybe the fact that I was fundamentally a hardware designer warped my mind a little, as I expected each subsection (unit, function) to actually be known to work as early on as possible. This was my ego trip of "that's good, that works, now let's go on." My very first programming was on an IBM 1401 with 8K bytes of memory and done in SPS-2, basically assembly language, and this was done in 1964, so you know how that dates me. Thanks for your great perception of things when you write your articles. --Lyeal Lyeal, I think you put your finger right on a couple of very good points. First, I think having written in assembly language is a big plus. I sure wouldn't want to force everyone to have to do the same, but I do think the experience gives one a very specific view of what's going on inside the hardware that's hard to get otherwise. Even more importantly, I think understanding hardware design is also a plus. A hardware designer truly understands the concept of "black box," to the extent that it's an ingrained instinct. When I was about 10, I got interested in ham radio, so I decided to build my own receiver from a schematic I found. I went to the store, bought all the parts, and soldered them together (with acid-core solder, no less). Guess what? It didn't work. That's where my project ended, because I had no idea what was wrong. I had found the schematic, no problem, but I had not the slightest clue what all those parts were supposed to be doing. Fortunately, I got smarter with age. Today, I'd build a certain circuit, say an oscillator. Then I'd test it with a scope, etc., to make sure it was osc-ing, and at the right frequency. My pal Peter Stark designed a computer kit that worked that way. The first circuit elements consisted of the CPU, a clock crystal, and an LED indicator light (poor man's scope). He tied the data lines to ground, so as the CPU ran, it would cycle through the address space. That's my kind of circuitry, and the concept can and should carry over into software. --Jack Crenshaw

    • You have to use the HTML codes to get the greater-than/less-than and square-bracket glyphs. I hope I fixed it for you. --S. Rambo Managing editor Embedded Systems Design magazine

    • The following comment was sent via e-mail: In "Oversampling with averaging to increase ADC resolution," Franco Contadini describes how to get 16 bits of RESOLUTION from a 12-bit ADC, but it certainly doesn't guarantee 16 bits of ACCURACY. At the end of the day, accuracy is usually what we're after. Hardware Guy
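      (Editor's note: for readers unfamiliar with the technique, here is a minimal sketch of oversampling with averaging; the read_adc12() stub is hypothetical. Each added bit of resolution costs 4x the samples, so 12 to 16 bits takes 4^4 = 256 reads, and it only works if roughly an LSB of noise is present to dither the input. As Hardware Guy says, this buys resolution, not accuracy: offset and gain errors average right along with the signal. -- sr)

      #include <cstdint>

      // Stub for illustration; a real driver would read the ADC data register.
      static uint16_t read_adc12() { return 2048; }  // returns 0..4095

      uint16_t read_adc16_oversampled() {
          uint32_t sum = 0;
          for (int i = 0; i < 256; ++i)   // 4^4 samples buy 4 extra bits
              sum += read_adc12();        // max sum 256*4095 fits in 20 bits
          return (uint16_t)(sum >> 4);    // decimate: 20-bit sum -> 16-bit result
      }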

    • Originally posted on Industrial Control Designline: Good article. The more our products and processes rely on embedded systems, the more this will come up. At some point, after more accidents and deaths, the deniers will come around to the need for proper software QA. As a new member of Professional Engineers of Ontario (PEO) Council, I expect to get into regulatory obligations for software (and other newer engineering specialties) under Canada's Engineering Act. It will not be easy since the profession is still focused on traditional civil, mechanical and electrical engineering practice. --Engineer62

    • Ron, thanks for the write-up. Two whole generations have missed the vacuum tube era and take cheap computers for granted. When I was in college we had access to the campus' one machine (though only special people could ever actually see the thing), but no one could even dream of owning any sort of computer. How things have changed! All the best, Jack

    • A reader comment submitted via e-mail: I've just read "Remembering the Memories" (Break Points - January/February 2010) by Jack Ganssle. His description of computers that used drum memory brought back some personal memories. My first computer, which I programmed in the summer of 1961, was an IBM 650. It had 2000 (decimal) words (10 decimal digits plus sign) of memory arranged as forty tracks of 50 words each. The heads were fixed (not even floating) and the 6" diameter drum rotated at 12,000 RPM. This arrangement caused some thermal issues when stopping and restarting the machine. Obviously, you didn't want head crashes during warm-up. This meant that, after the drum stopped and while it was cooling, the drum and heads came in contact, so that restarting would be disastrous. Two additional drums were optional, as was a small amount of core memory (perhaps as much as 100 words). Another optional accessory was the RAMAC disk system. Unfortunately, I don't remember the capacity of the RAMAC system. Efficient programming was a real challenge. In order to avoid excess latency in accessing instructions, each instruction specified the location of its successor. In the case of branch instructions (all presumably conditional), each instruction specified two possible successors. Program loops were usually points at which instruction-placement optimization was relaxed, but such loops could be unrolled to minimize the loss - if you could afford the memory for the duplicated code. There were table look-up instructions that could search entire tracks in a single revolution. This was a great boon under the circumstances. On the two 650's I used, all input and output was via punched cards, although I believe that a line-printing device might have been optional. I still have one or more original manuals for this computer and the peripheral devices with which it was used. Ron Martin Ann Arbor, MI

    • A reader comment: Great review! I am pleased to have followed Tek since I ogled the ancient catalogs as a teenager and wished I could have all of the stuff (this was back when they had dual-beam scopes, and the 519 with direct CRT plate drive). Then I used a 535 for years at UCLA, until finally some unspent year-end money popped up and I got a 475 and FET active probe, for a dramatic improvement in bandwidth and sensitivity. Then I bought a 2236, with counter-timer-multimeter (including access from the Ch. 1 probe tip), for about 3k in 1985, and although some inevitable deterioration has set in, it still serves. Based on 2010 dollars, the $5,400 asked for the scope reviewed is really a good deal. There were some dark days at Tek, when they sent a lot of old-timers home and made some truly execrable scopes---I remember one that didn't even allow one channel to be triggered from the other! But eventually things righted themselves and they got back to engineering excellence. bcarso --originally posted on Planet Analog

    • This article was controversial when it was first published (September 2006). Here are the comments submitted at the time by readers, some of whom have written for Embedded Systems Design and teach classes at the Embedded Systems Conference. (These comments appeared in the October 2006 issue of Embedded Systems Design.) --S. Rambo Managing editor, ESD ------------------------------------------------------------------------ Mr. Su’s article (“Saving space with Pointer-less C,” Mengjin Su, September 2006, p. 16, ID:192503623) is fundamentally flawed in a number of important areas. First, the C syntax for pointers isn’t ambiguous as Mr. Su suggests. The language rules are clear, and anyone who takes the time to understand them will find the pointer syntax easy to understand and use. (Granted, sometimes operator precedence creates hard-to-interpret code, but those problems are usually easily corrected with parentheses. Mr. Su does not suggest that operator precedence is behind his complaints with C pointers.) Second, Mr. Su’s reasoning behind how to turn pointers into integers (“Language without pointers”) is inadequate and *highly* platform specific. Many of the 8051 compilers I have worked with over the years add additional bits to pointer objects, to track whether the referenced object is in ROM vs. RAM, which memory bank, etc. In such systems, the compiler generates code to silently consult those extra bits at runtime to figure out which instructions to use when dereferencing the pointer. Unless Mr. Su takes those additional bits into account (which requires intimate knowledge of the toolchain, and space beyond the “16 bits” needed), his technique will irreparably damage the pointer object. Finally, Mr. Su’s PLC syntax is, with all due respect, abhorrent. It practically encourages the developer to type “char” once, and “int” later for the same pointer-like object. Or at least it helps the developer miss one or two such conversions when a data type changes. I think it’s great that Mr. Su has taken the time to consider what he perceives to be a problem with C, and to also propose a solution. Such critical thinking is what has brought us C, C++, and Java, and will bring countless more improvements in languages and techniques as time marches on. But PLC doesn’t get my vote. It solves a problem that isn’t there, and does so in a dangerous way. Better to get a solid grounding in C pointers, switch to a pointer-less language like assembly code or Java, or get a better C compiler. Please don’t spend any more time on PLC. —Bill Gatliff Freelance Embedded Developer and Consultant Peoria, IL ------------------------------------------------------------------------ Mengjin Su responds: Regarding point [1], it is true that the C syntax or grammar for pointers is not ambiguous, since the C compiler is based on the syntax and works perfectly on all desktop machines. But a non-ambiguous syntax doesn’t guarantee that a program based on the syntax is non-ambiguous, especially for embedded applications. “Subject - verb - object” is the rule or syntax for English, but we can generate the following two sentences based on the rule: (1) I eat an apple. (2) An apple eats me. [2] Indeed, my article on Pointer-less C focuses on embedded applications, because a lot of embedded CPUs or MCUs use a Harvard architecture, which has separate ROM and RAM spaces. In such situations, the “normal” pointer operation might not work.
As you described, many of those C51 compilers added extra keywords or extended the syntax to access different memory spaces, which makes the compiler more complicated. I have reason to believe that Pointer-less C can be ported to the C51 environment and will work very well. [3] Again, my article covered two aspects: (1) eliminate the ambiguity of using pointers, and (2) make the compiler easy to build. ------------------------------------------------------------------------ Did you guys actually read the cover article on Pointerless C before printing it? A whole article about a subject that can be handled with a few C macros? I’m not even going to get into the fact that the different sizes of chars, ints, and longs is not addressed in the notation. Apologies if I’ve missed the big picture on this, but I don’t get it.

      #include <stdio.h>

      #define P(a,b) (*((b *)(a)))
      // allow for different sized objects
      #define P1(a,i,b) (* (((b *)(a))+i))

      int buffer[10];

      void main(void)
      {
         int i;
         int *pi;
         int addressi;
         char ci;
         char *pci;
         int addressci;

         pi = buffer;
         addressi = (int)pi;
         printf("%p %p %p\n", pi, buffer, addressi);
         P(pi,int) = 1234;
         i = P(pi,int);
         printf("%d %d %d\n", i, P(pi,int), buffer[0]);
         i = P(addressi,int);
         printf("%d %d\n", i, buffer[0]);
         P1(pi,1,int) = 5678;
         i = P1(pi,1,int);
         printf("%d %d\n", i, buffer[1]);
      }
—Charles Nowell Longwood, Florida Note: the braces and the truncated arguments of the final two printf calls were lost when this code was first posted; plausible ones have been restored in the listing above. -- sr ------------------------------------------------------------------------ I read the latest print issue, and particularly looked forward to the cover feature “Saving space with Pointer-less C.” Unfortunately, when I finished I didn’t find benefits or even a clear approach from the proposed, albeit interesting, method of Mengjin Su. There is clearly a problem with using pointers in programming, as anyone will admit when attempting linked-list and binary sort exercises in their introductory programming courses. The root cause of the problem is not the very powerful and useful concept of pointers to data, because Mengjin Su describes simply a different implementation of them. The root cause is the usage, the so-called “pilot errors.” But you can choose a simpler airplane. There are very few, rare instances where direct usage of pointer operations is more understandable than operations with arrays and structures, which fundamentally require pointer technology but are more intuitive than “**Dptr++.” As with many programming concepts, especially in embedded systems, just because you can implement something very complex in a single statement of “C” or any other language doesn’t make it good practice. Processes, tools, and training are the best vehicles for saving programming projects and engineers’ sanity. We are approaching a time with embedded systems and controllers where hand-optimizing an implementation will be eclipsed by tools and methods which build highly reliable systems quickly and easily out of trusted components. When a particular approach or tool used to solve a problem inherently leads to errors, it is time to “re-engineer” your processes and use different and better tools and approaches. “Pointer-less C” is very possible in all compilers today, using arrays, unions, and structures; a new compiler is not required. —Jon Pearson Product Manager Cypress Semiconductor Corporation Lynnwood, WA ------------------------------------------------------------------------ Dan Saks chimes in: When I saw the article “Saving Space with Pointer-Less C,” I was immediately wary of where it was going. Love it or hate it, C’s treatment of pointers and arrays is truly unique—it is arguably what makes C . . . C. C’s pointer notation also provides the conceptual model for iterators in the C++ Standard Template Library. The generalization of pointers into iterators allows an extraordinarily flexible and efficient programming style. Reading the article only confirmed my suspicions. The article asserts that programmers can be confused by the meaning of pointers in systems that support ROM as well as RAM, but it doesn’t explain why programmers should care whether a particular pointer points to RAM or ROM. As Bill Gatliff notes, when the distinction matters, compilers can (and should) take care of it. The article’s proposed solution fails to consider any existing compiler-specific extensions that might address the problem, and completely ignores any concerns for “const” or “volatile” semantics. As Charles Nowell observed, the proposed syntax is nothing but a combination of casting and dereferencing. In effect, PLC treats integers as untyped pointers that must be cast to the appropriate pointer type at every use. Programming with untyped pointers has repeatedly proven itself to be highly error-prone.
To add to the excitement, PLC eliminates scaling on pointer arithmetic (it always adds one no matter what the integer “points” to). I expected the article to conclude by stating at least some observed benefit of using PLC instead of C. For example, the article explained that interviewees struggled with the test question in Listing 3. Where are the comparable PLC tests and results? How about data showing an increase in programmer productivity, or a decrease in software defects, or any other benefit from using PLC instead of C? They’re not there. And, despite the title of the article, there’s no explanation of how PLC saves space compared to C. PLC’s notation is an ill-conceived, platform-specific solution to a vaguely specified problem. The dialect’s disregard for type checking represents a giant step backward. In my opinion, Embedded Systems Design should not have published this article as is. —Dan Saks Saks & Associates Springfield, OH ------------------------------------------------------------------------ Editor’s note: Our editorial staff is responsible for the final article title and description (deck), not the author.
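      (A short illustration of the scaling point Dan Saks raises, with hypothetical values: in C, arithmetic on a typed pointer advances by whole objects, while arithmetic on a raw integer advances by single bytes, so the untyped-integer style must be rescaled by hand at every use or it silently lands mid-object. -- sr)

      #include <cstdio>
      #include <cstdint>

      int main() {
          int buffer[4] = { 10, 20, 30, 40 };
          int* p = buffer;
          uintptr_t a = reinterpret_cast<uintptr_t>(buffer);

          std::printf("%d\n", *(p + 1));  // 20: pointer arithmetic scales by sizeof(int)
          // a + 1 would point one BYTE in: a misaligned, mid-object read
          std::printf("%d\n", *reinterpret_cast<int*>(a + sizeof(int)));  // 20 again,
          return 0;                       // but only because we rescaled by hand
      }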

    • The following e-mail has been posted with Jim Sorensen's permission. From: Jim Sorensen Sent: Tuesday, December 01, 2009 1:08 PM To: Richard Nass Subject: Finding hard bugs Dear Editor, Difficult-to-find bugs almost never occur if all I/O operations time out and return a status, all functions return an error status, and the program has a good error-logging mechanism, i.e., a memory log area and a serial output. The error logging should have a run-time enable flag by thread (if multithreaded) and/or function, and also be conditionally compiled as a macro 'printf' statement so it can be removed from the final product as necessary. For real-time problems, the programmer disables the serial port and examines the memory area. Reading the log from a browser is a real plus. I believe the real measure of programmer skill is the error handling in the program. Jim Sorensen Extron, Anaheim, CA
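      (Editor's note: a minimal sketch of the scheme Mr. Sorensen describes; all names here are illustrative, not from his mail. The macro compiles away in the shipping build, a run-time flag enables logging per module, and entries always go to a RAM ring buffer, with the serial echo switchable off for real-time work. -- sr)

      #include <cstdio>
      #include <cstdarg>

      const unsigned LOG_SIZE = 4096;
      static char log_buf[LOG_SIZE];          // memory log area
      static unsigned log_head = 0;
      static unsigned log_enable_mask = ~0u;  // run-time enable, one bit per module
      static bool serial_on = true;           // disable for real-time problems

      static void log_write(unsigned module, const char* fmt, ...) {
          if (!(log_enable_mask & (1u << module))) return;
          char line[128];
          va_list ap;
          va_start(ap, fmt);
          int n = std::vsnprintf(line, sizeof line, fmt, ap);
          va_end(ap);
          for (int i = 0; i < n && i < (int)sizeof line; ++i)
              log_buf[log_head++ % LOG_SIZE] = line[i];  // always logged to RAM
          if (serial_on) std::fputs(line, stderr);       // stand-in for the UART
      }

      #ifdef ENABLE_LOGGING                   // conditionally compiled:
      #define LOG(mod, ...) log_write(mod, __VA_ARGS__)
      #else
      #define LOG(mod, ...) ((void)0)         // removed from the final product
      #endif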

    • The following is an e-mail sent to Richard Nass and is posted with permission: I agree with you 100%, regarding Cavium-MV as well as Intel-WR. Another point in support of your conclusion is this: If you're Cavium, why would you allocate precious resources for engineering work related to support of competitive processors? Why would you share technology (that you think highly enough of to have bought the company) with your competitors? Is it part of their business model to create solutions for their competitors? The only answer I can think of is that Cavium (and Intel) seek to make MontaVista (and WR) industry-standard solutions, and thereby expand their appeal by being the driving force behind them as they permeate the industry. I don't think this is as good a strategy as keeping the technology for themselves, which seems to have immediate and strategic benefits. As you said, the jury is still out, so we'll all have to wait and see. --John Carbone VP Marketing Express Logic, Inc.

    • An e-mail sent to Rich Nass: Dear Richard, I have enough trouble trying to explain what a Software Engineer is! I then try to explain what embedded software is. I usually mention their cell phone or remote control. That they understand. Then I say I do not work on that! When all is lost, I just say I write computer programs. I never thought of calling myself an embedded software developer/designer as a title. However, my professional personal e-mail address is But that address happened because originally optonline only allowed 8 characters, so my address was SoftGuru. When I found out that you could have more characters, somebody else stole SoftwareGuru, so I then tried EmbeddedGuru. Turns out that this is a better e-mail address anyway, since I have been involved in Embedded SW forever. Let's face it, your title is what they hire you as. Your skills are why they hired you. Joanne Tow Senior Software Engineer Siemens Industry, Inc.

    • The "real men" isn't intentionally sexist. It's tongue-in-cheek. Ever heard of 1982 book "Real Men Don't Eat Quiche" ( or the famous article "Real Men Don’t Use Pascal" ( )?

    • Hello SaurabhG, The code (heapcalc.c) is now online at and available for download. Don't forget to rate the code after you try it out. --Susan Rambo Managing editor ESD magazine

    • The following comment is from Jack Crenshaw to Doug Currie: Excellent stuff, Doug! I'll probably work with your equations to get them in the right form. Thanks for the idea. It's one I definitely hadn't thought of. --Jack

    • An e-mail to Richard Nass from a reader: Hi guys, Re: Jack Crenshaw's "Why all the math?" in Embedded Systems Design, June 2009... The article was excellent, but there is another simplification that Jack didn't mention, and that is applicable to Jack's simplest least squares "implementation anywhere on the planet." Just as you can arrange for the Xs to range from 0 to n-1, assuming n is odd (which is true in Jack's example), you can also arrange for the Xs to range from -floor(n/2) to floor(n/2). This has the happy result that the sum of the Xs is zero. Referring to equations (16) in the article, these then simplify to: a = sum(Yi) / n b = sum(Xi * Yi) / sum(Xi * Xi) The same techniques Jack used to produce an on-line real-time version of the algorithm apply. There is a closed-form solution for sum(Xi * Xi): m = floor(n/2) sum(Xi * Xi) = m * (m + 1) * n / 3 In the case of n = 5, sum(Xi * Xi) = 10. Using u1 and u1+ as Jack defined them in equations (27-29), we can define: u1- = u1 - Y(k-4) u1+ = u1- + Y(k+1) u3 = -2Y(k-4) + -1Y(k-3) + 0Y(k-2) + 1Y(k-1) + 2Y(k-0) u3+ = -2Y(k-3) + -1Y(k-2) + 0Y(k-1) + 1Y(k-0) + 2Y(k+1) u3+ = u3 + 2Y(k-4) - u1- + 2Y(k+1) Note that both 2s in the equation above correspond to m = floor(n/2). Now, a = u1+ / 5 b = u3+ / 10 Regards, --Doug Currie Consulting Engineer Sunrise Labs, Inc.
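      (Editor's note: for the curious, here is a minimal sketch of Mr. Currie's n = 5 case as running code; the function and buffer names are illustrative. With the Xs fixed at -2..2, a = sum(Yi)/5 is the smoothed value, b = sum(Xi*Yi)/10 is the slope, and both running sums update in constant time per sample using his recurrences. -- sr)

      static double y[5];    // last five samples; y[0] is Y(k-4), the oldest
      static double u1 = 0;  // running sum(Yi)
      static double u3 = 0;  // running sum(Xi*Yi), Xi in {-2,-1,0,1,2}

      // Feed one new sample Y(k+1); returns the intercept a. Slope b = u3/10.
      double lsq5_update(double y_new) {
          double u1m = u1 - y[0];            // u1- = u1 - Y(k-4)
          u3 = u3 + 2*y[0] - u1m + 2*y_new;  // u3+ = u3 + 2Y(k-4) - u1- + 2Y(k+1)
          u1 = u1m + y_new;                  // u1+ = u1- + Y(k+1)
          for (int i = 0; i < 4; ++i)        // slide the five-sample window
              y[i] = y[i + 1];
          y[4] = y_new;
          return u1 / 5.0;                   // a = sum(Yi)/5
      }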

    • This e-mail was sent directly to Jack Ganssle: Jack, I just read your editorial on the RS08 processor. This is a fun little processor. We did some work on the instruction set design on this processor and wrote a C compiler for the RS08. This is a remarkable little processor that in the end outperformed many people's (including my own) expectations. The processor has family ties to the very old 6804 and 6805 families and the various 6808 families, with a dose of RISC processors in the way its call return is handled. A lot of tradeoffs were made in the instruction set. The processor mapped the index register and index indirect into memory space, releasing all of the index-specific instruction space to support a new tiny 16-byte address space. Direct data access is limited to an 8-bit address space, very similar to the 6808 DIR space. A flexible paging system gives access to the full 14-bit linear address space (including ROM constants) and eliminates the need for the EXT access found on the 6808. The RS08 does not have a data or subroutine return stack; this further reduces the opcode space from the 6808. The RS08 has functionality not found in the 6808. Bit manipulation and bit branch instructions on the RS08 can access the full memory space, where the 6808 can only do this on the first 256 bytes of address space. The memory-to-memory moves and constant-to-memory instructions can reference the full processor address space, unlike the 256-address limit on the 6808. Both of these reduce data flow pressure on the single accumulator. How well does it work? It works very well. Compiler technology is now very good at managing a compile-time stack in global RAM space. Whole-application optimizing compilers decide Tiny address space usage and nested subroutine support. Aggressive tail-end recursion optimization further reduces RAM needs and accesses. Benchmarks run on the RS08 against the 6808 show that overall it requires a few percent (6-8%) more code and execution cycles than the 6808. The die complexity is greatly reduced. Byte Craft created syntax to support event-driven processing to compensate for the lack of interrupts and RTOS real-time support. Events are logical conditions (interrupt-like flags and logical expressions on global data) that start the execution of a code block, similar to interrupt support. Event bodies run to completion without interruption. This flat execution structure reduces the needed RAM-to-ROM ratio for applications running on the RS08 to about 30 to 1 instead of the more usual 20:1 for compiled code and 16:1 for the average hand-coded assembler. The background-mode debugging facility has a second use suitable for many applications. The port may be used for interprocessor communications in a multiprocessor environment where the RS08 is implemented as a single-function application. The BDM port allows asynchronous access to internal data buffers. Some links on Byte Craft's web site referring to the RS08. A technical overview of the C compiler for the RS08. The RS08 was the first processor that we implemented ISO 18037 on, so that the whole instruction set can be encoded in C. This paper shows how we proved that anything that can be written in asm can be written in C with the same or less code space. Regards, Walter Banks Byte Craft Limited
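      (Editor's note: Byte Craft's actual event syntax isn't shown in Mr. Banks' mail, but the flat, run-to-completion model he describes looks roughly like this generic sketch; the flags and handlers are hypothetical. Each event body runs without interruption, so no interrupt nesting and no per-task stacks are needed. -- sr)

      #include <cstdint>

      volatile uint8_t timer_flag = 0;  // would be set by hardware or a poll
      volatile uint8_t rx_flag = 0;     // would be set when a byte arrives

      static void on_timer_tick() { /* runs to completion, never preempted */ }
      static void on_rx_byte()    { /* runs to completion, never preempted */ }

      int main() {
          for (;;) {  // flat dispatch loop: test each logical condition in turn
              if (timer_flag) { timer_flag = 0; on_timer_tick(); }
              if (rx_flag)    { rx_flag = 0;    on_rx_byte();    }
          }
      }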

    • Mr Nass, I read your column of 1 January 2009 and am aghast at some of the responses. Why should you not be allowed to 'feel' that there is change on the way? The curtness of the rebukes staggers me. I'm sympathetic to how you must 'feel' since the cringeworthy presidency of G.W. Bush. How your country's allies 'felt' about him is unprecedented. When we thought about it and the facts had been uncovered, we 'felt' outraged to say the least. The doyen of democracy has been behaving very, very badly and for the meanest of economic expedients. There was no relationship between the Iraq invasion and the 9/11 debacle; there never was. Saddam Hussein was substantially an artefact of US foreign policy in the 1980's. Rumsfeld knew what Iraq had because he was there, helping to establish it all. Hussein was doing the dirty work and fighting the Iranians for the U.S. and her allies because the Shah of Iran was overthrown by its people. The Shah was hated because he was a nasty bit of work who was anything but democratic, but he was also a tool of U.S. foreign policy and got too nasty while helping to establish and maintain U.S. interests. Obama can at least speak eloquently and form a reasoned argument without resorting to stuttering obfuscation and emotional speech. It is this type of emotional language, used so well by 'W', that helped to outrage the U.S. citizenry, post 9/11, sufficiently to mobilise the war machine. The language was of revenge, vulnerability, and outrage, and people took it and rallied behind it. I think they did so because they didn't know or care about the history. As a result many good people were hoodwinked and conned by the subsequent war incursions into Iraq and Afghanistan. Did you know that Usama bin Laden was paid by the CIA in the days of the Russian incursion into Afghanistan? There is so much more going on behind the scenes that most American citizens never get to hear. That lacking makes the populace prime candidates for the nastiness that you have just come through. I blame your media, not the people. Now perhaps the people have woken up from their collective slumber. Perhaps Obama will be a president of propriety and good behaviour, as this rests with another human capacity, that of hope. Regards, Steve Curtis Australia

    • To Noway2, I added the actual code into this article (instead of just a picture of the author's code). Note that the code has some HTML characters in it. --Susan Rambo Managing Editor

    • Dear Mr. Nass, I read your article "A time for change." I think the key word in your article is "feelings." I am far less interested in how you "feel" than in how you "think," for it is in the process of thinking that one solves problems. Did you ever stop to "think" about the long-term effects of multi-trillion dollar deficits? Have you stopped to "think" about the long-term tightening of credit, or lack of investment, or the inevitable inflation that will result as a necessary consequence of pouring so much borrowed (or printed) money into the economy? How do you "think" that will affect small business? I agree that it was "feelings" like yours that won Obama the presidency, but should people stop "feeling" and start "thinking," they will realize that it is truly a "time for change." Regards, John Zeiler IMPACT Engineering

    • The following was e-mailed to Richard Nass on 12/29/08: Richard, I just read "An Insider's view of the 2008 Embedded Market Study" (Embedded Systems Design, September 2008, pages 18-26), and have a few comments to offer. Figure 1: This never changes, because the business people will always squeeze engineering until the wheels fall off. The business people, being unversed in engineering, know no other way to discover what's possible. And debugging difficulties are always a large part of the problems with keeping within schedule and budget. Exceeded only by unrealistic expectations. Figure 1(a): It would be interesting to know what "none of the above" included. Clearly, the question needs work, as it missed ~60% of the story. The form needs a way to enter free text so responders can tell us what's missing. Figure 2(b): On UML, I have the opposite reaction to Michael Barr. I read the very low uptake of UML as the result of developers (and their bosses) voting with their feet on the cost-benefit ratio of UML. I've seen many other well-touted and/or mandated methodologies and languages fail in the same way, and for the same reason. If mandates were the answer, Ada would be Queen and ISO/OSI would be King. Figure 3(a): Nor have source-code analysis tools proved all that useful in practice, with the possible exception of lint. The classic problem with analysis tools is floods of false alarms, which take significant effort to assess. Another classic problem is false negatives - problems not caught, despite the considerable effort. What is almost universal and very useful is to require a clean compile and link before proceeding, but this is not what people mean by the term "source-code analysis tools," even though that's exactly what's being done. Figure 3(b): As for test tools, it does not follow that because people don't buy test tools, no test tools are used. Most people roll their own, and always have, by one name or another. The COTS tools tend to be too generic to be cost-effective in any specific application domain, and it's pretty easy to write one's own tools using script languages and bits of C/C++. Figure 3(c): Oscilloscopes are hardly a "distant third" in the pecking order of tools. In round numbers, compilers, assemblers, and debuggers are mentioned by one half of the respondents, scopes are mentioned by one third of respondents, IDEs by about one quarter, and so on. I was surprised that scopes outranked IDEs, for one thing. Perhaps a better dichotomy would be software tools versus hardware tools. In any event, what's distant are all those things with less than 10% of responses. Figure 4: In-house code reuse has always been practiced, long before code reuse became a theology. And design reuse long preceded code reuse, as code developed in the days before floating-point hardware became common generally could not be ported without re-coding to match the fixed-point scaling used in the next project. This was a consequence of the small, slow computers of that day, where one had to optimize the scaling of fixed-point variables on a project-by-project basis. I first ran into this reuse issue in the late 1970s, when my then project manager called us all together to hear about this new cost-saving approach, code reuse. It died in its crib when we realized that we could not reuse even a sine routine. Because everybody had different scalings and accuracy requirements, every project had developed its own trig function library.
What was reused was the general design (usually table lookup plus interpolation), the flowchart, and the documentation, but the code was new each time. Figure 5(a): A plot of *achieved* median time-to-market of projects (stratified by project size) versus date of inception would be useful. My sense is that this has hardly changed over the years. The good news is that project sizes have increased by a factor of about ten over the last three decades. When I started in the 1970s, 100,000 lines of code was considered large for a realtime system, and much of this was assembly language. Nowadays, large projects are of order 1,000,000 lines, the bulk being C/C++. The human effort per line of delivered working source code (regardless of language) has not changed much over the years. Instead, a larger fraction of system complexity is in the software. Of course, people have many definitions of "realtime," with characteristic response times ranging from microseconds to minutes, so one must always ask a few questions to nail down their definition of the term. The systems I have worked on have had critical-path response times of tens of microseconds to a millisecond or so. Figure 5(b): A simpler explanation for the low score of "Professionalism and standards" may be that nobody quite knew what to make of the term, and so either didn't answer or provided essentially random answers. Note that the response rate is about half that of other questions. Nor are "professionalism" and "standards" related concepts, making it even harder to parse the combined term. Figure 6: The reason that "OS takes too much processor power" has declined as a reason to avoid the OS is very simple - even cheap processors are so much more powerful than before that one can now afford to waste some CPU on the conveniences of an OS. In the 1970s there were roughly five categories of operating system, sorted by the log of response time. The quickest had typical claimed response times of 1-10 microseconds (these were not really operating systems, but never mind). The other regimes were 10-100 uSec (typical of small RTOSs), 100-1000 uSec (typical of large RTOSs and process-control midicomputers), 1-10 millisecond (typical of general-purpose midicomputers), and 10-100 mSec (typical of UNIX boxes used for lab automation, and other non-realtime computers). In the 2000s, thirty years later, these same categories largely endure, except that the fifth (10-100 mSec) has vanished, and the first (1-10 uSec) now has real RTOSs in it. Now, by Moore's Law, in 30 years computers have become 2^(30/1.5) = ~10^6 times faster, so why didn't the above categories simply change from microseconds to picoseconds? The short answer is that the core requirement of realtime is to keep up with the real world, and the real world didn't get faster. So the added CPU power was instead spent on conveniences like fancy operating systems and TCP/IP networking and GUIs and so on. Figure 7: I'm not sure I believe that use of operating systems has declined, and the stated decline rate (13% in 4 years) is too slight to be reliably measured by such a survey. More likely, there has been no change over the four years. If plotted on a bar chart (like the other figures), this becomes immediately apparent. Figure 8: I think the reason Figure 8 ended up as a head-scratcher is that the wrong questions were asked.
The main reason to use an OS (commercial or not) is to cut time to market (and development expense and risk) by reusing a major lump of infrastructure, versus building it from scratch one more time. However, use of any OS exacts a price in size, performance, and functionality compared to code custom-designed for a specific purpose. If one cannot afford the added overhead of an OS, then one rolls one's own on-the-metal code. Production volume weighs heavy in the decision. If the product will be made by the millions, every resistor counts, development cost is amortized over millions of units, and use of an OS is unlikely. If the product will be made by the tens, development cost and schedule risk are paramount, and an OS will most likely be used. Figure 9: I would suggest that one disregard Figure 8. Figure 9 looks plausible to me, although my personal ranking is a bit different. In particular, I would rate tech support far higher. And again, production volume matters a lot. If one is making a few systems, OS purchase price is far less important (if the OS sufficiently reduces development effort) than if one is making millions of units. Figure 10: Who picks the OS is a fraught issue for sure, and the process was different on every project I have ever been on. Until C/C++ won the day, only picking an implementation language was a bigger fight. Lately, in the systems that I have been involved in, what has been chosen was the main platform (Sun, SGI, IBM, et al.), and the OS was always a UNIX dialect (including Linux in non-realtime areas like displays), with traditional RTOSs (VxWorks et al.) used to interface big boxes to special hardware. Figure 11: I always wonder how well outsourcing hardware and software development can work, as it's hard enough to get development right even under the best of conditions. But again it depends a lot on volume, and also on product complexity. And of course outsourcing development to China often results in the creation of a competitor. It would be useful to stratify responses by the log of production volume and by the log of product size in lines of code. Log of critical response time could also be used, but this may not be as informative as it once was, so long as projects where minutes are OK are excluded. I no longer recall the survey in detail, but it is very useful to provide a place to enter free text comments. For one thing, this allows one to know how the questions are being read, and also if they harbor an unspoken but incorrect assumption. Joe Gwinn

    • Here is Jack Crenshaw's response via e-mail, dated February 11, 2009. ----- Yes, it's true, I have ignored APL, for a couple of reasons. First, it's pretty much obsolete. Invented in 1957, it's almost as old as Fortran. I'm not sure what the user interface is these days, but the original version required a special keyboard with special symbols that don't appear on any other keyboard on the planet. Like Forth, APL has rightly been called a "read-only language." I could be showing my examples in Forth, Logo, or Cobol. Call me prejudiced, but I've chosen not to mention these languages either. More to the point, if I were recommending commercial or free applications to do matrix/vector arithmetic, I would recommend Matlab, Maple, Mathematica, Octave, or Mathcad. All of these tools are modern, have excellent GUI-based interfaces, use the most powerful algorithms, and offer a dazzling array of matrix and vector methods. But that's not the point of my columns; it never has been. My goal has always been to explain algorithms to ESD readers, in sufficient detail so that they both understand how they work, and can implement them in their own embedded software applications. None of the applications mentioned, including APL, are suitable for this purpose. Jack -----

    • (The following comment was sent in an e-mail to Jack Crenshaw and Editor in Chief Richard Nass.) For decades Jack has either been ignoring or forgetting that APL has matrix and vector notation. Given that APL has had matrix and vector notation since the late 60's and is still being used, why has its existence been ignored in these articles? I have APL running on a PDA, and it has all the functions you could ever want, including transpose, matrix divide, inversion, and on and on. The APL wiki is here: Chris Pollard Mechanical Engineer

    • (The following e-mail comment was sent in response to a comment posted by Editor in Chief Richard Nass--aka wirelessguy.) Nobody can take away the historical significance of our first black president. It is easy to say that the vast majority of Americans admire that. But that doesn't translate into everyone welcoming his policies and ideas. I would love to debate the causes of our current financial crisis, including Barney Frank's and Chris Dodd's roles in relaxing the lending standards, which has had disastrous results. The housing bubble clearly had its origin in the 90's, and without the housing bubble, we have no recession. Bush actually proposed a higher degree of regulation over Fannie and Freddie [1] in 2003 and was basically called a racist because it would have kept more minorities out of home-ownership. [1] Why Barney Frank is still in office is beyond me. Here's his quote from the article: ''These two entities -- Fannie Mae and Freddie Mac -- are not facing any kind of financial crisis,'' said Representative Barney Frank of Massachusetts, the ranking Democrat on the Financial Services Committee. ''The more people exaggerate these problems, the more pressure there is on these companies, the less we will see in terms of affordable housing.'' Of course many people don't want facts or sound financial principles. They want a feel-good policy, regardless of the secondary effects. Your article was full of vague feel-goodness about the new president and an unsubstantiated rebuke of the previous president, which is what I took offense to. When I want political commentary, I'll go to the Wall Street Journal. Rich P.S. At the risk of sounding petty, Reagan's inauguration in 1981 was more watched than Obama's.

    • Comment sent to Richard Nass by e-mail: Mr Nass, I need to put my son to bed, so I'll get right to the point. Your article stinks. I say that because you sound like you're trying out for a job as an Obama spokesperson. We are not all "standing as one," and at least the people in my immediate sphere are not welcoming the ideas of the new president. Labeling these ideas "new" seems odd as well. Just as the Great Depression was prolonged by the "New Deal," so will this recession be prolonged by Mr Obama's new ideas. I'll be happy to admit I was wrong if this is not the case in 4 to 8 years. Unfortunately, all I can hope for is nationalization of industries (and the accompanying inefficiencies), higher taxes, higher unemployment, more government involvement in our personal lives, and a generally massive government infrastructure that crushes our economy. Sincerely, Rich von Lehe Senior Software Engineer

    • Is anyone interested in seeing a historical teardown, such as an old PDP-11, an older oscilloscope (we could compare it with a new one), or the Apollo Guidance Computer (we wouldn't be able to tear that down, but we could have it on display, perhaps)? Jack Ganssle came up with these ideas; he also said we could compare an old Collins or Heathkit ham rig with a new software-defined radio.

    • "Reentrantly sharing memory is tough enough with a single processor; when many share the same data the demands on developers to produce perfectly locked and reentrant code become overwhelming." Your comment brought to mind a quote in the book, "Programming Erlang: Software for a Concurrent World" by Joe Armstrong, that I have just finished reading. "If you have multiple processes sharing and modifying the *same* memory, you have a recipe for disaster -- madness lies here." Erlang is used to build highly-fault tolerant switching systems, such as phone switches. Seems that the telephone companies figured out years ago the correct way to do multi-core/processor systems. After all when is the last time you had to press the rest button on your Plain Old Telephone (POT)? Erlang has many features that are useful in Embedded Systems such as the ability to update running code. Phone Switches have to run for years without ever being take out of service. --Bob Paddock ASQ Certified Quality Software Engineer.

    • The following e-mail was sent to Editors Richard Nass and Colin Holland. We've posted it with permission. From: Richard Barry Normally open source projects do not respond to 'anti' articles, as they do not employ publicists or writers. I thought in this case, to stand up for the community, I would provide a quick reply. Of course generalisations are always partly based on truth from somewhere, but are never wholly accurate. I am well versed in the pros and cons of open source software--they have been batted around for years (GCC, anybody?). But "open source software" is not all the same, just as some commercial software is great and some terrible. Ask engineers if they prefer Firefox or Internet Explorer, and the open source product will normally win, although I harbour no preference myself. The obstacles to using open source software are well documented, easy to identify, and therefore easy to remove for serious open source suppliers. Here is a brief list of common statements, with a specific response provided for each:
      • Statement: Open source software is badly supported. --has an active support forum, and also boasts optional commercial support provided by a large engineering company.
      • Statement: If you use open source software, you are at risk of having to open source your entire application. --is licensed such that only the kernel is open source. Application code that uses the kernel can remain closed source and proprietary.
      • Statement: Open source software ends up costing much more. --is completely free to download, experiment with, and deploy. Each port comes with a pre-configured demo application to ensure you start with a known good and working project that can then be tailored, getting you up and running very quickly. Should you at some point require commercial licensing or support, packages are available at very competitive prices, so you have nothing to lose.
      • Statement: Open source software is badly written. --is commercial grade, stable, and reliable. There are even safety-critical versions based on it, with improvements from the safety-critical certification being fed back into the open source code base (although not the new safety-related features).
      • Statement: Open source code becomes fragmented, with many different versions available. --The release procedure is very tightly controlled, with all official ports being updated simultaneously. Latest and past releases are available in .zip files. The head revision is available from a publicly accessible SVN repository. Naturally, errors are occasionally made, but these are quickly spotted by the large user base (more than 6,000 downloads per month [conservative figure given]) and are documented as soon as they are brought to my attention.
      • Statement: Use of open source code leaves you at risk of IP infringement. --Only code of known origin is included in official versions. If you are still concerned about IP infringement, purchase a commercial license.
      • Statement: Open source projects have no longevity. --In some cases, neither do commercial products. I could name a few commercial tools that are now defunct; had these been open source, at least users would have the source and could continue to use the tool. As it happens, has been around for nearly 6 years already and is going very strong, hopefully well into 2028! Regards, Richard.

    • This comment came to Bill Schweber from an engineer who worked on the chopper. (Posted with permission from Larry Park.) --sr Bill, Thanks for your generous article on my "Micro Tiger" Estes helicopter. I've got 25 years' experience in electrical/software/hardware integration, but until a year and a half ago I had never programmed a Microchip processor. I met John Day late one evening at the Microchip Masters conference in '07, showed him my prototype boards for the helicopter, and asked how to go about creating the code I needed to control it. He was "head and shoulders" more knowledgeable about programming the processors than the other "experts" I had been talking to. He provided me with several relevant code snippets and a good feel for how to work within the limitations of the processor. Quite a guy. Five weeks later, I had the first prototype up and flying. It was a real thrill to bring the first production helicopter to this year's Masters conference and let John fly it, especially when he proposed to make it the feature teardown at ESC East. I should have made a point of mentioning that the project is coded almost entirely in C. Thanks for the review, Larry Park Sr. Engineer, Estes-Cox Corporation

    • Manu Karan e-mailed this comment to Richard Nass: I'm an avid follower of media player teardowns and comparison articles. I request that you also talk about the transfer speeds for files being loaded into the media players. For me (as well as for most of my friends), this is fast becoming a pain point. While media players these days tout very high capacities, in practice we find that not all players are fast enough to ever fill that capacity with data. Now, with video-playing capabilities, I find it even more of a pain point. Giant files take SO much more time to transfer that I practically have to plan ahead of time to load my media player with video content. It's the difference between dumping a movie into my player just before heading out in the morning vs. planning for it in advance, starting the file transfer, and then going and having my morning coffee! I'd be very grateful if you could add this as a review parameter in your teardowns of media players and cell phones from now on. It would help a lot of us readers decide which media player or phone we want to buy next! Thank you, --Manu

    • The following comment was submitted to the editor in chief by e-mail and originally appeared in the Parity Bit section of the August print issue of Embedded Systems Design magazine. I read with interest Jack Ganssle's "Faster!" and the feedback it generated (Breakpoints, June 2008, p. 53; Parity Bit, July 2008, p. 9; all available online). For most of my 30-year software-development career, I've heard the constant message that there aren't enough programmers, that software is too expensive, and that we need reuse and more abstraction; and I've seen various attempts at making it happen. When I started, programs had hundreds or thousands of lines of code and memory was measured in KB. Now programs contain millions of lines of code and occupy GB of memory. Little else has changed. Doomsday never arrived, nor did a solution. First, neither the 65-hour workweek nor the "mythical man month" is a programming or computer issue. Both are management issues, and they can be solved and have been solved in organizations willing to address them. I really don't need a new cell phone every year or two with yet more features I can't use unless I spend two days reading the manual. The time crunch for most of this stuff exists only to benefit the market position of the company peddling the product, which in theory is supposed to benefit the prosperity of society but in practice only has a positive impact on those well above median income. Regarding Jim Ford's question, "Why is there so much difference between hardware and software?": hardware got off to a fundamentally different start, thanks to packaging. Early devices had to be task specific because of the limitations of what could be put on a piece of silicon. That forced systems to be made from a collection of components, which in turn had to have standardized interfaces. Even today you can buy just about any 8- or 16-bit CPU and some SRAM and, with the addition of 8 or 16 data lines, a dozen or two address lines, and three control lines, have them communicate. The physical package boundary reinforced this. Of course, if you were to build a 3-GHz Pentium desktop out of such components, it wouldn't run anywhere close to 3 GHz and would consume thousands of watts of power. The inefficiency of converting every signal to a standard interface on that scale, with enough drive to reach all destinations quickly enough, would consume far more power than any useful work would. We consider that totally unacceptable, and hence--especially in the embedded world--ICs have increased in density and integration, eliminating this inefficiency, and FPGAs, ASICs, and IP have become increasingly a part of the landscape. Even with IP, though, there are standardized interfaces borrowed from our history, just simpler, less power-greedy ones. They still add to the size of the die and its power consumption, and the designer must decide to what extent to work at a lower level and eliminate some of them in the interest of size and power budgets, or to leave them alone in the interest of schedule deadlines and costs. Software has taken exactly the opposite approach. It started without interface standards. Software reuse has suggested the need for such standards, but as in hardware, there is a price to be paid for that standardization. Likewise, higher-level languages, while offering more abstraction, also exact a price in efficiency. With clock speeds in the GHz and memory in the MB and GB region, the tendency is to move in that direction with abandon. 
I was interviewing some consultants recently for a project I needed to outsource. One recommended doing it in Java. I asked him about efficiency, and he assured me it was so close to C and C++ as to be down in the noise and sent me links to studies supporting that. While I had to admit the numbers looked better than I expected, the general consensus of the papers was that the performance hit was no more than 2:1. Well, on an otherwise idle, state-of-the-art desktop computer that may not seem significant, but as I read the concerns about the large percentage of our electrical power being consumed by major server farms (which happen to be very tempting candidates for this abstraction), the decision to use Java instead of C could make a 2:1 impact on their electric bill! That is not down in the noise! I'm all in favor of leveraging the productivity of programmers (including my own), but we need to come to terms with some hard facts. The first is that interpreted code (including JIT-compiled code) by definition can never be competitively efficient for production. Anything that can be resolved once at compile time should not be reinvented every time a function is called. This is a matter of fundamental physics and cannot be circumvented. Why are we throwing so much effort down a guaranteed rat hole? Second, that energy should be more productively channeled into techniques that increase the level of abstraction while providing optimization that closes the performance gap. Higher levels of abstraction offer opportunities for optimizations that the programmer might not think of or recognize, and that would be precluded by details that must be specified at lower levels of coding. Maybe a state machine with an unusual assortment of state values would work more efficiently than anything the programmer envisioned. Maybe a lookup table would be a better solution than a numerical algorithm or a set of branches. It's similar to what a switch() statement offers a C compiler--the freedom to consider different ways of accomplishing the task. [A toy example follows this letter. --Ed.] Sadly, we've probably spent 100x as many man-hours tweaking JIT runtimes as we have doing this sort of exploration. As far as code reuse goes, the only technique I know of that works is to write code and reuse it. I've been doing it for 25 years now. The products I currently ship contain pieces of code I first wrote that long ago. To be sure, I give some thought to the interface when I write. Yes, some of the pieces have evolved over the years. Yes, there is some time spent adapting code to the new need, which detracts from the productivity gain. However, there are significant gains in two areas: first, the software architecture of the project is already specified because of the reuse, so I don't have to invent it from scratch every time; and second, every chunk of code dropped in from a previous project is a chunk of code that has been debugged and field-tested, so the debugging cycle is shortened and the reliability increased. I have standardized on a single CPU family, so even my driver development is minimized, as is my learning curve, both on the hardware and the tools. Every time a new project comes in the door, I look for a recently completed project most like the new one and copy the entire project into a new directory. Then I delete what doesn't apply and start changing and adding to what is left. I can turn out a fully debugged control project with graphic LCD and touch, serial, and Ethernet in a matter of a few weeks. 
I can also port to new hardware in a matter of days--all in C with judicious use of assembly. That’s my version of reuse. It’s the only version I know of that really works! --Wilton Helm
      Embedded System Resources
      Golden, CO
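      [Editor's note: a toy C illustration of the lookup-table point in Mr. Helm's letter; the status-to-priority mapping is hypothetical.]

      #include <stdint.h>

      /* Branch version: the mapping is buried in control flow. */
      static uint8_t priority_branches(uint8_t status)
      {
          if (status == 0x0)
              return 3;
          else if (status == 0x1 || status == 0x2)
              return 2;
          else if (status <= 0x7)
              return 1;
          else
              return 0;
      }

      /* Table version: one bounded load, and the whole mapping is data
       * that the designer (or an optimizer) can reshape freely. */
      static const uint8_t prio_table[16] = {
          3, 2, 2, 1, 1, 1, 1, 1,
          0, 0, 0, 0, 0, 0, 0, 0
      };

      static uint8_t priority_table(uint8_t status)
      {
          return prio_table[status & 0x0F];
      }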

    • The following is from an e-mail sent to Jack Ganssle: Jack, There's a postscript to what you wrote. My wife is now taking classes at SJSU across the street from our condo. She's a grad student in the English department. Here's what one of her English professors said last semester: "Einstein's theory of relativity teaches us that everything's actually relative and nothing's absolute. Newton's Laws therefore no longer apply and science is in limbo. The only thing you can rely on is your own perceptions, ideas, and feelings." My wife was really taken aback and fought back, as an engineer caught in the Twilight Zone of "Arts and Parties" relativistic thinking. Newton's Laws are perfectly satisfactory still, she said. Bridges and buildings still stay up. Airplanes fly. Medicine works. Etc. Science is not dead, even in this postmodern world. Not every liberal arts class has someone like my wife to defend science. Lots of college students are being spoon-fed such mental junk by the "learned" reactionaries who think that science alone has gotten us into the current unhappy human condition, as though there weren't biased, imperfect humans at the tiller of progress. Regards, Steve Leibson

    • The following letter was sent in via e-mail from a reader: In the HP 35s teardown article from the February issue of Embedded Systems Design, you write: "The 8502 is designed by Sunplus Technology, a Taiwanese company. It’s based on the 6502, an 8-bit processor that first appeared on the Commodore 64 computer, which was popular around the same time that I purchased my 15c." Whoops! The 6502 was already quite mature by the time it appeared as the more highly integrated 6510 in the Commodore 64; it had already appeared in the original Apple I computer in 1976, as well as the Commodore PET and the Apple II family. Wikipedia correctly lists at least the Atari, BBC Micro, and Commodore VIC-20 as well. You might have a quick look at Wikipedia's entry for the 6502--it looks pretty accurate, at least to my recollections of 32 years ago (ouch). --Dana Myers

    • E-mail from a reader: Dear Mr. Nass: In his June column ("Faster!", June 2008), Jack Ganssle asks whether the software IC is even possible. It is easy to demonstrate that it is not. All software components (particularly the off-the-shelf kind) involve tradeoffs. If you use small components, you will need a lot of them, and hence a lot of glue logic--thus defeating the purpose. If components are large, they are either rigid and unlikely to fit your needs, or they are highly configurable and therefore very complex to understand, integrate, and maintain. Then there are the issues of source-code availability, royalties, and the fact that you are more dependent on outside vendors. But the software/IC analogy is actually flawed at a deeper level: if software were like integrated circuits, the best-designed systems that I have worked on would have their equivalent in several hundred ASICs, each approximately the complexity of the Z80, but all different and highly interconnected. Furthermore, during development (and after release), the ICs, the interconnections among them, and even the number of pins would be changing on a weekly basis as the market's needs evolved. Hardware ICs simply are not a good model for software. Additionally, if a company wants to differentiate itself from the competition, it has to produce most of its software internally, so that the effort it expends and the expertise it possesses will show up in the end product. If a software system were 95% off-the-shelf components and 5% glue logic, it would be easy to recreate, and that would demonstrate that the company had nothing of its own to offer. Internally written components are a good idea but expensive to get right. Unless you are using the same hardware, user interface, etc., over and over, you will find yourself writing large numbers of new components for each new system, hence nothing is gained. Software development is the encoding of messy, domain-specific, real-world knowledge into machine-usable form, and it is unique among human endeavors. Its incredible flexibility is also unique--and is its greatest strength. Our desire to formulate simple solutions just proves that we don't really understand it yet. The sooner we accept that it is different, the sooner we can move forward. Grant D. Schultz Senior Software Engineer Mission, KS

    • From a reader: To the Editor, The hardware dependency of the internet is worse than Jim Turley lets on. No one would expect a modern web site to work well using a Commodore 64 computer. Turley would have us believe that Microsoft is protecting its customer base by staying focused on more popular platforms. How about, say, any computer designed to run Windows 98? Worth supporting? The consensus is no. Any web site that insists on the latest version of Acrobat or Explorer is effectively saying "junk your older computer." New versions of Acrobat and Explorer long since stopped working with Win 98, and there is no way in the world to put XP on a computer from that period. The promise of universal access will only recede if a computer is unusable for browsing after 5 years. In a world of increasingly scarce resources, can we afford to continue to allow Microsoft and friends to turn our computers into landfill at that rate? Adding interesting new features is all well and good, but there is no good reason to axe backward compatibility. Thomas Lawson Malvern, PA

    • Reader comment sent by e-mail: The article mentions Wind River and MontaVista and details some of the numerous outrageous claims by these companies. The thing is, Green Hills is also actively making similar claims, and the article doesn't debunk them; it just states them. This is a way to join the others and get their negative message about Linux out. In passing, it should be noted that both companies were dragged into Linux by customer demand. Both companies continued to run their business as a proprietary venture while supporting Linux. As the article points out, this doesn't work. I worked at a company that investigated buying development tools and a kernel from various vendors. In the end we found the companies difficult to work with, limited in what they would support, and expensive. It should be noted that the developers were not interested in the tools, since they already had development tools that came with their desktop distributions. Desktop Linux and embedded Linux have grown to be pretty self-sufficient. The embedded designs that I have been involved with revolved around ASICs that included a CPU. All of the chip vendors provided a supported Linux kernel, root filesystem, and cross-development toolchain. In several years of working in this environment, there were remarkably few problems that went back to the vendor. Actually, most of the problems have to do with the code we are writing; there are seldom problems with Linux. Beyond that, a web search will find projects that focus on Linux for specific processors. These projects do processor-specific development work and feed the results to the kernel project. You can tap into the developer community and get a lot of support. There are also (usually gcc-based) cross-development toolchains supported by (or linked to by) the projects. The patching and security issues don't relate to us. The vendor provides us a kernel (2.6.18 at present) and a set of patches. We have made a few additional patches to support our hardware. We rarely get a new patch set from the vendor. Security is of limited concern: the users can only interact with the device through a GUI that is part of the product, and the product doesn't connect to the outside world. Their cost figures are stunning. For $192,000,000 you can write your own RTOS that is similar in features and quality to VxWorks, QNX, LynxOS, or any of the other RTOSes. You don't have to scour the Internet for patches; many distributions and CPU projects do that for you. There is a lot of practical support on the Internet and through vendors. Actually, I think the last security patch for a core Linux module was a couple of years ago. Linux is very modular. That is why it is a good fit for a wide range of systems. The core of Linux is pretty small, and you only need the architecture support, device drivers, network stack, and filesystems that are used in your product. This is, in our case, a minor fraction of the complete Linux. It would be very good for you to publish an article that debunks this article and explains why Linux is so important in the embedded space. --Philip Cameron

    • Reader comment submitted by e-mail: Dear Mr. Nass, I'm sure I am just adding to a barrage of responses you must have received to this article, but I feel that it is necessary for you to know how badly it has reflected on the otherwise good quality of your magazine. Although I get the point that two (previously unheard of) proponents of embedded Linux systems may have decided to condemn it, that doesn't mean an article should allow its tone to veer so far off course. Dan O'Dowd is obviously a man of FUD, and I hope his misuse of Embedded Systems Design has allowed ESD to create new filters for such uninformative and misleading information. It's a shame that MontaVista and Wind River Systems can't get their act together, but there are plenty of us out here who most certainly can. We will enjoy inexpensive development of embedded Linux systems on a wide range of processors and hardware configurations where these others have failed. Kind Regards, --Wes Cravens

    • Reader comment submitted via e-mail: I find it very sad to see this article in your otherwise high-quality magazine under the heading "guest editor." Under that heading, people may actually put some stock in it. A more appropriate heading might have been "Opinion" or even "Company shill" or "Paid programming." I can understand that Mr. O'Dowd wants to downplay the competition, but he may actually have done more damage to his own image and that of his company by resorting to scare tactics, out-of-context and partial quotes, and exaggerations instead of some real, solid arguments. Since he is CEO of a company that makes a product called "Integrity," I had expected more from him. MontaVista and Wind River Systems are both companies that are trying to make money off of software that is available for free. How can they convince people to pay them money for something that is free? By offering extra tools and services, and by offering their expertise. It is much easier to sell your tools if you can make your potential customers believe that the tools that are freely available are inadequate. It is also much easier to sell your expertise if you can make your potential customers believe that something is difficult. These marketing messages are intended to benefit the respective companies; they are not intended to show Linux in the most favorable light. After all, how would you convince anyone to pay you money if you advertised that embedded Linux is a breeze? The marketing departments of these companies have to tread a fine line, promoting Linux while at the same time convincing people that it isn't easy and that they will need help with it. Now Mr. O'Dowd takes these moderately "negative" marketing messages and twists them into such strong statements as "terrible," "horrors," "monster," and "nightmare." I'm sorry, but I haven't seen these words used by the companies he is basing his rant on. I haven't read the complete MontaVista article he quotes, but it seems to be highlighting the work they have to do to support many architectures and software packages to make a full distribution. If you're using Linux to develop a product, you are using a single architecture and a select set of packages. Often, your development board even comes with a Linux dev kit and Linux loaded on the board and ready to run. Making the work that MontaVista does and the work that someone making a Linux product has to do sound as if they are the same thing is more than a stretch. I won't even comment on the Wind River "CHAOS" ad. I have seen the ad, and talking about only half of it and not the other half should make it clear how genuinely Mr. O'Dowd is interested in being fair. As to his claim that these companies are in trouble, and somehow equating that to embedded Linux being in trouble, this argument would only sway those who are ignorant about how open source works. Unfortunately, that is still a large percentage of developers, which is why I thought it important to send this response. If these companies aren't making as much money as they hoped, could it be because embedded Linux is simple enough for most companies to tackle without their help? Due to the nature of open source, these companies might fold tomorrow and developers would continue to put Linux in their devices just as well. Could the same be said about "Integrity" if Green Hills were to fold tomorrow? 
Of course there has to be a survey thrown in for good measure, so I'll contribute one too ( ). It is well known that anyone can make a survey say what they want it to say. The one I found says Linux is the most popular embedded OS for the fourth year running. The one Mr. O'Dowd quotes says fewer people are considering using Linux because more are actually using it (those are separate survey options... get it?). Doesn't sound to me like we're talking about a "dwindling number of disenchanted embedded Linux users," as Mr. O'Dowd claims. By the way, Mr. O'Dowd, what are the stats on your embedded OS? Your article has all the twisted information, exaggerations, and other telltale signs of someone who is desperately trying to tear down the competition. Is fair competition not doing it for you anymore? --Patrick Van Oosterwijck Senior Software Engineer EcoWater Systems

    • Reader comment submitted via e-mail: What the heck is "" thinking with this article? I have to say that I am not a major embedded Linux fan, and when I first saw the article title, I thought it would be a good read. Then the first thing I scanned to was the author, and his being from Green Hills immediately raised a flag: this guy has an obvious vested interest in bashing embedded Linux. The content of the article just made it more obvious. I realize that there are probably months when you have trouble filling the pages of the magazine with good technical stuff, but this article is pitiful. Even if the negative Linux hype is all true, just the fact that the article was written by someone from Green Hills discredits it. Maybe if there were a version of the "Enquirer" for embedded systems, this article would fit well there. The editors need to keep a clear separation between "articles" and "advertisements"; otherwise, I personally think you're discrediting the magazine. --Ed Sutter

    • Reader feedback: Don't forget Jim Ready, founder of Hunter & Ready, whose VRTX was one of the early embedded operating systems (it runs the Hubble Space Telescope, among other things) and still exists today, sold by Mentor Graphics; he was later the founder of MontaVista embedded Linux. When I was just starting out in engineering, I was porting VRTX to a board and called the company. Jim Ready himself answered my call and helped me out. I was very impressed, to say the least! I did some research and found out from this link: that VRTX was the first commercial embedded operating system and that VxWorks was originally just a C library meant to be run on top of it. So I would say that Jim Ready is the most significant 'embedded' individual. He started the industry and continues to lead it to this day. Quite an accomplishment. --Tom Biggs

    • Undoubtedly, antinickname and General Bob are right: this list would be more pertinent and interesting if we chronicled embedded applications. That information, however, may be harder to find (as some of the information may be proprietary), but we're working on a list and will start adding milestones.

    • In response to kolio's comment: John Atanasoff's machine was neither programmable nor Turing-complete, nor did it have a stored-program architecture. But the courts ruled in his favor. We adjusted the milestone for ENIAC and will add Atanasoff's computer.

    • Reader comment: Mr. Ganssle, I should admit up front that I work for a defense contractor in a security-clearance environment, so working from home isn't much of an option currently. Though there might be some advantages to making my home office a restricted area (e.g., my kids would no longer be able to borrow my pencils or Scotch tape and not return them), I don't think it would really be worth the time and expense. Having said that, I don't think the security problems are the real issue. In my (not so) humble opinion, the biggest obstacle to working from home is management--the average manager does not trust his or her employees to actually work and has no way to determine whether they are actually working eight hours a day. What do I mean by that? I have never worked for a company that paid me based on what I produced. I have always been paid based on how many hours I spent in the building. That is, I don't get paid based on whether a particular work product has been completed within some predetermined time period; I am paid based on the fact that I spend forty hours a week in the building. Some of those hours are productive, some not so much; I get paid the same regardless. Why is that? You would think an employer would be more interested in what and how much I produce. But I have worked at only one company that made any effort whatsoever to measure productivity, and that was SLOC. (I'll leave the validity of SLOC as a productivity measure to another discussion.) If my manager has no way of measuring my production, how does he know whether I am working at home or just sitting at home in my underwear reading Dilbert? And yes, I once had a manager who would admit that he would not allow his employees to work at home because he did not trust all of them to actually work. I don't see work-at-home engineering making much progress until management learns how to measure productivity in a meaningful way. And for the record, I drive 25 miles each way. I spend about an hour and forty minutes on the road each day. That doesn't bother me as much as I thought it would; I have become a podcasting fanatic and am probably better informed now than I was when I drove less. (Any chance The Embedded Pulse or The Embedded Muse will ever become an audio podcast?) I bought a Honda Civic Hybrid when I started this job, which keeps my gas consumption down to about four gallons a week. If the car lasts ten years, it should work out to somewhere between $4000 and $5000 a year. The irony is that driving less wouldn't save me a lot of money, as the fixed costs (the car and insurance) are so much larger than the marginal costs (gas and maintenance). Having said that, I would love to work at home. I just don't see it happening unless I change employers. And move out of the defense contracting industry. Thanks for the thought-provoking essay. William Carroll The Embedded Avenger

    • Reader feedback: Dear Mr. Nass, I read with interest your article about the HP 35s. I have two very minor corrections: 1) The microcontroller is made by Generalplus, which was spun off from Sunplus some time back. 2) The second chip is a 32Kx8 static RAM, not flash memory. You wrote that you "(weren't) willing to destroy your 15c (sic) for the sake of this article." The earliest units packaged the LCD and three integrated circuits onto a module wrapped in black tape (probably antistatic), separate from the keyboard. Peeling the tape reveals the three custom CMOS chips in PQFP packages:
      1LF5-0301 "Nut" processor (bit-serial, 56-bit word)
      1LE2-0321 RAM/ROM/display driver ("R2D2"), 6K*10 ROM, 40*56 RAM
      1LH1-0302 RAM/ROM (R2D2 with the display driver not bonded out)
      Later units had a more conventional design with a single circuit board. Over the years the electronics were cost-reduced several times. The 1LF5-0301 was replaced by a 1LM2-0001, and the 1LE2-0321 was replaced by a 1LH1-0306. Eventually the CPU and first R2D2 were combined into the 1LQ9-0325. The "Nut" processor was originally designed for the HP-41C, and the variant in the 15C (and related models) is specified for a lower operating voltage but is otherwise functionally identical. The architecture is an evolution of the previous two generations of processors introduced in the HP-35 and HP-25. I've written Nonpareil, a microcode-level simulation of many of the old HP calculators (including the 15C), which may be found at Sincerely, Eric Smith

    • A comment from a reader: Hi Jack, Very well put. I've got some stories for ya... Two years after I came to MGH, I was asked to accept a director-level position overseeing medical equipment maintenance. I was flabbergasted that upper management had somehow come to believe I was interested in such a role. I knew where it came from; I had spoken up about the poor quality of service management, initiated and led a couple of efforts to reduce key service-contract costs by hundreds of thousands of dollars, and generally paid attention to the service-related aspects of my job. In other words, I just did the whole job, and somehow the fact that my predecessors hadn't led to the obvious conclusion that I was interested in doing this type of thing. I turned the job down twice but talked myself into it on their third attempt at getting me to take it, by deciding I could apply systems principles to the problems facing the service operations, turn it around, and work myself into something I'd rather be doing. What I didn't factor in was "What if they have plans for me other than what I've espoused?" I was soon reminded of the line from Simon and Garfunkel's "The Boxer": "All lies and jest / Still, a man hears what he wants to hear / And disregards the rest." Soon I found myself wrapped up in achieving others' visions, forced either to let the foundation-level systemic problems I knew had to be solved languish or to put in 60-hour weeks. I tried the latter, but after two years of its impact on my family and me, I very suddenly exited the hospital for two weeks, got professional medical help, and came back to leave the uppers with a choice of their own: find a way to let me go back to engineering, or amicably part ways. I was fortunate to report to a director who succeeded in finding me a way back to an engineering role within an organization that I actually liked and still like working in. I was soon leading and then managing the team I enjoy working with to this day, but on terms more consonant with my interests. I've looked back on the long, drawn-out episode often. I'm struck at how management draws conclusions and makes plans without bothering to understand. A simple conversation with me would have revealed that I would not have been interested in what they wanted me to do. But then again, I could have explicitly told them what I wanted to do as well, and I'm not sure that I did. Unvoiced expectations have soured many a relationship. I refuse to get caught in that trap again. On a parallel track but at a different time... I was promoted to a supervisory role at Hopkins long before coming to MGH, and at the time the John had an outstanding leadership development program. I took every course offered, which back then was considered the equivalent of a graduate certificate in management. It was around the same time that I went back to church and soon started noticing similarities between the way I was being taught to be a leader and the way the Bible captures Jesus teaching his disciples. For the record, I don't believe Christianity holds the franchise for this, and I realized that the Golden Rule and all derivations thereof are really what it's all about. A manager is, after all, but one member of a community, called to contribute his or her share for the good of all. A year or two before leaving the John, I was invited to a leadership assessment program (they had plans, but I did, too...). There were six of us being "assessed" by higher-ups (I found it bizarre). 
We were to work together for a day as "the executive leadership of a community hospital." Before the event, we had a preassignment. I was told to show up at a certain time at an office and be directed as to what to do. I was given a manila envelope and directed to another office. On opening the envelope, I learned I had just accepted a job that I would be starting in a few days; in the meantime I needed to review messages from my new bosses and reports, and I had to get it done in an hour before going out of town for three days. There were requests for me to make decisions, fix employee problems, etc., by and for people I had never met, in a job about which I knew basically nothing. I basically wrote that I would deal with it all when I got back. Going into the assessment session, the assessment of the preassignment was done by a senior HR leader, and she came right out and honestly said she'd scored me as low as possible and wanted to understand my handling of the messages. I told her there was no way I was going to make decisions like these after having spent a month, let alone an hour, on the job. I was quite convinced I had made a mistake in accepting the position and had no intention of working for an organization that would expect such a thing. She started to smile and said I should have been told I had more than an hour. I replied that it didn't matter and asked if she would like to work for an organization like that. She burst out laughing and changed my score to the highest she could give. At the end of the session, the assessment of me was that I could be whatever I wanted; I just had to figure out what that was. If all managers would take that to heart instead of trying to be what others want them to be, we'd all be a lot better off. Rick Schrenker Systems Engineering Manager Dept. of Biomedical Engineering Massachusetts General Hospital

    • Richard, I read, with interest, your article of Jan. 14 in EE Times on open source. While I think you did a great job on the article, I disagree with "If you choose the version of open source Linux that's truly free, the support you receive for that OS is nil. (You get what you pay for.)" on page 20. I've been in the embedded software business for about 25 years, using a mix of commercial, open source, and roll-your-own RTOS's. First of all, in my view, the commercial RTOS's promise of support is highly overrated. Even with the best support, you spend hours on the phone (or e-mail) trying to get to the (very protected) person who really knows the answer. With open source I can turn to any one of about 20 online forums to get help. I consider the "you get what you pay for" support of open source far better than any my company has ever purchased. You need to look at open source sort of like the Internet; the adage "you get what you pay for" does not apply. I think you'd agree that there is tons of good information available on the Internet for free. Well, some of that free information is support for RTOS's. Sincerely, Rick Bronson

    • Dear Jack, I read your article in EE Times and I couldn't agree more. (I write a direct reply because the website has no comment facility.) First of all, the term multicore is widely abused. SMP is not at all the same thing as putting together a couple of CPUs. As you point out clearly, even with a single CPU the bottleneck is the memory, and worse, for real-time it is the culprit. Even on this PC there is a factor of 100 between an access to L1 cache and one to external DDRAM. Windows makes that even worse; we measured that the concurrency performance of a 1.6-GHz Windows PC is equivalent to that of a 15-MIPS microcontroller running a native RTOS. It is even rather unclear how Windows manages the scheduling, and as a word of consolation, Linux doesn't perform much better. While a dual core can help a little bit (if the code runs in a loop), I have seen benchmarks where the performance even goes down when using a quad-core. Very predictable, because of the shared-memory issue. A couple of things need to change:
      - People and engineers should learn to distinguish between real-time on a desktop and real-time for an embedded device. You are doing an excellent job with, e.g., your newsletter, but when are computer scientists going to start teaching it?
      - Designers should stop developing shared-memory architectures. Not only because of the speed mismatch; avoiding shared memory has other benefits as well, like physical decoupling between application tasks, less power consumption, and simplicity. No need for complex bus-sharing protocols and cache-coherency logic.
      - Software engineers should learn that (embedded) software is concurrent by nature and that communication and interaction between "tasks" are as important as the tasks themselves. Communication means more than bus bandwidth; it also means latency.
      What it comes down to is that embedded software should be designed from the beginning as concurrent programs. This fits well with a model-driven architecture design process. The issue, I believe, is that computer scientists often don't see this concurrent and real-time aspect, and hardware designers often design synchronously. In other words, both groups think in terms of sequential loops. You might say that this will never change because of legacy reasons. I believe that is likely true for the IT market. But for the embedded world there is little reason it should not, as quite a lot of designs are started from scratch anyway. You might say that we don't have the programming model for it. That is true if one keeps searching for inspiration on the desktop. In reality we have the programming models. Just think about CSP. CSP has been associated with the INMOS transputer and its arcane occam programming language. It was very successful with a small group and failed because of wrong marketing, but its value remained. I have spent most of my life applying this computing paradigm with success. But we called it a "pragmatic superset of CSP." Targets ranged from single-chip micros to systems with a few thousand DSPs. We have now reinvented this concept and called it OpenComRTOS. It is a network-centric RTOS, but it is also a programming paradigm. We used formal methods to develop it, and the results are astonishing. We can fit 1 KB of code (SP), or a full-blown RTOS with MP support (events, semaphores, FIFOs, resources, memory pools, ...) in 5 KB. We have a demo where it runs distributed on two 16-bit micros, each with only a few KB. Another demo runs the same code (after recompilation) on top of a Windows node connected via the Internet to a remote virtual server running Linux. 
We can transparently put a few tasks on these 16-bit micros and hook them into the network using a simple RS232 driver. The aim of this reply is not so much to promote OpenComRTOS as to show that "multicore" programming doesn't need to be an enigma. Most of the basic solutions were thought out some 30 years ago. Even Dijkstra had already solved most of the fundamental issues. If you design "parallel," there is no need to reverse engineer big sequential programs, and you gain a lot. And if some part is very compute intensive, you have two options: either split the data over multiple CPUs or, if you have a big vectorising CPU, sequentialise. But if you run out of cache, remember the first paragraphs above, as you will start losing performance rapidly. Best regards, Eric Verhulst
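      [Editor's note: a minimal C sketch of the "split the data over multiple CPUs" option Mr. Verhulst mentions, using POSIX threads on a desktop; the array contents, sizes, and task count are hypothetical.]

      #include <pthread.h>
      #include <stdio.h>

      #define N      1000000           /* chosen to divide evenly by NTASKS */
      #define NTASKS 4

      static double data[N];

      struct slice { int lo, hi; double sum; };

      /* Each task reads only its own slice; no shared mutable state. */
      static void *sum_slice(void *arg)
      {
          struct slice *s = arg;
          s->sum = 0.0;
          for (int i = s->lo; i < s->hi; i++)
              s->sum += data[i];
          return NULL;
      }

      int main(void)
      {
          pthread_t tid[NTASKS];
          struct slice sl[NTASKS];
          double total = 0.0;

          for (int i = 0; i < N; i++)
              data[i] = 1.0;
          for (int t = 0; t < NTASKS; t++) {
              sl[t].lo = t * (N / NTASKS);
              sl[t].hi = (t + 1) * (N / NTASKS);
              pthread_create(&tid[t], NULL, sum_slice, &sl[t]);
          }
          for (int t = 0; t < NTASKS; t++) {
              pthread_join(tid[t], NULL);
              total += sl[t].sum;      /* results combined only after join */
          }
          printf("total = %f\n", total);
          return 0;
      }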

    • Reader feedback from EE Times article: So let's see--MV vs. Wind River. WR's Linux business is growing rapidly, their VxWorks biz is doing well (they have real strength in mil/aero), they have a services group that is larger than MV's entire company, and they have demonstrated major emphasis (e.g., Android) and wins in Linux. Seems to me they're the ones who are likely to win. --whatshisname CEO

    • Reader feedback: Rich, Saw your article on OS adoption. You nailed it when you pointed out how designers start with a "free" OS and only change to a supported technology if they encounter issues. Based on the market research we recently did on embedded databases (I sent you a summary a few weeks back), I believe a similar dynamic is at work for embedded DB technologies. Designers who don't want to do it all in-house first turn to open-source DB technologies like SQLite, MySQL, or Berkeley DB. Typically, it's only if they encounter performance, size, flexibility, or support issues with these open-source technologies that they turn to the commercial alternatives. This makes it hard going for the commercial embedded DB vendors, who have to unseat something that's free (either in-house labor or open source). Here at Encirq we remain focused on DeviceSQL, an SDK that makes it easier for designers to create a faster, smaller embedded DB than they would get with either the open-source or commercial embedded DB alternatives. However, as with the OS guys, we need to broaden our horizons to higher-margin segments outside of consumer electronics to drive the business. Anyway, just wanted to share these observations. Jan Liband

    • Nihil Obstat: Thanks for the catch. This was an editing error: I linked to the wrong book and referenced the wrong authors. I've corrected it above by reinstating what Jack originally wrote. Thanks again.

    • The following comment was submitted by Eric Weddington, Product Manager, Atmel: Hello, I recently saw the article "Use an MCU's low-power modes..." by Miro Samek in the October 2007 issue of Embedded Systems Design. I noticed, with interest, that the Atmel AVR microcontroller was reviewed and, specifically, that example code for the WinAVR (GNU) toolchain was given. However, I was disappointed to see that the author seemed to know nothing about the APIs that are available from AVR-LibC, the C library for the WinAVR (GNU) toolchain. The example code given in the article was inline assembly statements, which are ultimately not needed. The AVR-LibC documentation can be found online. AVR-LibC provides an Interrupt API and a Sleep API, whose documentation gives an excellent example of usage and a description:
      -------------------
      #include <avr/interrupt.h>
      #include <avr/sleep.h>
      ...
      cli();
      if (some_condition)
      {
          sleep_enable();
          sei();
          sleep_cpu();
          sleep_disable();
      }
      sei();
      "This sequence ensures an atomic test of some_condition with interrupts being disabled. If the condition is met, sleep mode will be prepared, and the SLEEP instruction will be scheduled immediately after an SEI instruction. As the instruction right after the SEI is guaranteed to be executed before an interrupt could trigger, it is sure the device will really be put to sleep."
      -------------------
      The Sleep API also provides a method to set the desired sleep mode that is portable between AVR devices, since just manipulating the SMCR register is not ideal: some AVR devices use a different register name. With the Sleep API, setting the sleep mode is as simple as writing:
      set_sleep_mode(SLEEP_MODE_PWR_SAVE);
      Additionally, AVR-LibC provides a Power API, which provides further means of lowering power. The Power API provides convenience macros that manipulate the AVR Power Reduction Registers (PRR), and other macros that produce inline assembly to manipulate the Clock Prescaler Register (CLKPR), in a way that is portable across AVR devices (where these registers exist) and that ensures the correct timed assembly sequence is produced no matter what the optimization setting. An article that just skims the surface of the issue of low-power settings in micros does not do justice to the hardware and software features that are truly available to the end user. Eric Weddington Product Manager, Atmel Creator of WinAVR
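      [Editor's note: a short sketch combining the Sleep and Power APIs Mr. Weddington describes, for AVR parts that have the PRR and CLKPR registers; exactly which power_*() macros exist depends on the device, so check avr/power.h for yours.]

      #include <avr/interrupt.h>
      #include <avr/power.h>
      #include <avr/sleep.h>

      static void enter_low_power(void)
      {
          power_adc_disable();                 /* stop unused peripherals via PRR */
          power_spi_disable();
          clock_prescale_set(clock_div_8);     /* CLKPR timed write handled for us */

          set_sleep_mode(SLEEP_MODE_PWR_SAVE); /* portable across AVR devices */
          cli();
          sleep_enable();
          sei();
          sleep_cpu();                         /* wakes on the next enabled interrupt */
          sleep_disable();
      }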

    • We used this text with the permission of the publisher, Newnes/Elsevier. This series of nine articles is based on copyrighted material from "Computers as Components: Principles of Embedded Computer System Design" by Wayne Wolf. The book can be purchased online. --SRambo Managing Editor Embedded Systems Design