Titanic discoverer Robert Ballard held the audience and yours truly spellbound as he described his oceanic explorations during the main keynote address at the ElectronicaUSA/Embedded Systems Conference in San Francisco last March. Among Ballard's finds are the Titanic, the Bismarck, and PT-109. He's now interested in ancient shipwrecks, those that plied the Mediterranean and Black Sea trade routes.
Wine was a prized commodity in the ancient world and ships carried vast quantities across the seas. Ballard realized that sailors then, as sailors now, had a certain fondness for the grape and were likely to sample the cargo whenever the captain was busy elsewhere. Even drunk seamen knew better than to leave drained amphorae lying about, so they surreptitiously slipped the evidence overboard.
Ballard, a sailor himself, searched in a route perpendicular to the course traveled by the ancient vessels, looking for a telltale pattern of amphorae empties on the sea floor. He simply followed the trail till coming across the old vessels.
In an hour Ballard's lecture ranged from oxygen-depleted sea bottoms where nothing rots, to tube worms that thrive through chemosynthesis, to his work helping children around the world learn about science. His message: white boys aren't measuring up as they used to. Minorities and girls are passionate and involved with his Jason project.
Diversity seemed an appropriate theme for this year's Embedded Systems Conference, which merged with ElectronicaUSA, sponsored by the German Messe Muenchen. The show had an international flavor more typical of a European location than the Moscone Convention Center. Attendees from all over the world wandered the halls. The 430 exhibitors, up 50% over last year and hailing from more countries than I could count, packed the show floor. Attendance rose 30%, and an astonishing 275 speakers presented courses that ranged from basic embedded to doctoral thesis coursework.
If you've ever been to Electronica in Munich you know the sprawling show that occupies dozens of buildings, with exhibits covering everything electronic. This year's Embedded Systems Conference benefited from the merger with Messe Muenchen; unlike previous embedded systems shows, this one included companies that sell components of all sorts. Since the embedded system is a feature of every electronic product this broadening of the show was entirely appropriate. Anyone buying connectors also needs a microprocessor, as well as a bit of software.
Jerry Fiddler of Wind River fame talked about this broadening of the embedded systems business in his keynote address. “There's no such thing as an embedded market,” he said, “the device is much more interesting than the embedding.” To me, the embedding is the fun part, but he's certainly right that microprocessors are as ubiquitous as printed circuit boards and switches. Talking about “the embedded space” is perhaps as silly as thinking in terms of “the printed circuit-board space.”
Fiddler claims that 70% of the development money for the, uh, embedded space, goes into firmware. He didn't touch on the odd fact that embedded systems tools account for about a buck fifty in worldwide sales, yet the products they render represent a market value of hundreds of billions, if not trillions. I've been puzzling over that dichotomy for years.
Prognosticating is always tough, but Fiddler's prediction was that within five years some 14 billion devices will be connected, which seems believable. Add up the number of PCs in the world and you quickly see that most of these connected devices are embedded systems, not Dells or Macs. He posed an interesting question: How will we build and manage hundreds of billions of smart, interconnected products?
Fiddler feels that biological DNA (deoxyribonucleic acid) is the ultimate RISC machine since DNA has only four “instructions”: adenine, thymine, guanine, and cytosine. As long as nobody ports Internet Explorer to DNA I guess we'll be safe from nonbiological viruses.
But what about cool new products? There were plenty, far too many to cover in one short article.
My unscientific guess is that the ratio of huge booths to tiny ones has declined; many new, smaller companies attended. That's always a good thing for innovation, and at this show many of the petite-sized companies had some very interesting products.
Micriμm was one of these, this being its first year at the Embedded Systems Conference. It's releasing a TCP/IP stack for μC/OS-II. Protocol stacks are legion, but I eagerly await this one, which will be available shortly after this article goes to print. I read a lot of code, but μC/OS-II is the cleanest code I know and is a joy to peruse. Micriμm sent me its TCP/IP stack, and the source is even prettier than μC/OS-II.
More interesting, it was written with the guidance of Validated Software to ensure the package will be certifiable to the highest levels of DO-178B. μC/OS-II has already been accepted at level A, which makes it appropriate for the most demanding of safety-critical applications. Even developers of games or consumer appliances benefit, since you know the software works, sometimes a rare occurrence in this business.
Another new company, eCosCentric, also made its show debut at the Embedded Systems Conference. The developers of eCos, an open-source RTOS, fled Red Hat when that company lost interest in the, uh, embedded space in 2002. Some formed eCosCentric, a UK-based firm that supports and provides tools for the OS. The company now provides an Eclipse-based integrated development environment (IDE) that runs under Unix/Linux or on Windows machines via Cygwin. It claims a mere four mouse clicks builds the real-time operating system. Interestingly, some 60% of its customers use ARM processors. Next in popularity: Motorola's ColdFire.
Or is that Freescale's ColdFire? Though Motorola is spinning off the chip business into a new company named Freescale, at the moment (April) the deal still awaits U.S. Securities and Exchange Commission approval. Motorola's, uh, embedded systems business was exhibited under the 70-year-old Motorola name perhaps for the last time.
Eclipse is hot; most tool vendors showed or were talking about Eclipse IDEs. As the promotional material maddeningly states: “The Eclipse Platform is an IDE for anything, and for nothing in particular.” It's a framework that unifies the Babel of incompatible tools under one IDE. Individual tools are coded as one or more plug-ins; the Eclipse front end is essentially a mechanism for discovering, integrating, and running these. Like it or not, Eclipse is the future; expect your software-tool vendors to migrate to it in coming years.
SoC, CSoC, FPGA, ASIC, extensible, configurable, IP: the customizable chip biz is awash in a headache-inducing sea of acronyms. But it all comes down to a simple idea: if you want to stuff a lot of logic on a single chip, build an ASIC. Except that ASICs cost a pile of money. In the good old days of 0.25-micron geometry a few hundred grand would get you a mask set. Today's 90nm parts cost millions in design, verification, and tooling. One white paper I saw suggested that a 90nm ASIC makes sense only for markets that sell over $10 billion per year. Cell phones and one or two other products fill that bill, but not many others.
Yet there's a market for very dense logic, whether buried in an FPGA or an ASIC. Put everything on one chip and production costs plummet. Beyond the cost issue, though, is performance. Increasing CPU speeds and horsepower can't keep pace with the escalating demands from customers (or, at least, from the marketing droids). It's not practical, in many cases, to crank up the processor's clock rate. Power requirements skyrocket, sucking mobile products' batteries dry in minutes and creating excess heat. RAM costs are huge for zero-wait-state memory at high speeds.
In the olden days engineers working with breakneck data rates used a CPU for housekeeping and custom logic, often configured as a state machine, to handle the data torrent. Several vendors have extended the idea, pushing CPU cores into the FPGA or ASIC.
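A toy sketch of that housekeeping-versus-datapath split, under an invented framing protocol (a 0x7E sync byte followed by a one-byte length; none of this comes from any real standard): the state machine touches each byte exactly once, which is why this sort of logic historically went into hardware.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical byte-at-a-time frame decoder, the kind of job that was
 * traditionally wired up as a hardware state machine while the CPU
 * handled housekeeping. Protocol details here are invented. */
typedef enum { ST_IDLE, ST_LEN, ST_PAYLOAD } fsm_state;

typedef struct {
    fsm_state state;
    uint8_t   remaining;    /* payload bytes still expected */
    size_t    frames_done;  /* completed frames seen so far */
} framer;

void framer_init(framer *f) {
    f->state = ST_IDLE;
    f->remaining = 0;
    f->frames_done = 0;
}

/* One state transition per input byte. */
void framer_step(framer *f, uint8_t byte) {
    switch (f->state) {
    case ST_IDLE:                    /* wait for the sync byte */
        if (byte == 0x7E) f->state = ST_LEN;
        break;
    case ST_LEN:                     /* next byte is the payload length */
        f->remaining = byte;
        if (byte == 0) { f->frames_done++; f->state = ST_IDLE; }
        else           { f->state = ST_PAYLOAD; }
        break;
    case ST_PAYLOAD:                 /* consume payload bytes */
        if (--f->remaining == 0) {
            f->frames_done++;
            f->state = ST_IDLE;
        }
        break;
    }
}
```

Moving exactly this structure into FPGA fabric is what frees the processor for everything else.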
Design tools make plopping a processor onto the chip almost trivial. Today Gartner Research says a third of all FPGAs have some sort of processor included; three-quarters of these are soft cores. (By soft core I mean one programmed onto the chip just like the rest of the custom logic; a hard core is part and parcel of the chip itself. Believe me, you don't want to learn about these on the Web: a search for “hard core” brings up lots of hits but little having to do with electronics.)
Altera previewed the Nios-II, a new version of its 32-bit soft CPU core. The device eats surprisingly little logic, consuming less than 20% of the cells on Altera's low-end Cyclone FPGAs. That drops to under 1% on the bigger parts. A 200MHz processor burns about half a buck of FPGA real estate. The Nios-II lets you extend the instruction set. Need a better multiply-and-accumulate? Design the logic and tie it to one of the extended instructions instead of painfully setting up an I/O device.
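To make the custom-instruction idea concrete, here's a hedged sketch: on a real extensible core the toolchain maps an intrinsic onto a single opcode, but the `mac_ci` function below is plain C so the pattern compiles anywhere. None of these names come from Altera's actual API.

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for a custom multiply-and-accumulate instruction. On a real
 * extensible core this call would become one opcode executed by logic
 * you designed; here it's ordinary C for illustration only. */
static inline int32_t mac_ci(int32_t acc, int16_t a, int16_t b) {
    return acc + (int32_t)a * (int32_t)b;
}

/* A dot product built on the custom instruction: no I/O device to set
 * up, no intermediate stores, just an opcode in the inner loop. */
int32_t dot(const int16_t *x, const int16_t *y, size_t n) {
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc = mac_ci(acc, x[i], y[i]);
    return acc;
}
```

The win over a memory-mapped MAC peripheral is exactly what the text describes: the operation lives in the instruction stream instead of behind painful I/O setup.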
Both μC/OS and μClinux 2.6 have been ported to the Nios-II. An Eclipse-based IDE enables you to program and debug the thing.
Altera favors multiple cores on a single chip, using the processors to partition the software problem into smaller bits and pieces. It's a dandy idea since doubling the size of the program (in lines of code) grows the schedule by much more than a factor of two. Barry Boehm's COCOMO model shows how partitioning a big problem into many individual smaller and independent parts can cheat this schedule growth.
No company pushes the multiple-CPU idea harder than Tensilica. One of its customers has over 150 processors on a single ASIC, and the average is six per chip.
Tensilica's Xtensa processor is a bit of intellectual property (read: you pay your money, you get a disk) that represents a 32-bit RISC core. It has an extensible instruction set. Clever tools analyze your C code and automatically generate instructions to improve, often drastically, system performance.
The company previewed its Xtensa LX core, which adds a number of speed-enhancing features. One of the most eye-popping is a configurable I/O channel that supports transfers of up to 1,024 channels, each 1,024 bits wide, in a single clock. At 350MHz that's some 350 terabits per second. Another addition draws on the processor's extensible architecture to essentially eliminate I/O ports; data is inherently transferred via queues as part of an instruction's operation. A+B could automatically go to a device or location, without the usual intermediate store operation. That's pretty cool.
Even cooler, though, was the SozBot demo at Tensilica's booth. These one-pound-or-less robots competed in an orgy of destruction reminiscent of the Roman Colosseum. Robots deployed flame-throwers, scoopers, and circular saws that tore through each other like a demolition derby. The crowd went wild; it's always fun to see someone else's technology destroyed in a shower of sparks.
To round out the FPGA/ASIC category I must mention Agilent's FPGA Dynamic Probe. Designers typically debug FPGA designs by routing nodes to a few debug pins. Want to probe a different node? Change the design, recompile, and try again. The Agilent product essentially installs a mux inside your chip that routes up to 64 nodes to as many debug pins as you're willing to allocate. Nothing special there, right? The company went an important step further and integrated the tool, the logic analyzer, and the FPGA-design software. Select the nodes you want to probe on the schematic diagram and the system recompiles the schematic, downloads to the FPGA, and sets up the logic analyzer with the nodes' names, all in seconds. All of the manual drudgery disappears.
Test has always been a thorny problem for embedded systems. Our friends in the information technology business can easily construct automatic regression tests. Embedded systems, uh, devices, are more problematic. Who will look at the LED display and press the buttons at the right time?
Over the years a number of companies have offered testing solutions. Few survived. Virtutech has a different approach, one that's not cheap but that offers more precise simulation of the system, from I/O through all of your code. Its Simics product is a platform that runs your firmware. A library of drivers simulates all of the I/O. I started to yawn at this point in the demo, having heard many such promises in the past. The intriguing part of the product is that Virtutech comes into your shop and spends perhaps months working with you to develop simulators for the devices you're really using. Sure, you have to write them a check but you get an effective simulator for your system. The show demo had 1,000 copies of MontaVista Linux running on 13 physical computers. The user interaction was surprisingly responsive.
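The underlying trick can be sketched in a few lines, with invented register names and behavior (this is my illustration, not Virtutech's architecture): stub the memory-mapped I/O with simulated state so a regression test can "press the buttons" in software.

```c
#include <stdint.h>

/* Hypothetical sketch: firmware logic tested on a PC by replacing
 * memory-mapped I/O with simulated registers. All names are invented. */
static uint8_t sim_button;   /* stands in for the button input port */
static uint8_t sim_led;      /* stands in for the LED output port   */

static uint8_t read_button(void)     { return sim_button; }
static void    write_led(uint8_t on) { sim_led = on; }

/* The firmware logic under test: light the LED while the button is down. */
void poll_once(void) {
    write_led(read_button() ? 1 : 0);
}

/* Test hooks: the "finger" that presses the button, and the "eye"
 * that watches the LED. */
void    sim_press_button(uint8_t down) { sim_button = down; }
uint8_t sim_led_state(void)            { return sim_led; }
```

The hard (and expensive) part, as Virtutech's engagement model suggests, is writing simulated drivers faithful enough to the real peripherals to be worth trusting.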
Virtio showed another simulation environment, one more akin to the traditional canned software solution. But its twist, too, is interesting. Customers buy a Virtual Platform, which is a complete software-only simulation of a particular microprocessor development board and CPU, including all of the I/O. The benefit over just buying development boards for each engineer somewhat eludes me, but if I were traveling, having the complete environment hiding in my laptop would be pretty attractive.
Berkeley Design Technology's booth was mobbed every time I wandered by. This small outfit offers design assistance, training, technical reports, benchmarking, and more for DSP devices. One handout, “Choosing a DSP Processor,” is a must-read for anyone making such a decision. If you have any interest in DSPs check out the company's Web site.
Tuesday morning I hosted a Shop Talk session about offshoring. About 130 people showed up in a room meant for half that number. Wildly divergent opinions ranged from “there is no problem” to “this is the end of the world.” A few demands for more government control brought an outburst from one Romanian engineer, who said “believe me, I've lived under socialism. It sucks.” After the laughter died down even the most liberal of us pondered his wise words. Despite lots of scintillating repartee, no one had any practical solutions.
Me, I've decided to just leave a trail of empties for some future scientist to find.
Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at email@example.com.
“Fiddler feels that biological DNA (deoxyribonucleic acid) is the ultimate RISC machine since DNA has only four “instructions”: adenine, thymine, guanine, and cytosine.” And I guess my PC has only two instructions since it is programmed in binary. DNA is read in groups of 3, giving 64 possible combinations, which resolve to about 20 “commands”. It might still be a record, but how many instructions did the 4004 have?
On another topic, I wonder how much luck I'd have convincing my boss that I was really looking for “hard CPU cores”?
– Mark Moss