Top ten milestones in embedded 2012

The editors asked me to put together a list of what I feel are the ten best things that have happened in the embedded space in 2012. Rather than do that, I've compiled what I see as the ten most important things this year for embedded systems.

Number 10: Sub-$0.50 32-bit processors.
NXP and others have introduced ARM Cortex-M0 microcontrollers for tens of cents. Put a high-end CPU in your product for a tenth of the cost of a cup of Starbucks. Does this spell the end of 8 and 16 bits? I don't think so, but it does shift the landscape considerably.


Related content:
Jack Ganssle's “The low-pin count LPC800”

Number 9: Ada 2012
Ada 2012, a new version of Ada, includes design-by-contract to automatically detect large classes of runtime errors. Though Ada's use is still very small, it does offer incredibly low bug rates. In the past design-by-contract was only available natively in Eiffel, which has a 0% market share in the embedded space.

Related content:
Jack Ganssle's “Ada gets a makeover”
Ada 2012 Language Reference Manual lists the updated features
Number 8: Xilinx acquires PetaLogix and rolls out the Zynq FPGA
The Zynq has twin Cortex-A9 cores. It's interesting in that it's less about a massive FPGA and more about hard cores with some configurable logic attached. PetaLogix has a great demo showing interrupt latency on each core, one running FreeRTOS and the other Linux; although Linux is a wonderful OS, it isn't an RTOS replacement.

Related content:
Jack Ganssle's “The rise of FPGAs?”

Number 7: The CoreMark benchmark goes mainstream
While CoreMark has been around for some time, in 2012 a number of microprocessor manufacturers started using it strategically to differentiate their offerings. CoreMark numbers now even show up in datasheets. ARM leveled the playing field… will CoreMark upend it?

Image from article CoreMark: A realistic way to benchmark CPU performance.

Related content:
CoreMark: A realistic way to benchmark CPU performance
Number 6: Ivy Bridge released
Although Intel's part is not targeted at the embedded space, their successful use of 22nm geometry, enabled by FinFET transistors, is causing the other foundries to scramble. You can be sure we'll see FPGAs at this process node before long, which will mean higher density and lower power consumption (at least on a per-transistor basis). Today both Altera and Xilinx are shipping 28nm parts.

Image from TechInsights' Ivy Bridge teardown as shown in EE Times.

Related content:
“Analysts start Intel Ivy Bridge CPU teardown” by Rick Merritt, EE Times.
TechInsights' “Inside Intel's 22nm Ivy Bridge processor”

Number 5: Foxconn plans to add 1 million robots.
Nope, this isn't happening in 2012, but that oft-reviled company is starting to ramp up their robotics. What will this mean? A ton of layoffs in China, that's for sure. It will also be a shot in the arm for those vendors who make the embedded systems that go into robots. I suspect the economy of scale will drive prices down substantially, creating more opportunities for robots there and here in the West. The impact on employment will be scary.

Foxconn workers build products at a facility in Shenzhen, China.

Credit: ©Steve Jurvetson/Flickr

Number 4: ARM's big.LITTLE heterogeneous cores
If there is a theme about embedded in the last year or two, it's that of power management. It's all about the Joules when running from a battery. A smart phone demands a ton of computational capability when active, but does spend most of its time loafing. ARM mixed a Cortex-A15 with an -A7 on one die. The A15 runs when demands are high; otherwise it sleeps and the A7 runs exactly the same code while consuming less power. Other vendors have taken somewhat similar approaches, like NXP in their LPC4350 which mixes a Cortex-M4 and -M0 on a single chip.

Image: ARM Cortex-A15 and Cortex-A7 big.LITTLE hardware, from ARM.

Number 3: Improved tools to measure power consumption of devices
To continue the power-management theme, a number of vendors have introduced or improved tools for measuring a device's power consumption. ARM's DS-5 toolchain now works with National Instruments' data-acquisition hardware; Segger has a brand-new debugger that measures power; and IAR's has been improved. All three correlate power consumption with the running code (with some caveats). Then there are low-cost devices like Dave Jones' µCurrent, and a very new and innovative product I'm not allowed to talk about yet. The bottom line is that designers of low-power systems now have tools that operate in both the power and code domains.

Image originally from the April 2009 issue of Silicon Chip magazine.

Number 2: Innovations in gesture UIs, such as Microchip's GestIC parts
Also huge in the last few years are new ways to interact with devices. Apple refined the UI with touchscreen swiping. Kinect uses a camera to sense a player's inputs. This year Microchip introduced their GestIC parts that sense hand gestures made within 15 cm of a device. It can detect the hand position in 3D space, flicks, an index finger making clockwise or counterclockwise circles, and various symbols. And, no, as yet it cannot detect that gesture you were just thinking about.


Number 1: Searching…, searching….
Finally, the biggest development in 2012 is the one that didn't happen. Despite sales of hundreds of millions of multicore chips this year, no one really knows how to program them. The problem of converting intrinsically-serial code to parallel remains unsolved. Here's my six-core PC's current state as half a dozen busy apps are running:


Jack G. Ganssle is a lecturer and consultant on embedded development issues. He conducts seminars on embedded systems and helps companies with their embedded challenges. Contact him at . His website is .

6 thoughts on “Top ten milestones in embedded 2012”

  1. Regarding number 1, the one that did not happen: I recently participated in a project planning session. The problem of scheduling the activities in the most efficient order based on information flow, estimated time for executing each task and the resources avai

  2. The only way “intrinsically-serial code” can be converted into parallel code is with algorithmic changes; such a drastic rewrite is generally not considered converting. There is much more hope for converting *implicitly* serial code into explicitly parallel

