Consolidating the MCU market around the ARM architecture

The Cortex-M3 processor core from ARM will likely trigger a consolidation in the microcontroller (MCU) market that usually occurs in large markets as they mature. The ARM architecture already dominates in embedded processors, ASICs, and ASSPs, so it's well positioned to become dominant in the neighboring MCU market. In fact, with some recent introductions, the MCU consolidation process has already started. The wise designer takes advantage of the inevitable by using a Cortex-M3-based MCU.
Many large markets go through a consolidation process as they mature, when the number of vendors and product offerings starts shrinking as the benefits of mass production and economies of scale outweigh the desire for infinite varieties of the fundamental product. This has happened in automobiles, computers, and office supplies. It's a natural evolution of broad markets.
In some cases, a few large vendors may swamp the market. For example, in the US, Home Depot and Lowe's took over the building supplies market, squeezing out the hundreds of small hardware stores that couldn't keep up with the attractive prices of the "big box" stores.
In PCs, it was stature rather than price that consolidated the industry. When a band of renegades at IBM assembled a "personal" computer, the industry decided that such small computers would be a force to reckon with. Realizing the layout of the IBM PC was fairly open for replication, dozens of other companies began making similar products, riding on the PC's meager operating system (OS) and common hardware components. Adding VisiCalc and a word processor put the PC on a one-way road to success.
There are many benefits to market consolidation, with end-user price reduction being the most obvious. When a product is manufactured in extremely high volumes, sourcing of material, labor overhead, and manufacturing can become so efficient that prices drop to all-time lows. But there are other benefits to users, such as service. A common platform, specified monitor, known keyboard, common OS, and readily available application software that would run on common hardware opened the door to a PC boom that has completely changed the world.
Is the ARM processor architecture a catalyst for market consolidation? ARM is certainly a dominant processor in virtually every application category, including PCs. While not the central processor in the PC, many peripheral functions have an ARM core as the embedded processor running them: printers, hard drives, network cards, etc. Even that USB drive in your pocket or memory card in your camera is likely to have an ARM processor in it. Your cell phone is almost certainly managed by an ARM core or two—or five, in the case of the iPhone.
The worldwide microcontroller market was $16 billion in 2007, according to Frost & Sullivan. The top two suppliers by revenue were shipping more than 10 different incompatible architectures into the market. According to Semico, 8.5 billion MCUs shipped in 2006. The MCU market has more than 40 suppliers feeding more than 50 architectures, with no architecture holding as much as a 5% share. Each vendor offers its own designs with little commonality, feeding a broad applications base.
In the past, when an architecture was shared among vendors, it reverted to the originator after a few years for any of several reasons: a licensee failed to offer many products, found the market less lucrative than expected, or started encroaching on the originator's market. The 8-bit 8051 is the only exception to this common story, but the large vendors never seriously promoted the 8051 architecture, and the few 8051 proponents that tried failed miserably in their attempts to migrate to a 16-bit architecture.
With this history, the microcontroller market is just begging for architectural consolidation.
It's all about the software
Dozens of factors uniquely drive the selection criteria for an MCU in each application: processor core, memory size and type, peripheral selection, price, development tools, OS, software, and support. It's hard to measure the importance of each of these, but the processor choice certainly influences many of the others: memory size, OS, tools, and support all depend on it. That choice also drives time-to-market; analyst firm VDC found that by 2004, 48% of total production cost was attributable to software development.
Code efficiency determines memory requirements, directly driving cost in the MCU market. Proprietary instruction sets offer limited OS options, because each OS port must be created or paid for by the architecture's owner, reflecting the amount of market support that vendor can afford. Development tools are designed for specific architectures, and support is allocated according to demand. MCU support is offered by the vendor, but third parties support the architectures that vendors pay for and the ones that are most popular. Driver software, libraries, off-the-shelf CODEC and protocol-stack software, JPEG and MPEG encoders and decoders, and numerous other software packages are available based on the prevalence of the underlying processor architecture.
A popular architecture like ARM sounds pretty good in this environment. There's only one catch: ARM is a 32-bit processor, and today, most MCUs are 8-bit.
Traditionally, the microcontroller industry has been driven by 8-bit needs, and in terms of units shipped, 8-bit still reigns. But 32-bit has grown fast and is rapidly catching up. Many broad-based vendors even have programs encouraging a direct jump from 8-bit to 32-bit MCUs, squeezing out their own proprietary 16-bit products. Since most 32-bit MCUs are actually 32-bit embedded processor architectures bolted to the peripherals and memory needed for controller applications, they benefit greatly from the tools, software, and knowledge base that have been honed by their bigger brothers.
The experience base of embedded processors gives 32-bit MCUs a wealth of benefits around the development of the end application. That means a significant reduction in the development time and effort needed to bring an application to market; an improvement in the confidence and the quality of the resultant system; and a reduction in the total cost of development and ownership of the end equipment.
A 32-bit MCU may not cost much more than a similarly equipped 8-bit part. If an MCU is divided into a dozen components, most being peripherals and some memory, the processor core portion is usually less than 10% of the die. Even if that part doubled when going from an 8-bit to a 32-bit processor, 80%+ of the die is still essentially the same, yet the capabilities and performance of the 32-bit MCU are much greater, possibly eliminating the need for one or two of the peripherals or the use of additional 8-bit MCUs to handle the application requirements.
In controller applications, the MCU's peripheral selection may be the most important factor, and that matters for the consolidation movement. The peripherals are determined by the application because they link directly to other hardware in the system, and the specifics of each peripheral determine how easy those interfaces are to use and how well they suit a given application.
How did MCUs get so fragmented, anyway?
It's not at all curious that so many vendors develop their own processor architectures, but it's astounding that so many vendors have so many incompatible competing processor architectures. A proprietary architecture locks the customer to the vendor with velvet handcuffs. After a software stack is developed for an architecture, it's difficult for the customer to shift to another vendor's products. There have to be compelling reasons to abandon the unique software the customer has written and move to another vendor's processor, where the customer is again locked into another proprietary architecture. The way that users break these chains is to move to a widely licensed architecture where their expensive software can run on a number of vendors' MCUs. ARM provides that flexibility.
At one time there may have been some significant benefit of one vendor's instruction set over another. The early days of the MCU market were characterized by a vendor focus on hand-crafted logic, and trying to expand support to a wider bus never translated well, especially when the register size and stack size were dependent on the bus width. At the time, all software was written in assembly code. That code was integrated directly into the application (no device drivers or higher-level abstractions to support portability), so the lack of maintainability and reusability didn't matter, since the peripherals changed with each new MCU anyway. Software engineers focused only on efficiency (code and data size) rather than reusability, maintenance, and improving time-to-market with a platform approach. As applications and MCUs have evolved, the processor core has taken a back seat to application-specific concerns: reusability of software, broad applicability of software tools, and easy access to a worldwide community of engineers, tools, and software to accelerate time-to-market.
Isn't "open architecture" another way of saying "commodity?"
For vendors to rally around a non-proprietary architecture like ARM, they must have somewhere else to differentiate. When peripherals are so important, the number and details of each are what determine the winning MCU. This is where each vendor can distinguish itself, choose the applications in which to shine, and focus its attention where it sees its strengths. The processor in the MCU is something the vendor can simply trust to do its job well. The vendor that best serves an application with the right peripherals, software, and support will win the design-in.
The most important aspect of the processor is the ready availability of software to program these peripherals and solid development tools. A proprietary architecture from one vendor doesn't lend itself to these properties, except possibly in narrow vertical market spaces. A processor architecture driven by a number of vendors (including many in the embedded MPU, ASIC and ASSP areas) naturally excels in depth and breadth of its support.
A few years ago, a number of traditional MCU vendors recognized the potential of the ARM architecture in microcontrollers. Its capabilities, performance, popularity, die size, and power consumption fit well with the direction of MCUs. And it was an instantly available 32-bit core, meaning the vendors didn't need to invest money and precious resources into developing their own proprietary 32-bit MCU. There were a couple of trouble spots, but the vendors developed their own circuits to fit the ARM processor to the needs of controllers. They had good success with their ARM7-based MCUs and some progressed to the ARM9 core for their MCUs.
These early days of ARM-based MCUs were a period of learning for ARM. The original ARM7 was designed for computation and didn't have the deterministic interrupt response time required by real-time embedded applications. ARM realized that control-centric MCU applications need deterministic real-time response, rapid startup and sleep response times, and enhanced bit control. In 2005, ARM developed the Cortex-M3 processor core, which has deterministic interrupt response and better controller features, just right for an MCU. Luminary Micro was the first to produce and market MCUs based on the Cortex-M3.
The original ARM7 core had already gathered momentum, with multiple ARM MCU vendors demonstrating the power of a single MCU processor used across many vendors, and this momentum hints that the consolidation has begun. Some of those original ARM MCU vendors have now also embraced the Cortex-M3 design. NXP, STMicroelectronics, and Texas Instruments have all stated they will build Cortex-M3-based MCUs even though they already have ARM7-based chips. It just makes sense to use the engine the market is demanding, so that the tools can be common and the software is transferable.
Other vendors are likely to follow suit, though with some hesitancy. It's difficult for a vendor to move away from a proprietary architecture. It's hard to leave all that investment behind, even if the licensed architecture offers real advantages. It also requires a lot of explaining to existing customers about split loyalties. But it's the right move for everyone involved. The engineers invested in updating and maintaining a proprietary architecture can be better utilized perfecting or expanding those precious peripheral circuits that most benefit the customers' applications.
Embedded designers win with consolidation
OEMs have a disquieting choice to make as well, though it's not as hard as it may appear. OEMs with products based on a proprietary MCU instruction set should face the dilemma head on. Continue programming in the proprietary instruction set, and even more code is piled onto the stack of hard-to-port programs. Make the move to an ARM-based architecture, and there will be no regrets: every line of code written to further enhance functions, add features, or extend options can live on, supported by ARM products across many vendors for years to come.
MCUs will continue to evolve. 8-bit MCUs are yielding to 32-bit MCUs. The Cortex-M3 core is the ARM architecture configured and optimized for microcontrollers. Its benefits are attracting other vendors to the Cortex-M3, including larger vendors, even some that already have ARM7-based MCU product lines, because they see the advantages of a common software platform. This movement and the success of the Cortex-M3 portend a consolidation of the microcontroller market around the ARM architecture, not unlike the PC's consolidation around the x86.
Jean Anne Booth is the Chief Marketing Officer and founder of Luminary Micro. With 24 years of high technology experience in executive management, marketing, and engineering, she is particularly skilled in product marketing and design-in creation. Jean Anne holds a BSEE from the University of Texas and an MSCE degree from the National Technical University. Booth can be reached at firstname.lastname@example.org.