Year-on-year trends are sure to grab your attention. People considering Linux? Down. Analog brand loyalty? Nope.
Embedded Systems Design conducts an annual market study, and it's quite comprehensive. Many of you already know this, because you participated in the study. In fact, four of you were winners of our random drawing (see “Four lucky winners walk away with USB turntables”).
The study was conducted on a global basis, mostly through e-mail in the early part of this year. Over 1,000 embedded systems designers responded to the study, which was sent to a subset of subscribers of this publication (both the domestic and European versions) and EE Times and to attendees of our Embedded Systems Conferences.
For those of you who aren't familiar with the study, we look at such areas as:
• What functions are included in your design?
• Which wireless technology will you deploy?
• How many simultaneous projects are you working on?
• How many people are on your team?
• What types of tools do you use?
• Which processor did you select?
• Which operating system did you select?
I won't go through the complete study here, but I'd like to share some of the parts that I found particularly interesting, troubling, and fascinating. Contact me directly (email@example.com) if you'd like more information about the complete study. The real beauty of the study is that we've done a nearly identical study for a number of years, so we can show the trends, the evolutions, the end-of-life transitions, and so on. That year-on-year data can be very interesting.
Here's one remarkable stat: from 2005 to 2007, the share of projects that are new designs fell from 48% to 39%, as Figure 1 shows. This drop is offset almost exactly by a rise in improvements and maintenance. What does that say? Could it be that projects are now so complex that they require more maintenance than in the past? Or are those designs better than previous models and hence living longer lives? In my opinion, it's a combination of those two factors, with the latter becoming more significant as the cost of producing a new product rises. But it's obvious that it takes longer to program each new generation of processor, simply because more functions are available to the developer. More code means more testing, so each phase of design takes incrementally longer.
To get a reality check on this question, I contacted Contributing Editor Michael Barr. Barr suspects that designers have changed how they define “new.” “It used to be that if you added Internet connectivity to your project, but it still did the same thing otherwise, people characterized it as new. Now, everyone who wants an Internet connection has it. So they view such changes, including application updates, as upgrades rather than new projects.”
Upgrade that CPU
For those designers working on an upgrade of an existing project, 56% say that they're employing a new processor (Figure 2). Is that because they need more performance or because the processor they'd been using is no longer in production? My guess is that it's a combination, but with a higher percentage looking for more performance. CPU vendors are very careful these days about how and when they obsolete a processor, since obsolescence is one of a system developer's biggest fears.
As we saw in a recent Freescale announcement, CPU vendors are trying to make it as easy as possible for designers to migrate to a higher-performance processor. In Freescale's case, the company claims that designers can easily migrate from 8-bit to 32-bit parts using its Flexis family of microcontrollers. This is a good example of an incremental performance boost. But in most cases, step-function leaps in performance can be had only by switching to a new family of devices, rather than just moving up the ladder within a family.
Here's one that caught me by surprise, at least until I gave it some thought: the number of people not considering Linux for their next project jumped to 48%, up from 34% the previous year and 27% in 2005, as Figure 3 shows. Keep in mind that while the number of people using Linux is relatively high, the number of people not using it who would consider using it has dropped off. I attribute that to the fact that it's not “new” anymore: a higher percentage of those who would consider it have already done so.
Editorial Review Board Member Bill Gatliff thinks that we're finally turning the hype corner on Linux and realizing it's not right for all applications. “People are getting realistic about it.”
Here's what Barr had to say: “I was surprised a few years ago at how strongly Linux came on. There always seem to be some interesting new technologies, but they are not always adopted. But Linux actually succeeded, and a lot of people were using it in telecom apps and so on, for stuff that looks like a PC. Clearly that trend is continuing, but obviously at a slower pace.”
It's no mystery what the number-one reason is that people are interested in Linux: cost (Figure 4). One reason people are shying away from Linux, however, is that the forecast and actual cost numbers didn't exactly add up. While the kernel itself may have been free or relatively inexpensive, support costs climbed faster than expected. And third-party tools were required to implement application-specific functions, which also added to the cost.
Commercial operating systems
Looking at commercial operating systems in general, there's a pronounced drop in their use (Figure 5). But surprisingly, this drop isn't balanced by an increase in the use of commercial distribution of open-source operating systems. That's potentially bad news for the operating-systems vendors.
Barr reasons that this drop is because “for operating-system technology, the cat is out of the bag. Fundamentally, every RTOS is the same as every other RTOS. What you need is a way to divide your problem into tasks and have sufficient computing power. Then you want to have a priority-based pre-emptive kernel. And they're all the same, whether you get your OS out of a book or with free source code included, or you get something else free. Unless you need that driver availability, or some special advanced features, you really aren't willing to pay for it.”
Is that drop due to users being unhappy with the support they're given (Figure 6)? A key influencer in the decision of which commercial operating system to employ is the quality and availability of tech support, and that figure almost doubled in two years, from 27% to 50%.
Languages and tools
The use of C as a programming language is increasing significantly in both current and future projects (Figures 7 and 8), mostly at the expense of C++.
Why is C, which is a relatively mature language, increasing in popularity? One reason, according to Gatliff, is that more designs are being outsourced. “The skill set required for C++ is stronger than what's needed for C. Especially when you consider that the use of Java only increased slightly between current and next project. I would have expected (and hoped) for Java use to increase. That said, I wouldn't say that an increase of around 8% means that people are abandoning the use of C++.”
Barr wasn't surprised by these results at all and says this is part of a continuing trend. “If you look at the year-on-year numbers, C++ doesn't add a lot of value. It actually takes away. Even though C++ can potentially be more 'reusable,' that doesn't necessarily hold true in the embedded space. So much of the embedded software ties to the precise hardware that's being implemented. You can reuse your APIs, but the guts of your code change from project to project. The first priority in the embedded space is that the system works properly. C does a fine job, and you can do a lot of great stuff with it, especially if you're looking at something safety-critical. C++ tends to introduce a lot of variables and make the project more complex.”
We asked what is the one thing you would improve about your embedded design activities (Figure 9). The winner, by more than a factor of two, is debugging tools. Looking at the year-on-year results for the same question, the number of people who responded “programming tools” dropped from 25% in 2005 to just 10% in 2007.
Gatliff says the analysis of this one is easy. “People just want better tools, period. The drop in programming tools could be because Eclipse is starting to address some of people's complaints about debugging tools. Also, there are now programming-tool vendors that offer tools that can do simultaneous kernel and application debugging under Linux, which, in my opinion, is an amazing feat.”
Barr says, “The key here would be scheduling, getting the product out the door faster. I'm surprised people are looking for better debuggers, because that doesn't really help you design. If you're spending your time in the debugger trying to find a problem, you're in trouble. I've actually spent hours watching people in the debugger not learning a thing, when all you have to do is reason out the problem, know how computers work. Then use the debugger to confirm your suspicions.”
Here's one that made me smile. I've been preaching to the processor vendors for years that it's all about the tools. It doesn't make any difference if you've got the best, fastest, lowest-power processor in the world: if you don't have the right ecosystem built around that processor, you will not succeed (Figure 10).
A topic that's been discussed a lot recently, particularly by yours truly, is the issue of outsourcing, whether that work is done here in the States or abroad. About 39% of the respondents say that they've been involved in one or more projects that were either partially or completely outsourced (Figure 11).
Of those outsourced projects, twice as many went outside the U.S. as stayed domestic. You get one guess as to which region most of that work went to. Time's up: India (Figure 12).
Other notable stats
Which phase of the design took the longest (Figure 13)? Testing and debugging is the winner.
Analog components vendors beware! When it comes to brand loyalty, you may be out of luck. According to the study, almost two-thirds (63%) of the respondents claim to have no brand loyalty when it comes to choosing their analog components (Figure 14).
Is this accurate? Not according to Planet Analog Editor Bill Schweber. “Frankly, these results contradict what I have heard from both engineers and vendors (who are admittedly biased). In my experience, designers usually look at the top two or three analog-part vendors in a category, especially ones that they have dealt with and have good parts that meet specs (max and min, not just typical) and support them with application notes, demo boards, reference designs, and even live applications engineers. They are hesitant to switch vendors and even product families from a vendor, if they have had good experience.”
While a good number of designers are taking advantage of programmable logic, another trend I've discussed a lot lately, the number who employ an embedded processor within their programmable logic is far too low, just 36%, with half saying they use a hard core and half a soft core (Figure 15). Thankfully, that number is on the rise, albeit slowly.
Special thanks to Jack Ganssle for helping me analyze some of this data.
Richard Nass is editor in chief of Embedded Systems Design magazine. You may reach him at firstname.lastname@example.org.
Go to www.embedded.com/columns/survey to find surveys from previous years.