Editor in Chief

Hi, I'm Rich Quinnell, engineer, writer (threatening to become a novelist), and editor of EE Times' Industrial Control Designline and EDN's Systems Design Center. In my spare time I still review plays as a part-time drama critic, and I still find time to dabble in circuit design for fun.


Rich Quinnell's contributions
    • Despite the competition from low-cost, low-power 32-bit MCUs, 8-bit MCUs are not only holding their own in the embedded market race, they are getting their second wind.

    • WiFi and ZigBee have limited range, and cellular can be too expensive and power hungry. So numerous alternatives for low-power wide-area networking have arisen to serve the needs of the Internet of Things (IoT).

    • The Weightless SIG has a new standard in the works for two-way, low-power, wide-area networking in the IoT.

    • TI's DesignDRIVE gives motion control developers a sandbox in which they can experiment with sensor and motor control topologies.

    • In his presentation at the upcoming ESC Boston, Dave Nadler will show how failures can help point the way to success.

    • Under a mild-sounding title, software safety expert Sean Beatty will be presenting a design teardown for a detonator at ESC Boston.

    • Bob, exactly the point. Our survey doesn't capture such use of the 8-bit MCU, which is part of why it shows a decline in 8-bit use. But that's a decline in 8-bit as the primary processor. What I am hearing, as you indicate, is that 8-bit is tackling supporting tasks instead.

    • There are some commercial providers of such globes. No idea on cost. Companies include Digital Globe Systems of CA, Pufferfish (UK), and Leurcom. Also, a research group out of the University of Sao Paulo created a design, called Spheree, using standard picoprojectors inside a globe, which they showed at Siggraph last year. If you're interested in the kinds of spatial transforms needed to convert to a 3D global display, Mathworks has a tutorial on the subject (a minimal sketch of one such transform appears after this list).

    • Interesting reasoning. Surveys and statistics like this are always subject to interpretation, and you make a good point: there could be a distortion effect. About 34% of the respondents said their last project lasted 6 months or less, another 33% said 7-12 months, and the remainder said longer than 12 months. So some distortion might arise from the effect you describe, but not as much as in your example. In any case, the 8- and 16-bit response has been in steady decline over the last five years of the survey while the 32-bit response has grown. One more statistic: asked if they had upgraded from an 8- or a 16-bit processor to a 32-bit processor in the last 12 months, 16% said yes from 8-bit and 17% said yes from 16-bit. In Asia, the numbers were even higher than the worldwide average: 26% in each case.

    • (Hit "post" too soon...) So by these numbers, about 16% of projects (40% of the 40% using multiple processors) might be using an 8- or 16-bit device in conjunction with something else; the arithmetic is spelled out in a short sketch after this list. Then, too, of the 40% who are using multiple processors, nearly half are using three or more. As you point out, many of those might be 8- or 16-bit. This survey asked about projects and says nothing about production volume. So it may well be that new 8- and 16-bit design wins (as the main processor) are in decline while sales figures are rising, due both to multiple-processor designs and to the production volume of 8-bit and 16-bit designs. I will also point out that when developers are asked what they are considering for their next design, the survey lists Microchip at the top in the 8-bit arena and second in the 16-bit arena.

    • You make a good point that the question relates to the main processor. I might add that the survey also indicated that nearly 60% of projects used only one processor (it might be multicore, though) and that only 40% of those using multiple processors used different processor types.

    • Also a UofMD graduate, I can relate to all you have to say here. The organic chem class was a waste. The odd thing was that the most useful classes turned out to be peripheral ones I took for other reasons. Phil 170, Introduction to Logical Reasoning, turned out to be of much more help than circuit theory when employment time came and I had to work with those new-fangled logic-gate ICs rather than with transistors.

    • There are two Radio Shacks locally, but the nearest one is woefully understocked on parts and I have essentially stopped going there. BTW, I used to live in Westminster, on Old Washington Road, in the early 80s.

    • To some extent it depends on what they mean by "coding." From what I have seen, the educators pushing this idea take coding to mean "whatever it takes to get a computer to do something." So kids may be learning Basic or Python rather than C. I think the idea (which is expressed badly, and from some ignorance about the realities of coding) is to get kids to gain a bit of understanding of how computers work. So it's really about learning a bit of computer science. As I understand the objective, because computers are integral to so much of what we do, knowing a bit about how they behave and what makes them behave that way could help kids understand something about the foundations of our technological society. Computers become something more than magic boxes that do stuff if you have an inkling of how they do it. We teach science to kids without expecting them to become scientists, so that they have at least an appreciation for how science works. Doing the same with computers seems reasonable to me. Having them learn what a program is and does, and create a few simple ones for practice, seems just as worthwhile as having them do simple physics experiments in school. The goal is exposure, not mastery.

    • Yes, I really think that. If I can see what my AC is currently costing, and can see that by adjusting it a degree or two I can save money, I may well decide to do so. My power company already adjusts pricing throughout the day, having warned me in my monthly statement that costs are higher during peak times. But without a smart grid I am "flying blind" in making my decisions and evaluating my results. I currently get a monthly bill, which comes far too late to provide effective feedback. The key point about the smart grid is that better information leads to better control and utilization, both for the consumer and the utility.

    • I think he would be surprised at the use of CPUs as control elements in embedded systems. His work focused on the computational aspect of these devices. Using them as part of a control system probably never occurred to him.

    • Making the grid smarter can change human behavior to the extent that it can provide more immediate feedback and consequences. If reducing my power usage during peak demand periods can save me money (or if not reducing it costs me money), and the system can tell me immediately what the consequences of my choices are, I am more likely to choose to conserve. Learning 30 days later that my bill was high due to peak-demand usage doesn't help much, as I cannot easily tell what I did that yielded that result (a toy pricing calculation after this list shows the kind of feedback I mean). The gee-whiz electronics may not be able to convince people to turn off their air conditioners, but it will give the power company a means of turning up those people's thermostats a degree or two to ease demand, or of imposing higher prices during high demand to encourage conservative behavior. Neither of these is possible or effective without the technology being discussed here.

    • Many airlines are offering WiFi on board already, so it seems to me that the effects of WiFi on navigation have already been field tested. I would expect that cellphones would not operate at normal flying altitudes, given the combination of shielding, distance from cell towers, and speed of movement from cell to cell. So I would expect cellphones to go unused mid-flight even if restrictions were removed, and lifting the restrictions does not seem like a risk to me. But I would be happy to see more testing done regardless.

    • On item 6 in your list you qualify the interest in adding IP to IoT designs with realtime, deterministic requirements. Can you give an example of an IoT design that does not use IP because of these restrictions? I was under the impression that IP capability was needed for reporting data and for system control, neither of which is restricted by the Internet protocol. The real-time needs of connected systems don't have the Internet in the loop, do they? And if they do, what protocol would they use instead of IP?

    • What was the specific problem with that Z80 board? Was it now generating EMI that was getting into other circuits, or was the timing off because the faster rise times meant earlier triggering?

    • The fact that this version supports a wide range of Microchip's device families is a key feature that will help make the company's products appealing to developers who don't want to maintain a different toolchain for projects with different processing needs. The time needed to wrestle with tools and software development is becoming a key MCU selection criterion; teams are choosing a suboptimal processor for their project just to avoid setting up a new toolchain.

    • For me the social media are an essential part of business communications. They keep me informed on what is happening now in my fields of interest, and they are a way to get noticed by someone who does not routinely check my website.

    • Yes, what about the overhead of having all this protection? What kind of memory resources are needed, what processing power, and how does the protocol overhead affect things like time awake and transmit duration for systems trying to survive on very little battery power? (A back-of-the-envelope battery model after this list shows why those durations matter.)

    • I've been advised that it is also necessary for endpoint devices to be configured to allow outbound connections only, so that there is not an open port for hackers to find. This requires them to periodically poll their server to see if there are any messages pending, as the server cannot initiate any communications (a minimal sketch of this polling pattern appears after this list). Do you agree with this precaution, and if not, how do you protect those devices that need to have open ports?

    • While there is plenty of processing capacity left in wireless modules, most of the ones I have seen are provided as black boxes, with no insight given into their internal operation so that the excess capacity could be scavenged. And many wireless module providers may not have the capability to open their systems and deal with the documentation and support issues that would come from making this information accessible and useful to the developer. So, which wireless modules are actually open for such use?

    • Whenever I think about the KISS principle I think in terms of keeping the user experience simple. But doing so often involves a considerable amount of design effort. So it works against the LAWS principle in my opinion.

    • Now that GUIs are so common, are there any good guidelines for the design of the graphics? It seems to me that it's really easy to create a confusing or frustrating interface (I have been subjected to some myself). There's a lot to consider. I recall seeing a presentation where one of the viewers asked a question about a slide that seemed really clear to me, but he didn't understand it. It turned out he was color blind and the color coding of the charts didn't work for him. Developers creating GUIs need to be aware of such issues. So, is there something that will help them? (One starting point for the color-blindness case is sketched after this list.)

    • Perhaps the answer is not to lessen the electronic control of automobiles, but to increase it. It's that pesky "liveware" that is the source of most driving accidents, not faulty software. Bring on the autonomous car!

    • Alas, it seems like most forms of print in the trade industry are falling by the wayside. It's good to know that this one is thriving, though. One thing print did well was to filter and validate information before presenting it (necessary because mistakes lasted forever), which took a team of knowledgeable people. Unlike many net-based "information" sources, sites like this one preserve that benefit. Keep up the good work providing places where today's engineers can go to get reliable, accurate, and useful information.
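
On the globe displays in the Spheree comment above, here is a minimal sketch of one of the spatial transforms involved: mapping a pixel of an equirectangular (latitude/longitude) image onto a unit sphere. The function name and image dimensions are illustrative, not taken from the Spheree work or the Mathworks tutorial.

    import math

    def equirect_to_sphere(u, v, width, height):
        """Map pixel (u, v) of an equirectangular image to (x, y, z) on a unit sphere."""
        lon = (u / width) * 2.0 * math.pi - math.pi    # longitude: -pi .. pi
        lat = math.pi / 2.0 - (v / height) * math.pi   # latitude: pi/2 .. -pi/2
        x = math.cos(lat) * math.cos(lon)
        y = math.cos(lat) * math.sin(lon)
        z = math.sin(lat)
        return (x, y, z)

    # Example: the center pixel of a 2048x1024 map lands on the equator at lon = 0.
    print(equirect_to_sphere(1024, 512, 2048, 1024))   # ~(1.0, 0.0, 0.0)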
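
For the multiple-processor survey comment above, the arithmetic behind the 16% figure, spelled out. The percentages come from the survey discussion; the rest is just multiplication.

    multi_processor_share = 0.40   # projects using more than one processor
    different_types_share = 0.40   # of those, the share mixing different processor types

    # Upper bound on projects that might pair an 8- or 16-bit MCU with something else:
    mixed_share = multi_processor_share * different_types_share
    print(f"{mixed_share:.0%} of all projects")   # 16% of all projects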
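
For the smart-grid comments above, a toy calculation of the kind of immediate price feedback a consumer could act on. All rates and load figures here are invented for illustration; real time-of-use tariffs vary by utility.

    PEAK_RATE = 0.30       # $/kWh during peak hours (hypothetical)
    OFF_PEAK_RATE = 0.12   # $/kWh off peak (hypothetical)

    def daily_cost(peak_kwh, off_peak_kwh):
        return peak_kwh * PEAK_RATE + off_peak_kwh * OFF_PEAK_RATE

    # Running the AC hard through the peak window vs. pre-cooling off peak:
    as_usual = daily_cost(peak_kwh=8.0, off_peak_kwh=10.0)
    shifted = daily_cost(peak_kwh=3.0, off_peak_kwh=15.0)
    print(f"as usual: ${as_usual:.2f}/day, load shifted: ${shifted:.2f}/day")
    print(f"savings:  ${as_usual - shifted:.2f}/day")

Seen on a monthly bill, that difference is just noise; seen immediately, it is a price signal the consumer can respond to.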
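
For the protocol-overhead question above, a back-of-the-envelope duty-cycle model showing why time awake and transmit duration dominate battery life. Every current, timing, and capacity value here is an assumption, not a figure from any particular radio or protocol.

    SLEEP_CURRENT_MA = 0.005    # deep sleep (assumed)
    ACTIVE_CURRENT_MA = 25.0    # radio awake/transmitting (assumed)
    BATTERY_MAH = 1000.0        # cell capacity (assumed)

    def battery_life_years(awake_s_per_report, reports_per_day):
        awake_s = awake_s_per_report * reports_per_day
        sleep_s = 24 * 3600 - awake_s
        # Average current is the time-weighted mix of awake and asleep draw.
        avg_ma = (awake_s * ACTIVE_CURRENT_MA + sleep_s * SLEEP_CURRENT_MA) / (24 * 3600)
        return BATTERY_MAH / avg_ma / 24 / 365

    # Protocol overhead that stretches each report from 0.1 s to 0.5 s awake:
    print(f"lean protocol:   {battery_life_years(0.1, 24):.1f} years")
    print(f"chatty protocol: {battery_life_years(0.5, 24):.1f} years")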
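
For the outbound-connections-only question above, a minimal sketch of the polling pattern described: the device never listens on a port and instead periodically asks its server for pending messages. The URL and response shape are hypothetical; a real deployment would add TLS certificate checks, authentication, and backoff.

    import json
    import time
    import urllib.request

    POLL_URL = "https://example.com/device/42/pending"   # hypothetical endpoint
    POLL_INTERVAL_S = 60

    def poll_once():
        # The device initiates the connection; the server can never reach in.
        with urllib.request.urlopen(POLL_URL, timeout=10) as resp:
            return json.load(resp)   # e.g. {"messages": [...]}

    while True:
        try:
            for msg in poll_once().get("messages", []):
                print("handling:", msg)
        except OSError as err:
            print("poll failed, will retry:", err)
        time.sleep(POLL_INTERVAL_S)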
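
For the GUI question above, one concrete aid for the color-blindness problem: use a palette designed to remain distinguishable under common color-vision deficiencies, and never rely on color alone. This sketch uses the widely published Okabe-Ito palette plus redundant line styles; matplotlib is assumed to be available, and the data is made up.

    import matplotlib.pyplot as plt

    # Okabe-Ito palette: chosen to stay distinguishable under common color blindness.
    OKABE_ITO = ["#E69F00", "#56B4E9", "#009E73", "#D55E00"]
    LINESTYLES = ["-", "--", "-.", ":"]   # redundant encoding: style, not just color

    series = {f"channel {i}": [i * x for x in range(10)] for i in range(1, 5)}

    for (name, data), color, style in zip(series.items(), OKABE_ITO, LINESTYLES):
        plt.plot(data, color=color, linestyle=style, label=name)

    plt.legend()   # text labels give a third, color-independent cue
    plt.title("Color-safe palette plus line styles")
    plt.show()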