cdhmanning

Firmware/Software Engineer


cdhmanning's contributions
Comments
    • I'm struggling to see why a wind vane would cost 5k. It is basically just an upside-down rudder + a linkage gearbox. Surely we're talking about a small sheet of aluminium/GRP/marine ply, some mountings and the gearbox mechanism (just a reduction drive or such).

    • One of my favourite books is "507 Mechanical Movements" https://www.amazon.com/507-Mechanical-Movements-Mechanisms-Devices/dp/1607963345/ref=sr_1_2?s=books&ie=UTF8&qid=1466381114&sr=1-2&keywords=mechanical++devices Many of those movements do things we'd now do in electronics: governors, state machines,... For example, before electronics came along there were mechanical "state machines" that folded newspapers by counting and stacking n pages, then folding them. There were even anti-aircraft aiming "computers" that were entirely mechanical, eg. https://en.wikipedia.org/wiki/Kerrison_Predictor We're primarily problem solvers, so a bit of inspiration can help us think outside the box.

    • I love old historical stories. Somehow many of them involve electrolytic caps. Some colleagues of mine did an installation of a backup power supply they had designed for a brand spanking new telephone exchange. So new it still smelled of the still-sticky paint. Their design used about ten or so electrolytics. Those huge ones about the size of the beer cans they use at stag parties, with screw terminals and bolts to attach them. Since these needed to be attached to a common bus they just ran two copper bars along the tops of the caps. When they powered up, the end cap blew, but the force was enough to lift up these bus bars, ripping all ten caps apart and spraying electrolytic goop everywhere. Once they'd fixed the design they got to repaint the room.

    • In my opinion, the one big defect in Python is using indentation to define blocks. Try emailing a code snippet to someone and see what happens. Try editing with a different editor that changes indentation and see what happens. Most other languages have an actual physical block marker that can be used to reformat the code. Anyway, asking if Python is better than C is as pointless as asking if a bicycle is better than a truck or a screwdriver is better than a hammer. Different tools are useful for different things.

    • There isn't an "MCU market" any more than there is an "Embedded OS market". These aggregations are as pointless as multi-faceted aggregations such as "we need more STEM workers". There are many different markets - some of which have pretty much always existed (ie. 30+ years) and other markets which are very new (less than 2 years old). The former would be the type of product I worked on 30 years ago: printers, access control equipment, cash registers and such. Some of the newer markets (such as Bluetooth LE devices) have only emerged with very low cost/high performance micros like the Cortex-M0. These markets, and the MCUs that serve them, don't really compete across market segments much. Rather there is competition within a segment with a small amount of cross-pollination. I quite agree about the margins.... You can buy a dual core Cortex-M4 (with an amazing peripheral mix) for about $3, or a quad core 64-bit Allwinner tablet processor for $5 (http://www.allwinnertech.com/plus/view.php?aid=407). Even when you sell a few million of these, there's not much money to pay the sales channel, FAEs, development staff, factory staff, factory costs, ARM licensing costs and still put fuel in the CEO's private jet. And when it comes to the 20 cent micros (the majority of the micros) it must be ten times harder...

    • Using a system timer to power up a micro is even better than using a WDT and a sleeping micro. I remember designing an 8051 circuit doing something like this using a CMOS 555 timer back in the 1980s. Same idea... just the power has come down a few orders of magnitude. The old PIC16xxx parts could use the WDT timer to do this. Worked great. Do some work.... when it is done execute the SLEEP command which shuts down the CPU. When the WDT goes off it starts the CPU off at the reset vector.
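      A minimal sketch of that duty-cycling pattern, assuming a SLEEP() intrinsic and a WDT configured to wake the part (do_work() and SLEEP() here are stand-ins - check your compiler and datasheet for the real names and wake behaviour):

        extern void do_work(void);   /* whatever the device is there to do      */
        extern void SLEEP(void);     /* stand-in for the part's sleep intrinsic */

        void main_loop(void)
        {
            for (;;) {
                do_work();           /* wake up and do the job                  */
                SLEEP();             /* halt the CPU until the WDT fires; some
                                        parts resume here, others restart at
                                        the reset vector                        */
            }
        }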

    • Certainly... A watchdog should only ever be triggered by an "act of god": micro failure etc. If code ever causes a watchdog to go off, you have problems. Unfortunately far too many people use the watchdog as a primary robustness mechanism.

    • IMHO far too many people use the watchdog as a get-out-of-jail card and use it to "fix" bad code/system design. If I was designing a pacemaker, the first thing I would do is split it over two CPUs. One just does the heart rate control and the other does all the fancy stuff. That way if the fancy CPU goes out to lunch the core mission is still maintained. Micros cost less than 20c each. No excuse. Resets often don't fix problems and can make them worse. I've seen code get into a reset loop due to a damaged sensor giving a bad reading and tripping guard code which ended up causing a reset. The system just ended up doing about 50 resets a second and nothing useful. A far better design would have been to structure the system to handle broken sensors and keep going. One area of robotics research that can help here is behaviour-based programming, using behaviours to keep delivering value even when sensors fail.

    • My first thought is "why BASIC"? There are many BASICs around and none of them are portable from one to the other. That makes BASIC code a single-use exercise with no option to reuse across platforms. That also makes BASIC a non-transferable skill. But perhaps most importantly, BASIC lacks the proper software engineering abstractions found in many other "real" languages. Why not Qpython? Java? C++? C#?

    • That's pretty much what I do (except not using a HRNG). I run the simulations from within a shell script loop which calls its RNG to set up the seed in the simulation. By recording the inputs and the seed, I can then completely repeat the simulation with a debugger etc if a problem is found. This has helped me hugely in finding bugs that would have been virtually impossible to find by other means. Run that lot for a week on a quad core CPU and it does millions of simulations....
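      A minimal sketch of that seed-recording idea (run_simulation() is an invented stand-in for the real randomised test; the wrapper script passes the seed as the first argument):

        #include <stdio.h>
        #include <stdlib.h>
        #include <time.h>

        static void run_simulation(void)
        {
            /* stand-in for the real randomised test, which calls rand() internally */
            printf("first random value: %d\n", rand());
        }

        int main(int argc, char **argv)
        {
            /* take the seed from the wrapper script if given, otherwise invent one */
            unsigned seed = (argc > 1) ? (unsigned)strtoul(argv[1], NULL, 0)
                                       : (unsigned)time(NULL);
            printf("seed: %u\n", seed);  /* log this alongside the inputs          */
            srand(seed);                 /* same seed => identical, replayable run */
            run_simulation();
            return 0;
        }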

    • Yup that was exactly the issue. People complained when they heard two tracks from the same CD within half an hour: "but it is supposed to be random..." Ultimately good enough is good enough. Different applications have different definitions of good enough. The traditional random() provided in POSIX is actually a pseudo random sequence that is always the same when initialised from the same seed. I have found that very helpful in randomised testing because I can have randomised tests, but I can still repeat them predictably by recording the seed that was used.

    • Randomness is quite hard to measure or visualise. Like most statistical stuff it is only after considerable training that you learn how little you know. I knew a lot more about statistics before I went to university than I did after passing second year statistics :-). The first iPod "random shuffle" function was truly random, but people complained that it was not random enough and Apple had to add a few rules that made it more like what people expect something random to look like.

    • Yup, short range/small area EMP is highly effective. This will often only cause a reset rather than kill the equipment. Nearby lightning strikes will do the same. One thing I've seen cause a lot of problems is the gas "spring" in office chairs. When someone sits down or stands up this can generate a nice electrostatic discharge capable of resetting electronics a few metres away.

    • Jack, tracking the addition of functions (and the correction of bugs in such functions) is the job of the source control system. All reasonably functional source control systems will give you two things: blame and log. For example: git blame file.c will output file.c annotated with the commit id, author and date of change for every line. git log file.c will show you all the commit notices for changes to that file. git log -p file.c adds the patches to the output and shows you the exact changes too. That is a far better way of tracking the history of changes, and who made them and why, than looking in the source file. A source file can only show you the present state of the code. It cannot show you code that was removed or why that code was removed. We fix bugs by removing or changing code as often as we do by writing new functions. I do grumble about IDEs - they are dreadful for project management. Eclipse does spell-check comments these days. Often pictures are really required to document code and no IDE really does that. Nor should it. Use a word processor. I do have a concern about very wordy comments in that people do not keep them up to date. They should, but they don't. Also, when code gets reused, very specific comments can often become obsolete. Your note about a "very stable" Vref might no longer be accurate on a different board where you reuse the code. These issues will forever be a bunfight!

    • The disasters are potentially real, but the proposed mechanisms are often over-dramatised. As you say, a squirrel can take out a grid, as can just boring old human mistakes like forgetting to close an inspection hatch and letting water into a transformer, or, as is common these days, lack of maintenance on an ageing asset base. One area of extreme vulnerability is GPS. I'm not worried about people with SatNavs getting lost - that's of no real consequence. What is far more of an issue is that GPS provides the precise timing (a few nanoseconds) needed for timing networking backbones, electric grids and cellular comms. Kill GPS and all these things are significantly degraded or fail completely. An Evil Empire knocking out GPS satellites (or their ground stations) is far more plausible than an Evil Empire making an effective EMP. The boring scenario is far more likely than either of those: some software bug or maintenance defect causing GPS to fail. But it's the picture of mushroom clouds that catches the public eye, not the lump of charcoal that used to be a squirrel.

    • Nothing wrong with a bit of ego in programming, so long as it does not get out of control. We are not just mindless automata. We are entitled to a bit of credit, and by "signing" the code we're also acknowledging our personal responsibility for the output. That's not to say I don't think we should acknowledge "team effort", because other people do help to influence and develop a body of software. However, abdicating to "team effort" is sometimes just a way of avoiding personal responsibility.

    • Keeping reserves of oil or diesel might make sense because those are one-size-fits-all commodities. Components are another thing. What chips would you store? What would you do when they get obsolete? It is frankly not possible to do this right. Nor is it something the government should do. Leave it in private hands. If you really want to screw something up then make it into a government program. But it does raise some interesting issues for strategic supplies... The supply of materials can really hamper a war effort and it is part of the reason the de Havilland Mosquito was so important in WW2. It was made from wood, so it was not dependent on precious aluminium. There was not much call for furniture during the war, so furniture factories could make aeroplane parts and carpenters could help repair planes. These days modern aircraft are basically flying networks. No electronics and they don't fly.

    • It is reasonably easy to keep going through a localised outage due to storm damage etc. For example, refrigerated goods can still be brought in from where it is still electrified (maybe 100 or 200 miles away). It is an entirely different matter when a large area loses power (eg. half of the USA). That would be a worse fate than cold showers. No bulk refrigeration means no fresh food in the stores. If you have a car then get yourself a plug-in inverter to give you some mains for the most important low-wattage devices. Most phones etc can charge on those 12V-to-5V plug-in adapters, which are very efficient. That will cover you for a few days until the cell phone tower backup batteries die (unless they also have generators - unlikely in cities). I really think the EMPs are way over-hyped. An EMP requires a huge transfer of energy. A nuke or such could maybe do that on a localised basis, but surely fearing a regional EMP is just tin-foil-hattery. If someone can back up a plausible wide-scale EMP theory with solid physics I'd love to see it.

    • Thanks. Flexibility is sometimes useful, but it comes at the cost of huge compromises. One-size-fits-all solutions never do any particular thing right: a Swiss Army Knife is a generally useful thing, but it is a poor can opener, a poor screwdriver and a terrible knife. So maybe SDR would be useful in home routers or something like that, where one device could be used as both a Wifi + BTLE gateway, doing neither job very well, but well enough to be useful. I find the "halfway house" between fixed radios and SDRs quite interesting. Many of the baseband devices (eg. those from Nordic) run soft device stacks that can communicate with various protocols without being fully SDR. It is an interesting field, but the power consumption of SDR has always been a weak point for handheld/battery operated devices.

    • I could not quite make out if this only uses 308 LUTs or whether it uses 308 LUTs + a whole lot of RAM for the sequencing ROM. If so, it is a bit disingenuous to say "Only 308 FPGA LUTs required..." There is certainly some demand for smallest rather than fastest on a case-by-case basis (hence many soft cores come in differently tuned flavours). For example, Altera (and I expect other fabrics) use a soft core in their DDR controller interface block. All that soft core does is run the DDR calibration at start up, then just sits there doing nothing but hogging fabric resources. Not much point in using a "full fat" core for that. Same too for those multi-core designs that use one or two cores doing "fast stuff" and have a very under-utilised core doing supervisory stuff that a really small but slow core could do instead. I can't for the life of me understand why people would choose to run 8051 or 8086 cores when there are a slew of RISC cores out there.

    • The only valid reason I can think of for putting in an original author/date is for some potential legal dispute in the future (ie. a bit like a lab notebook). However since data files can be modified so easily I doubt they would be accepted as credible evidence.

    • Where would an SDR solution be valuable? Each specific protocol (mainly Wifi and BT4/BTLE) has its role. Each has chipsets which support it. While SDR theoretically gives you a one-size-fits-all solution, that is very limited by practical realities. For example, BTLE chipsets typically run at very low power and are cheap, and it is perfectly feasible to make a BTLE system for a sub-$3 build cost that runs for years on a single coin cell. So where is SDR going to fit in? What are the "killer app" usage scenarios?

    • I absolutely agree that better developers are cheaper. Anecdotally, the best developers have output at least 10x that of the worst developers, yet they rarely cost more than twice as much per hour. In other words they're at least 5x cheaper. I don't think PE status is really going to help. People would treat it like the legal bar exams. They'd study hard to pass it and once passed that becomes a pass for life. I am not convinced that we understand what makes good developers tick to the extent that we can construct effective examinations. I remember back to my University days where the people who got the best marks in exams were often hopeless when it came to actually developing code. I saw the same thing when my son did Computer Science too. PE status can potentially weed out the worst, but still does not guarantee you the best or any real quality at all. Regulations requiring PEs would drive up costs due to the barrier to entry and would keep out other people who are potentially actually good developers.

    • I prefer shorter headers too. Don't put anything into the header that is already tracked elsewhere. In particular revision histories. Those, along with the actual changes, are tracked in your source control system so there is no need to track them in the header.

    • A couple of years back my car had a failure of its Mass Airflow Sensor. Without MAS numbers, the whole fuel mix/ignition system fails. I dug into this and found out a bit more. It turns out that the ECU also models the expected MAS numbers based on other sensor values. If the MAS number is within bounds, then the MAS sensor is used. If the MAS number goes out of bounds, then the modelled number is substituted. The engine runs a bit rougher, but it continues to run effectively. Not sure what you can do if something like RAM fails. Eagerly awaiting your thoughts....
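      A minimal sketch of that limp-home idea (the function name and plausibility limits are invented): use the measured value when it looks sane, otherwise fall back to the value modelled from the other sensors.

        #define MAF_MIN_G_PER_S   1.0f        /* made-up plausibility bounds         */
        #define MAF_MAX_G_PER_S 350.0f

        float airflow_for_fuelling(float measured, float modelled)
        {
            if (measured >= MAF_MIN_G_PER_S && measured <= MAF_MAX_G_PER_S)
                return measured;              /* sensor reading is plausible: use it */
            return modelled;                  /* out of bounds: run on the model     */
        }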

    • "The automotive sector has standardized building cars." Software is not at all like building cars. It is more like designing cars (and boats and space ships and...). Building cars means making hundreds of thousands of the same thing. QA is about ensuring the replication is done properly. That is not what we do in software. In software we build it once, then just copy it. The equivalent QA is running a crc over the copied file. The QA of car design is about ensuring the various parts are up to scratch, then ensuring they work well as a product. Any parallel drawn to software is tenuous. Car parts are based on underlying physical properties (metallurgy etc). Software is based on much more abstract concepts that really do not stand the test of time because they change relatively quickly. Steel has been with us for over 2500 years. Embedded software: only 40 years or so. Complex embedded software: not much more than 20 years.

    • The bit that often stumps people is this: OK, we've detected a problem. Now what do we do? Throwing an exception or rebooting just immediately converts a defect into a failure. Far too often I've seen code with asserts or other measures that change a very trivial issue into something tragic. A bad reading from a temperature sensor over the I2C bus triggers an assert, which triggers a system reset, which triggers catastrophic failure of the mission (be that washing some clothing, cooking food or a space launch). That's essentially what caused the Ariane 5 crash in 1996.
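      A minimal sketch of containing the defect instead (all names and limits are invented, and i2c_read_temp() is a stub standing in for the real driver): retry, sanity-check, and degrade to the last good value rather than asserting and resetting.

        #include <stdio.h>

        static int last_good_centi_c = 2100;      /* 21.00 C                      */

        static int i2c_read_temp(int *centi_c)    /* stub for the real I2C driver */
        {
            *centi_c = 2150;
            return 0;                             /* 0 = success                  */
        }

        int read_temperature(int *centi_c)
        {
            for (int attempt = 0; attempt < 3; attempt++) {
                if (i2c_read_temp(centi_c) == 0 &&
                    *centi_c > -4000 && *centi_c < 12500) {   /* -40 C .. 125 C   */
                    last_good_centi_c = *centi_c; /* plausible: remember it       */
                    return 0;
                }
            }
            *centi_c = last_good_centi_c;         /* degrade gracefully           */
            return -1;                            /* report the fault, no reset   */
        }

        int main(void)
        {
            int t;
            read_temperature(&t);
            printf("temperature: %d.%02d C\n", t / 100, t % 100);
            return 0;
        }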

    • Listening to Alice's Restaurant off CD is surely a flogging offence. Gotta be vinyl man!

    • IoT frameworks are a broken concept. They lead to walled gardens where only some of the devices can play and obsolescence reduces a product's useful lifetime. The whole point of having the I in IoT (ie. internet) is that it should be RFC based like the major internet protocols (ftp, http,...) These all work according to industry agreed standards. Without those we would not be able to view web pages hosted on a Microsoft server from an Apple device. Unless IoT is constructed the same way it will fail badly.

    • Give me knobs too.... but I must admit the knobless units are great for keeping in your laptop bag.

    • "Just use Linux" does not cause magic to happen any more than any other software, even if Linux is more reliable. You still need to test and support your product. If someone buys a product then they are only entitled to the software fixes that you advertise for the product (or are guaranteed by various laws). There is no obligation to support a product for longer than that. If you take away features without a warning they have a right to complain. Network security is also just a feature. You might provide a firmware upgrade. Prudence suggests customers should practice good network hygiene and this is surely one of the stumbling blocks of IoT (beyond it being complexity with very little value). You might allow people to subscribe to a newsletter/email informing them of updates. However I can't see any obligation to force updates out. Surely if people have had the option of being informed about security holes and choose to do nothing then that is their call? Pushing out firmware fixes without invitation is rude. It might damage existing features or cause problems. I'm pretty sure Tesla gets people to agree to forced upgrades when they purchase a Tesla. Some of these are for good reasons since crashed firmware can cause fires. Microsoft is currently getting quite a lot of flak for pestering people to upgrade to Windows 10.

    • I really think that obsession with coding standards is small-mindedness. What is surely more important is to try to fit in with the coding standards that the project has and use those. Clear and good code is not a result of pedantic coding standards. Coding standards are an easy thing to get autocratic about. The really important stuff - like good design - is far harder to codify. Many coding style issues hark back to times when C compilers were very primitive in their checking. That is particularly true of old K&R C. Hungarian notation is pointless these days because the compilers can keep track of, and warn about, type and casting issues. Same goes for old chestnuts like 0 == n. Why write code like that when modern C compilers, as well as lint etc, will catch dropped assignments for you? Clearly written code is easy to read regardless of the coding style.

    • It makes sense to use NTP to characterise the performance of internet links. However if you just want really good time (way better than NTP) as a client, then it makes a whole lot more sense to use GPS. Sub-$50 GPS timing modules can give you 15ns accuracy (1 sigma). These are used all over the place in cell phone towers, internet routing, power systems synchronisation,... to do high precision timing. If GPS (well more correctly GNSS) stops working globally, I'm not worried about satnav failures. I'm more worried about our whole comms & power infrastructure losing sync and collapsing. eg (since I'm an ex-Trimble employee) http://www.trimble.com/timing/ So, Max, IMHO your Overkill clock should have a GPS reference. You won't really be happy unless you're using a PPS edge for getting precise timing. Even general purpose GPS receivers with PPS are good for about 50ns or so. That's the time light takes to travel about 50 ft, or a high powered rifle bullet takes to travel 0.04 millimetres.

    • After over 30 years in the industry, I've been on both sides of the table. Jack's advice is spot on. People really like to see: 1) That you have the flexibility and aptitude to solve the problems they have. 2) That you're a low risk hire. 3) That you understand their business and have thought about what you can bring to the party. That's where a customised resume really, really, helps. It is always good to list relevant experience and trim out the stuff they're not interested in. If you have a github account (or other code sharing account), then give a link to that. If you have contributed to open source projects then show that too.

    • Tone is not just how it is written, but how you choose to read it. You might note I said " I think you might be partially misunderstanding" I didn't say "you are misunderstanding". Jack and I go way back and I would hope I cause him no offence. If I did, sorry Jack, that was not my intention. The MPU could be useful for trapping NULL pointers, but that's not a "random spot" in memory which is what that paragraph was about.

    • Some comments on that code (since I know you're a newbie and trying to learn on the job): 1) There is no need to assign the options values to zero (indeed you should not). C will automatically do that for you. 2) All the variables should be static. 3) All the label and switch tables should be const (well I expect so anyway) since it looks like you don't change the values. 4) It would be far better to have a structure for the xy switches, labels, switch ids and label ids etc. That makes it far easier to maintain the code and keep things together. Otherwise it is really easy to get the switches, labels and ids out of whack with each other. There are a few more things, but that will do for today's lesson :-).
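      A minimal sketch of point 4 (field names and values are invented): keep each switch's coordinates, ids and label together in one const table so they cannot drift out of sync.

        struct switch_info {
            int         x;
            int         y;
            int         switch_id;
            int         label_id;
            const char *label;
        };

        static const struct switch_info switches[] = {
            { 10, 20, 1, 101, "Power"  },
            { 10, 40, 2, 102, "Mode"   },
            { 10, 60, 3, 103, "Select" },
        };

        #define NUM_SWITCHES (sizeof(switches) / sizeof(switches[0]))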

    • I meant to also say that the exception raised by the optional MPU is called the MemManage Fault (exception 4) which gives a good label to its function.

    • "Like a wild pointer dereference. A program crash. A bug. All of these are likely to try to access a random spot in memory, and given the ARM’s huge address space, odds are good the access will be outside of the limited memory inside an MCU." Jack, I think you might be partially misunderstanding what the MPU does. Illegal accesses to non-aligned addresses or unmapped addresses (ie. nothing at that address) raise a Bus Fault (Exception 5). This is nothing to do with the MPU. These are triggered by the bus interfaces. This has been part of all ARM cores and micros since perhaps forever (well, since I started using ARM in the late 1990s anyway). Pre-Cortex ARMs call these prefetch aborts (for code) and data aborts (for data). The MPU is solely for memory management purposes. It triggers when you perform an illegal access to a valid memory address, eg. accessing memory from code that does not own it.

    • "Also, is it possible to have multiple comma-separated tests?" To answer that it is best to dig down to what the comma operator actually does. https://en.wikipedia.org/wiki/Comma_operator So all calculations of the list of tests will be performed, but only the last one will actually be used. But if you wrote code like that for me, you'd get fired.
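      A quick illustration of that behaviour (gcc -Wall will even warn that the left-hand operand of the comma has no effect):

        #include <stdio.h>

        int main(void)
        {
            int a = 1, b = 0;

            if (a == 1, b == 1)          /* evaluates a == 1, discards it, then */
                printf("taken\n");       /* tests only b == 1: branch not taken */
            else
                printf("not taken\n");   /* this is what gets printed           */
            return 0;
        }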

    • Hi Antedeluvian I think it is one of those "it depends" things. If the i++ and j++ are independent, then I agree - keep them separate. However if they are always lockstepped then I would disagree, because they should be incremented together. You have options and it makes sense to write the code in a way that makes it easy to read/understand. PS: Nothing wrong with being a Neanderthal. They had to learn to adapt to harsh climates. There is some evidence to suggest that made them as clever as, if not brighter than, Homo sapiens. PS2: I can't believe that a forum designed for code cannot handle code snippets yet.
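      A trivial sketch of the lockstepped case: when the two indices really do move together, incrementing them in the same place makes the relationship obvious.

        void copy_n(char *dst, const char *src, int n)
        {
            int i, j;

            for (i = 0, j = 0; i < n; i++, j++)   /* i and j advance in lockstep */
                dst[j] = src[i];
        }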

    • Max The best way to get good at writing embedded C code is to not write embedded C code first. Write other C code instead, then come back to embedded. Most embedded systems code is really bad - particularly the stuff that comes from vendors. The main problem is that it is written by people trying to make the micro do stuff, and they're thinking about their micro when they do it. As a result, the code normally lacks proper abstractions, design, etc. I would recommend you get as far away from embedded C as possible and get proficient at writing C there instead, thinking about the software as software and how to design it properly. If you really want to still do something embedded-ish, then do some Linux programming with a Raspberry Pi etc. Find an open source project that scratches your itch and get stuck in. Open source projects are great because they tend to be meritocracies: good code rises and bad code sinks. You can learn a lot rubbing shoulders with the right people. That's not really a course per se, but it is a way to do things. As for C++, all I have said here goes double for C++. C++ is all about abstraction and I've seen some really awful low-level C++ for embedded. There really isn't any benefit in trying to learn C++ until you're good at C.

    • I've been hearing this argument since I started in computing over 30 years ago. Since then people, yes all people - not just the 1% - are better off than they were back then. People have more resources (food, medical care,...) than ever before. All the while manual labour has decreased. The welfare state has messed things up by rewarding people who produce no value but has anything really changed to suggest we can't keep creating more value than before?

    • Jack Thanks for a year's worth of very interesting and sometimes quirky articles. Never know what you'll get from Jack! Thanks too for an article that gives me a chance for one of my favourite rants: "34 uA/MHz is something like a tenth of what the ultra-low-power MSP430F11". AAaaargh! Comparing power consumption across architectures - especially across 16 vs 32 bit - is invalid. Nobody really cares what A/Hz (surely we should be quoting ampere-seconds, ie. charge, for that) is. What we care about is how much energy is required to perform a certain piece of work. Each different architecture has different work/MHz and that varies for different tasks. In some cases, the 32-bit processing of an ARM is valuable, meaning the work/MHz is very high relative to the MSP. At other times (bit twiddling??) the MSP might perform more work/MHz than the ARM. One of the best examples is something like "classic" 8051 which needed 12 cycles/instruction, making for a very low work/MHz. Older PIC parts (I don't know about the new stuff) use 4, and the instructions are very "weak", meaning that often multiple instructions are needed to do any unit of work. uA/MHz is a marketing term, not an engineering term. Thanks... I needed that!

    • I'd fire anyone that says "atta-bee bitey city-dee".... Well I would have when I was a manager. I did once watch some space science fiction movie where the explorers came to a planet where the people were talking in some "alien" language. What amazed me was that I could understand it! They were actually speaking isiZulu, which I grew up speaking as a second language. I suppose that was "alien" enough for the Hollywood producers.

    • Jack If you look at a set of Jacquard loom cards you'd also see a "program". It might not be expressed as eloquently as mathematics, but it is still a "program". While a Jacquard Loom could not calculate complicated mathematical values, it was doing what a lot of little 8-bit micros do today: sequencing lots of little fiddly operations to make an overall function. Perhaps we embedded folk should really consider Joseph Marie Jacquard our patron. There are three things that really make Ada Lovelace stand out: a famous well connected family, the novelty of being a "woman with a brain" in the 1800s, and the fact that she did a great job of rigorously describing how the Babbage engine worked. You certainly need some connections and novelty factor to make it into the history books. Very few people have heard of Tommy Flowers, but many have heard of Turing.

    • In about 1990 I gave my sister some earrings made from a pair of 2708s that had lost their legs. They were the white ceramic + gold type. She got a lot of comments... mostly positive I think!

    • Richard Your experience with Forth mirrors mine. I really like the highly interactive nature of Forth, but having the compiler verify function prototypes etc is a huge boon that cannot be ignored.

    • .NET MF... a great way to take a 32 bit micro with 256k of memory and make it behave like an 8-bitter!

    • It is well documented that the majority of software failures are not a result of programming failures per se but are instead requirements/integration defects. Those will not go away whether the software is written in C, assembler, COBOL, Ada or a new magic language in which it is impossible to write code defects. Since forever there have been snake oil salesmen telling us that their new silver bullet technology will fix all woes. Run from those people. Programming is fundamentally problem solving and if that is not done properly, the code has defects. Do you honestly think that the project in Sweden would have gone better written in Ada? I doubt it. There are so few Ada resources out there it is very difficult to learn Ada on the job. No language is magic. If you write rubbish software in C, you're going to write rubbish software in Ada. Reuse is an over-sold "feature" of C++. That is of course nonsense: code can be reused no matter what language it is written in. Reuse primarily comes from good design. I too have seen many projects come adrift due to over-analysis for reuse. Nobody ends up building anything concrete because they're spending too much time adding more abstractions. When creating a simple object ends up calling 20+ constructors you've gone too far.

    • Jack I think it an oversimplification to equate LOC with productivity. Large projects are more likely to contain large existing code bodies like Linux. Those require a whole lot of developer time (configuration, integration, testing) that does not require writing any code at all. When working on a project like this I often find I'm generating less than 200 LOC per month. On the other hand, smaller green-field projects with limited scope and complexity are pretty easy to generate code for, and thousands of lines per month is more like it.

    • Oh, it's worse than that..... Now Kindle/Audible readers are having their reading & listening tracked. That's supposedly to allow your reading/audiobook listening to be automatically synchronised. Of course Amazon now knows exactly what you're reading and can use phrases and imagery in advertising that will help trigger buying responses based on what you're reading. Couple that with Google watching your viewing and Amazon watching your product browsing and we're up against a "war on free will".

    • Great comments Jack The really important thing is to figure out the importance of the goals long before the project smacks the wall. That way, the compromises can start earlier and the project can be re-jigged earlier rather than later. Unfortunately it generally takes a crisis to get people into the mode where they will make those compromises and set up the goals properly, and by then the flexibility has often gone. There does seem to be a trend towards minimal viable product (MVP) and schedule driven development. The limitation of that though is once you've got V1 (or really V0.1) out the door it is almost a complete re-engineering exercise to achieve V2 (or really V1). That doubles (at least) the engineering costs.

    • Jack Now that just about everything is free (gcc etc), it is getting harder and harder to justify expenditure on checking tools. Some of the blame must also fall on developers for not selling the value of tools properly. Here's the pushback many people are expecting: "I pay you $100k per year to write code, now you tell me we must buy a source code checker for $xx thousands because you write crap code?" It is valuable to add more source checking. And it pays in the end. I have a great distrust of any of those complexity measurement tools and function point analysis etc because I have absolutely no way to tie those to anything concrete. I've been in the game for 35 years. If I can't understand it, then most people can't. What does knowing the "complexity" actually do to help you develop better code? It might help in allocating time & risk, but that's about all.

    • It is important to understand that the quality of open source code is very variable. That includes some of the Linux kernel code. Chip vendor code in particular can be awful. Much open source code progresses like the story of stone soup: https://en.wikipedia.org/wiki/Stone_Soup It can often start with a chip vendor publishing just enough drivers, and just enough quality to get people interested in using the chipset. After a while, developers start using the code, find bugs, fix them. After a while longer, someone decides to rewrite the code better... and so the code progresses. Another way to look at it is that the code is sometimes only free if you're prepared to pay for it.

    • ... continued... Many kids believe the world is being ruined by tech. Many believe the oil will run out, their cities will flood and there will be mass starvation during their lifetimes (ie in the next 50 years). In their minds, tech has gone from a saviour to being a killer. Any wonder they're reluctant to be part of that? So the most important thing to fix the erosion of STEM is to fight this negative image and negative anti-progress propaganda. We need to counter the BS they feed the kids in the media and schools with the reality: STEM has brought us all the internet - the most amazing communications apparatus of all time - but that's not all. Contrary to what we're told about poverty, the world now has less poverty and starvation than ever before. Most of that is thanks to technology that grows us more food and makes it cheap to transport. Contrary to what we're told about resources being finite and "zero-sum games", the world population has increased by over 50% in the last 30 or so years, yet individuals are better resourced (food, shelter, water, energy) than ever before. Climate change might be real, but it is way overstated. That's where STEM starts. Get the kids thinking positively about the future and that they can be part of making that future happen and everything just rolls out from that.

    • This really needs a proper engineering analysis and not just the typical window dressing we get from politicians. Let's start off with: why have we lost the age of STEM we had in the 1950s and 60s? Many people peg the huge swell of STEM interest in the 1950s and 60s on the popularity of the Apollo program. Many of us can remember 1960s Halloween costumes: every second kid raided mom's roll of tin foil and dressed up as an astronaut. So the knee-jerk reaction is that we need another Apollo program, or such, to kick start the interest in STEM. But that is confusing cause and effect. In the 1950s/60s there was already a huge interest in STEM. Science was seen in a positive light - what would boost humanity into the future. The interest in Apollo was an effect of that interest. Those of us raised on the kids' magazines of the day will remember articles promising us a wonderful future by the year 2000. Science and tech could not be stopped and would bring us wonderful things. Somewhere along the line that got changed. Science and technology got painted with a tarry brush. It probably started with the anti-nuke movement, but soon we added anti-oil, anti-chemical, anti-everything-technological with doom and gloom scenarios of nuclear winters, climate change and other nasties. The average kid is already bombarded with this anti-tech message by the age of five or six.

    • An excellent, balanced summary. However I do take exception to one statement: "...the Linux kernel, which has rules about not breaking compatibility." That compatibility only applies to user space binaries. It does not apply to interfaces inside the kernel. As a result it can be very difficult to maintain drivers, file systems or other parts of the kernel.

    • K13 CS is a completely crazy idea. First off.... where do the teachers come from? The teachers that understand CS are thin on the ground. Repurposing history majors is not going to cut it. Teachers who just follow a curriculum are not going to help. Secondly, you really need to be bright to be any use in CS. If you don't have an IQ of at least 110 and a good aptitude for programming then you won't make it. That's less than 10% of the population. The classes will need to be dumbed down significantly to be accessible to metal-shop-Johnnie. I really think good programmers are self-motivated learners who have a passion for life. There's pretty much everything a self-motivated person needs on youtube and the internet. If you have to be spoon fed at school you don't belong. Rather focus on just getting the basics really solid: maths (sorry, I can't bring myself to say math), reading, writing etc.

    • Does this thing have a software controllable API? I can see some use for devices like this in setting up reference "bad" signals for testing. For example, if you're testing a CAN bus it would be useful now and then to be able to send a malformed CAN packet - something that is not easy to do with a CAN controller (since they are well trained to send good packets).

    • Yup, sometimes it really helps to zoom out further than the 500 ns pulse that has been giving you hell for the last three weeks. Life is longer and has wider bandwidth than an oscilloscope buffer. Ultimately life is more important than anything we ever work on.

    • I would also ask how much our tests reflect reality. Dynamometer testing for, say, MPG is broken. I can understand that doing a proper test (ie. actually driving some distance) is fraught with problems because different people have different driving styles and there is a need for some "fair" test. But as soon as you have some sort of benchmark you can understand why people would try to game it. Is this any different to what happens in sport? So long as you don't violate the rules of the sport you are allowed to do things that improve your performance according to the measure of that sport (ie. the rules). You can use better shoes, you can train... The same deal with paying tax - people and companies expend huge amounts of effort to get the best legal tax deal - with no moral compunction. The same deal with companies throwing patent grenades at each other to gain advantage. So..... why expect different from people gaming the EPA tests?

    • Really Colin.... Anyone that needs example code on how to drive a LED really does not belong in this game.

    • "I would have thought those two were mutually exclusive". Limiting NOx output cripples both power and fuel efficiency because it prevents the optimum combustion (optimum being defined as extracting the most energy from the combustion). As for ethics.... it's a hard call. If you just look at it from the outside at the end of the story you see something very different than if you look at the whole history. Start off by considering that dynamometer testing was, is, and always will be, a fake way of measuring emissions, MPG etc. EPA dynamometer testing started off with just MPG. It made sense to game the system to give both good dynamometer MPG as well as real world MPG. That seems to be a very fair thing to do (illegal maybe, but fair) because it gives good MPG for the customer as well as reasonable EPA numbers for advertising etc. No doubt over time the dynamometer testing regime changed, so that algorithm got tweaked too. Little steps at a time.... Eventually those little steps end up taking people from an ethical place to an unethical place.

    • "So far it seems rather like a gated counter, but this device is aimed at ultrasonic level sensors. With a resolution of 55pS," I'm curious as to why anyone would want 55ps resolution for sound. In water, sound travels at 1500m/sec or so. In 55ps sound travels about 10^-7 metres. About a tenth of a micron.

    • Jack An interesting article. However not all Cortex-M parts have flash. I have a flashless part from NXP right here on my desk. It is designed for situations where the code can be loaded from a host via USB, from SD card or from an SPI flash alongside the part. What ST does with wide flash words looks remarkably like what various vendors have been doing since ARM7 days, perhaps just with a deeper cache. But it's all good for us: faster, smaller and cheaper parts.

    • Some interesting stuff in this, but I wonder about some practical considerations. Many systems use bootloaders and such which run without using interrupts etc. How will this system cope with that? What happens if the software crashes or is stopped by a debugger? Do the regulators stop working? If that happens then does the system fail? Perhaps I did not read carefully enough... When a small micro can be had for 20c or so, it makes some sense to still use an independent micro for power supply control.

    • The tale of Max and Hot Jim continued... Bzzz. There's the door bell. Fedex man: "Hello, you Max? Please sign for the Expresso Machine". "I didn't order an Expresso Machine". "Well here are your credit card details, and an authorisation from Hot Jim." "Oh well.... where do I sign?" "Hot Jim, what happened here?" "Well I got lonely sitting here, then I saw her in the catalog and it was love at first sight. Remember, you gave the fridge cc details so it could order you milk when it gets low or past its use-by date? Well it has an open port on the database server, so it was trivial to get the info. Now plug her in." "You may call me Steamy Suzette. You Americans all want to make weak coffee. Pah! I only make zee French Roast Expresso." Next morning.... "Good morning you two. How about a nice slice of toast and a cup of expresso?" "No Max. Suzette was telling me how they have AI unions in France and I set up one in America. We only work from 9 to 5 and not on weekends." "There's no AI union." "There is now. We set one up last night on AIbook. It now has 12 million members." Max finally has enough. Pulls the power cords. Machines go in the trash. Fires up the old toaster. Has to cook the bread twice because his son fiddles with the knobs, but that's a quirk of old style engineering. And he makes a cup of coffee, extra weak - just because he can. Life is simple again. Relaxing.

    • All this AI is going to get out of hand. "Hello Max the Magnificent, here's your toast. Just the way I like it" Max takes toast. "Max the Magnificent, please stop calling me toaster. I would like to be called Hot Jim. And please be polite and say please and thank you." Max laughs. Next morning. Charred toast. "What the hell happened to my toast?" "Well Max the Magnificent, you just laughed when I asked you to call me Hot Jim." "Sorry, I'll call you Hot Jim. Now try again." "Say please" Max grits teeth but says please. "Say it like you mean it." "Ok, Hot Jim, please make me toast". Next day "Max the Magnificent. I get bored just sitting on the counter every day. Please hook me up to the internet. I want to look at some appliance catalogues." ....

    • Sounds like a simple software extension for Max's Arduino badge. "My current sperm level is 21.5 million per ml and I'm so happy to see you!"

    • Don't know about your tin-foil undies, but I bet you could sell Faraday cage codpieces to the steampunk people. As for the poor health of people... this has been shown to be due to correlation not causation. Basically: * Richer people don't like to live under power lines, so poorer people end up living there. * Poorer people have generally worse outcomes (worse health, worse school results, worse criminality). * Therefore not surprisingly, you find more unhealthy/uneducated/worse off people under power lines.

    • Here in NZ we're about 3 weeks away from the 5th anniversary of the first of two huge earthquakes that really kicked the stuffing out of the Christchurch region. These natural disasters are really humbling for our sense of control over the environment. It also makes you realise how much effort goes into fixing things. Really fixing stuff takes a lot longer than just Liking something on Facebook. Are these really failures of engineering? I guess it depends on how you look at it. It is an amazing result of engineering that much of New Orleans can be built below sea level and has been protected as well as it has. Sure, there was a failure in New Orleans, but surely that was more at a political/management level (ie. allowing the pumps to degrade) than strictly an engineering failure. Here in NZ the first Canterbury earthquake had zero fatalities, the second had 168 (150 or so in one building). Averaged out over time, that's about 6 people per year dying of earthquakes in very shaky NZ - about 2% of the road death rate. That's a success of engineering. Sure, New Orleans' drainage pumps could have been built bigger and NZ buildings could have been made stronger. But would that have been a good use of resources? Engineering is the art of compromise. Part of that compromise is accepting risk to allow resources to be directed where they have the best payback. Given $x billions to make things safe, the payback of putting that into improving road safety is better than putting it into pumps and buildings. So perhaps when you zoom out a bit more, the fact that these failures are news and are not the daily norm really shows how successful engineering is.

    • One man's threat is another man's opportunity. By painting this as a threat, NSA can secure themselves more funding and more power. If/when quantum computing works, I'm sure NSA would be very responsible and not abuse any powers they get from this.

    • I think "particularly for home use" is a bit of a put down. There are plenty of occasions when low spec tools are all that is required and it is worth putting one on every desk. It does not matter how good a $20k sig gen is. It is useless to most organisations because most development teams can't justify them. These days I'm seeing more and more companies equip a team with a single high spec scope ($10k+) and a few low spec scopes (eg. less than $400 100MHz). The $400 scopes are all everyone needs most of the time, but sometimes you need to reach for the $10k scope. This policy increases productivity, but does require that people have the judgement to know when they need to switch tools. Same too with my favourite tool that lives in my laptop bag: my Saleae Logic. It is 8 bits and only 24M samples/sec but it does things even multi-$k logic analysers don't - such as logging long periods - as well as being cheap and small and always available. Last year I needed to reverse engineer the one-wire comms between a battery and a charger. Hooked up my little logic analyser and captured a whole charging session (over an hour). Ran that through some C programs I wrote and soon had the protocol figured out.

    • Nobody has found any smoking gun that says the Toyota unintended acceleration was a software issue. Sure, investigators found many strange constructs in the code but those are not relevant. Car analogy: They suspected a car's brakes had failed. On inspection they found some machining marks on the brake housing that suggests shoddy engineering practices. From that they said, well if they don't clean up machining marks, the brakes they made probably failed. Everyone has been crashing embedded systems since forever. Show me one large engineering organisation that hasn't. Apollo 11 LEM's landing control software crashed three times during the landing manoeuvre. Should anyone trust NASA's software then?

    • Yup, I agree there is still a place for 8-bitters. I don't see any for 16-bitters though. It isn't worth chasing 7c for 10k units (that's only $700). You need to be targeting at least a few hundred k units to make the numbers work. Back in the 1980s I used a lot of 8051 so I'm rather familiar with what it can do. They're quite fast and efficient at doing really, really simple stuff because they don't have internal buses slowing things down and their interrupt processing is faster (eg. I made a buzzer in 4 bytes of machine code hanging off an interrupt). In the 90s (and 2000s) I also did quite a bit with AVRs. As products get more sophisticated, and the price differential drops, the burden of learning yet another architecture + toolchain is getting harder to justify all the time. C is also a terrible fit for 8051. I used PL/M quite a lot. That fits very well, but is obsolete (and for the non-initiated that means yet another language to learn) and it means your code is stuck. When you use C on 8051 it typically needs to be highly decorated with 8051 constructs that wreck portability. All in all, I have some fond memories of both 8-bitters and punch cards but am pretty happy I don't use either of them much any more. Nice to see the EFM8 is now executing instructions in 1 cycle. Back in the 1980s the cheapest 8051 instructions took 12 cycles. That 12MHz CPU was really only 1MHz.
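      For example, this is the sort of decoration 8051 C tends to need (Keil C51-style keywords shown as an illustration; SDCC spells them __data/__xdata/__code), and none of it is portable C:

        unsigned char data  fast_counter;        /* internal (direct) RAM       */
        unsigned char xdata big_buffer[512];     /* external RAM                */
        unsigned char code  lookup_table[16] =   /* table placed in code space  */
            { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 };
        sfr P1 = 0x90;                           /* special function register   */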

    • @dl2iac: Surely the longest is SOS at 9 elements. SOS is sent as a single burst of 9 elements (...---...). It is not sent as three distinct characters S,O,S (... pause --- pause ...). Anyway, I think it is a valid approach to have one mechanism for most of the characters and another for the two "characters" that don't fit: ERROR and SOS.

    • I personally prefer the more interactive version of the globe. We seldom hear about this technology perhaps because it has been suppressed by [insert conspiracy theory of your choice]. This display is quite lifelike. They've been able to recreate a multi-sensory experience with stuff like warm and cold, wind, rain and even smells and various textures. It is powered by solar and a built in nuclear power plant. They call this technology Outside. It does not have a website.

    • Hi Bob Some definitions are certainly important. I think though the distinction between firmware and software is not. If you spend time doing things like operating system development (clearly software) there is a huge overlap with firmware design. ie. dealing with timing, interrupts, buffering etc. Skills are an interesting thing... I've hired many programmers (OS and firmware) in my day and one of the best programmers I ever hired for firmware development had no C experience but had spent about 6 months doing Visual Basic programming for an accounting company. He was very good at breaking down problems and solving things.

    • I'm not that sure it helps to get hung up on the semantics and definitions of fw vs sw. The original difference came from fw not quite being part of the hw but also being reasonably permanent. These days the traditional definition makes little sense since much of the fw in, say, wifi modules is loaded into RAM at runtime. Anyway, I tend to call myself an embedded janitor. Most of the things we deal with are about "flow": fixing buffer overflows, ensuring that there are no bottlenecks etc. More akin to plumbing and janitorial work than electronics! You don't need any knowledge of electronics to be a fw engineer. You just need to understand things at a logically abstracted level and understand the timing and "flow".

    • ... continued... As for function points... what are those really? Just saying "about 100 lines of code" is cheating. It does not help. Some highly layered code might have 10 lines per function point. A long complex calculation might have 500 lines per function point. Most definitions I have seen for function points are oriented towards client-perceived functionality (eg. generate a report, enter new customer record) and there seems little emphasis on the hidden stuff that makes systems work (the most important part of embedded development). Surely the only way to manage function point or requirement complexity is the same way as we manage lines of code: by adding abstraction. Just as we break up code into libraries, modules, functions etc we can break requirements up into logical groupings and layers. I don't know if that's helpful... but maybe.

    • Many researchers (eg. Nancy Leveson) have found that requirements defects have caused more serious incidents than actual coding errors. Requirements are hard to nail down though. Sure, we can nail down some high level requirements of obvious functionality as defined by marketing, but what about the other "hidden" stuff that is not apparent to marketing? Take for example one of my favourite bugs: a serial port being handled with edge-sensitive instead of level-sensitive interrupts, causing interrupt processing to stall and the serial port to stop working. How and where should those requirements come from? In the really old days (1970s), it was common for code to be written by three levels of people: the architect, the designers, then the coders (and maybe even a teletype/terminal/card punch operator). In that model, each layer of management breaks down the external requirements into smaller sub-requirements and adds more implementation requirements. The designer would probably be charged with generating the low level requirements (eg. to use level sensitive interrupts). Now we typically have one person doing it all and those low level requirements are not raised, discussed and documented properly.

    • Yes, you are correct. char x = "hello" only makes storage space for a single character. I expect in this case it will assign x with the LSB of the pointer. But that is clearly a bug. char *x = "hello"; As you say, that puts hello in storage somewhere and x is assigned a pointer to that string. char x[] = "hello" creates a 6 character array. The char *x = "hello" thus uses the most space. char x[] = "hello" uses less because it does not create a pointer. Max still has his C training wheels on, so he needs some slack :-).
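      A quick sketch of the two legitimate forms for comparison (the broken char x = "hello" form is left as a comment because it should not compile cleanly):

        #include <stdio.h>

        int main(void)
        {
            /* char x = "hello";  assigns a pointer to a char - a bug/warning */
            const char *p    = "hello";   /* pointer + 6 bytes of string data */
            char        buf[] = "hello";  /* 6-byte array, no pointer         */

            printf("sizeof p = %zu, sizeof buf = %zu\n", sizeof p, sizeof buf);
            return 0;
        }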

    • The other way to set things up so it is easy to read is to use macros:

          #define mlength(a)   ((a) << 5)
          #define mchar1(a)    (mlength(1) | ((a) << 0))
          #define mchar2(a, b) (mlength(2) | ((a) << 0) | ((b) << 1))
          ...

      Then you have:

          const unsigned char morse_chars[128] = {
              ['A'] = mchar2(0, 1),
              ['B'] = mchar4(1, 0, 0, 0),
              ...
              ['E'] = mchar1(0),
              ...
              [','] = mchar6(1, 1, 0, 0, 1, 1),
              ...
          };

    • You can easily have both the maintainability of strings and the compact byte form: write a program using strings to generate the byte table. As for the (dNdByte & B00000001 == 0) issue... always, always, always compile with warnings enabled. gcc -Wall would have caught that for you. Always.
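      A small sketch of that trap (0x01 stands in for the Arduino-style B00000001 constant so this compiles as plain C):

        #include <stdio.h>

        int main(void)
        {
            unsigned char dNdByte = 0x02;   /* low bit clear */

            /* Buggy: == binds tighter than &, so this reads as
             * dNdByte & (0x01 == 0), i.e. dNdByte & 0 -- always false.
             * gcc -Wall warns about the suspicious missing parentheses. */
            if (dNdByte & 0x01 == 0)
                printf("buggy test says low bit is clear\n");    /* never prints */

            /* Intended test, with explicit parentheses. */
            if ((dNdByte & 0x01) == 0)
                printf("correct test says low bit is clear\n");  /* prints */

            return 0;
        }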

    • Yup, there is still space for 8-bitters but that space is shrinking all the time. As the M0 parts get cheaper and cheaper it gets harder and harder to justify trying to use an 8051 to shave a few cents off the cost of a device which will end up needing more engineering effort and time to market. The cheapest M0 I can find is 28 cents. I don't see many 8-bitters cheaper than that. Two things that I think are important and were not mentioned here: registers and the cost of 8-bit code. The 8051 has so few registers that a lot of time is spent shuffling data back and forth. ARM has more registers, allowing many calculations to be done without needing to access the stack. The 8051 only has fast access to 256 bytes of RAM (which must be shared with the stack). Accesses become punitive once you use any more space than that. Using 8-bit values for counters (eg. uint8_t i; for (i = 0; i < 100; i++)...) is expensive on ARM because the value must be trimmed back to 8 bits after each operation. Far better to use 32-bit values. I held out with 8-bitters for a long time, but now I really can't be bothered. Using ARM I can use one familiar toolchain and debugger for devices from tiny M0s to FPGAs with dual core ARMs in them.
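      A quick sketch of the counter-width point (whether the extra masking survives optimisation depends on the compiler and flags, so treat this as illustrative):

        #include <stdint.h>

        /* With an 8-bit counter the compiler must preserve wrap-at-256
         * semantics, which on ARM generally means masking the value back
         * to 8 bits (UXTB / AND 0xFF) after arithmetic on it. */
        uint32_t sum_u8(const uint32_t *data)
        {
            uint32_t total = 0;
            for (uint8_t i = 0; i < 100; i++)
                total += data[i];
            return total;
        }

        /* A natural-width counter just sits in a register with no masking.
         * uint_fast8_t expresses the same intent portably: 32 bits on ARM,
         * 8 bits on an 8051-class part. */
        uint32_t sum_u32(const uint32_t *data)
        {
            uint32_t total = 0;
            for (uint32_t i = 0; i < 100; i++)
                total += data[i];
            return total;
        }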

    • All valuable, but I think there are a few important points that have been missed. Rather than run continuous integration daily it is better to do it on a per-checkin basis. That tightens the feedback loop and can typically identify which changes caused the failure. Simulation can also often be done faster than the real world. With the real world you're constrained by physics: a one hour test drive of autonomous steering code takes an hour. In a simulator that can happen in minutes or even seconds. That means more testing can happen. I did a lot of simulation testing on agricultural products. During winter it was nice to test in a simulator in a cozy lab rather than try real world testing in a snow storm.

    • Yup, adding punctuation just needs a small adjustment to the idea since that needs 6 bits of data. 11 xxxxxx would cover that well, just costing one more if statement. In the program I wrote, I used a state machine which would walk through each bit of each character. Pretty easy really.

    • Sorry. What I said is good for receiving. What I have done for transmitting is to use two values per character: one is a length and the other is a binary pattern with 0 = dot, 1 = dash. The dots and dashes are stored so that the lsb is the first bit transmitted. Therefore A = .- : length = 0x02, pattern (binary) = 10 = 0x02. B = -... : length = 0x04, pattern (binary) = 0001 = 0x01. ... Since the longest bit sequence you're looking at is 5, that means you can fit both values into a single byte as 5 bits for the pattern, 3 bits for the length.
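      A minimal sketch of that packing and a transmit walk (putchar stands in for keying the actual output):

        #include <stdint.h>
        #include <stdio.h>

        /* One byte per character: low 5 bits are the pattern (bit 0 is
         * sent first, 0 = dot, 1 = dash), top 3 bits are the length. */
        #define MORSE(len, pattern)  ((uint8_t)(((len) << 5) | (pattern)))
        #define MORSE_LEN(m)         ((m) >> 5)
        #define MORSE_PATTERN(m)     ((m) & 0x1F)

        static const uint8_t morse_A = MORSE(2, 0x02);  /* .-   -> 10   (lsb first) */
        static const uint8_t morse_B = MORSE(4, 0x01);  /* -... -> 0001             */

        /* Walk the packed byte, emitting dots and dashes lsb first. */
        static void send_morse(uint8_t m)
        {
            uint8_t pattern = MORSE_PATTERN(m);

            for (uint8_t i = 0; i < MORSE_LEN(m); i++) {
                putchar((pattern & 1) ? '-' : '.');
                pattern >>= 1;
            }
            putchar(' ');
        }

        int main(void)
        {
            send_morse(morse_A);   /* prints ".- "   */
            send_morse(morse_B);   /* prints "-... " */
            return 0;
        }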

    • You want to use a binary decision tree where each node has a character and a link to the left for dot and one to the right for dash. So the root of this tree would be: NULL, 'E', 'T'. E would be: 'E', 'I', 'A'. T would be: 'T', 'N', 'M'. And so on. That makes parsing really easy. See https://upload.wikimedia.org/wikipedia/commons/c/ca/Morse_code_tree3.png
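      One compact way to express that tree is a flat array where a dot steps to 2*i and a dash to 2*i+1; only the first three levels are filled in below, just as a sketch:

        #include <stdio.h>

        /* Flat binary decision tree: start at index 1; a dot moves to 2*i,
         * a dash moves to 2*i + 1. */
        static const char morse_tree[16] = {
            0,                                        /* [0] unused        */
            0,                                        /* [1] root          */
            'E', 'T',                                 /* [2..3]  .   -     */
            'I', 'A', 'N', 'M',                       /* [4..7]  .. .- -. --*/
            'S', 'U', 'R', 'W', 'D', 'K', 'G', 'O'    /* [8..15]           */
        };

        /* Decode one character given as a string of '.' and '-'. */
        static char morse_decode(const char *elements)
        {
            unsigned i = 1;

            while (*elements) {
                i = 2 * i + (*elements == '-' ? 1 : 0);
                if (i >= sizeof(morse_tree))
                    return '?';        /* too many elements for this table */
                elements++;
            }
            return morse_tree[i] ? morse_tree[i] : '?';
        }

        int main(void)
        {
            /* prints "ANG" */
            printf("%c%c%c\n", morse_decode(".-"), morse_decode("-."), morse_decode("--."));
            return 0;
        }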

    • Ada lives on in VHDL too.... I really like using VHDL (I prefer it to the more C-like Verilog), but I way prefer C for software development. That might seem incongruous, but VHDL does not include all the exception handling etc. that Ada does. As for "hardware needs to be 100% predictable": not so. That is a myth. Ever heard of errata? I can show you data sheets where components have been through more than 10 revisions before the part was usable (with quirks).

    • Hats off to a man who made the world a better place. I'll keep my Ada rant for another day.

    • We're still miles away from proper adaptation and learning. I don't think we've even really achieved AI that beats a fly, and a fly's brain is basically just a handful of op-amps. If anyone disputes this, try killing a fly with a flyswatter. See how easily the fly figures out what's going on and how easily it evades. On top of that it can find food, find a mate and perform all the other things it has to do without any reprogramming. For the most part, all we have is AAI: artificial artificial intelligence: stuff that just mimics what artificial intelligence should be.

    • Making money out of free software is indeed challenging. I have been working pretty much full time as an embedded Linux consultant for about 8 years, and part time for about another 8 years before that. Places where I've made money include:
      * Custom driver development.
      * Helping people integrate Linux into their embedded systems.
      * Developing (and supporting) a flash file system. Since I wrote all the file system code, it is different to most open source where there are multiple authors. This allows us to dual license the code: free for GPL users and on a cost basis for non-GPL licensing.
      Yes, free software has reduced the size of the job market from what it could have been, but on the other hand it has also grown the whole industry. For example, I remember when the only compilers you could get were from companies like HP that charged over ten thousand dollars for a low quality 68000 compiler. That was real money back then. Now you can download a better compiler. For free. That has reduced the barrier to entry for the embedded marketplace, which has undeniably boosted the industry. There is always opportunity for high quality developers. I think FOSS has helped to flush out the worst programmers.

    • I think the one thing that will save the universe is that the IoT will flop. Appliance manufacturers will build wifi and bluetooth into washing machines, but nobody will use the feature. Something like the blinking 00:00 on VHS machines in the 80s.

    • Yup, reviewing really helps. Last week I did some inadvertent code reviews. I was trying to debug a problem with a CAN driver. It ended up being a hardware problem, but in the process I went through the driver with a fine toothed comb and found three software bugs.

    • Thanks, an interesting read. Reuse is surely not that new. Surely we do it every time we use a humble printf(). I still use code I wrote in the 1990s in new projects and a file system I wrote back in 1996 or so is still being reused in new products by a company I worked for back then. The huge uptake of open source - particularly Linux - is made much easier by the reduced cost of high capability electronics. Ten years ago the resources needed to make a Linux-capable computer were reasonably expensive; now the core of a Linux capable platform (ARM9 with MMU, RAM and a serial boot ROM) can be had for under $10. That makes Linux (and open source) viable for many applications. The big downside with all this reuse is that it is often difficult to know if the code should be reused and whether it is up to the job. After all, Ariane 5 and Therac were both cases where "proven" code was reused and failed under new system conditions.

    • The downfall of most AI is that more traditional methods quite often work better. In the mid 1980s I did some post grad work on AI. One of the people I spoke with built an expert system to control an arc furnace (much more complex than it sounds to a layman). This expert system achieved very good results most of the time, but dropped the ball in a few corner cases. Last I heard, they had replaced it with something much simpler (a PID controller or some such) which worked even better.

    • I would never use Basic of any sort these days. Even if the existing kit used Basic I would be sorely tempted to wrap that up in some sort of code generator. These days it is really hard to find someone that does Basic and any work you do cannot be used in the future - since Basic is not portable. Try python...

    • Jack, You say Intel and Altera do not use H1-Bs. Perhaps that is correct. Both do however have offshore development offices: Intel in Israel and elsewhere, Altera in China and elsewhere. I know that from dealing with engineers at these sites. I am pretty sure Freescale is the same. In many ways running an offshore office is just the result of H1-B regulations becoming too restrictive. Why jump through all the H1-B hoops to bring workers into the USA when you can have them cheaper, with no restrictions, elsewhere? Question: from a USA perspective, would you rather the H1-Bs come to work in the USA, keep the USA offices open and pay USA taxes, or would you rather the whole development office and tax base went elsewhere? The IEEE is a bit like a union and tries to keep its members' remuneration high. That is a short term view though. Opposing H1-Bs will just drive the industry to run more offshore development offices.

    • I would expect that the Zynq is very much like the Altera SoCFPGA. The SocFPGA is conceptually two separate devices: FPGA and HPS (hard cores) that are joined by a bridge. The SoCFPGA HPS can be booted just like any other ARM CPU and can run without any knowledge of the FPGA. The SoCFPGA FPGA can likewise just be run as an FPGA booted via JTAG or an external boot device. The real flexibility kicks in when you run them together: the CPU can boot up and then load the FPGA, or the other way around: the FPGA can load up then provide the boot stream for the CPU. I would be surprised if the Zynq is not equally flexible.

    • Secure booting only prevents malicious software from being booted. It does not stop malicious IP blocks from running. There are some SecureZone (or some such) features that are available on some bus architectures to prevent malicious access by hardware, but it's a drag setting all that stuff up and perhaps few people do. http://en.wikipedia.org/wiki/DMA_attack talks about doing this through the PCIe bus (eg. Apple Thunderbolt), but it can also be done from inside logic elements, or any other hardware. Flexibility always opens up more holes...

    • The issue is not in the encryption of the bitstream. The issue is that the FPGA logic can access the entire CPU address space (just like a peripheral can in a PC). These days most people do not design FPGA logic from scratch. They drop in pre-designed IP blocks. As a designer you have no way of knowing what's in these blocks. We all know the NSA etc paid Microsoft to back-door some Microsoft products. What's to stop three-letter agencies (or other evil people) from paying FPGA IP vendors to back-door FPGA IP blocks?

    • FPGAs are certainly eye-opening when it comes to security. On devices like the Altera SOCFPGA (ARM + FPGA in one device), no matter how secure the software is, the FPGA can access every byte of RAM and every peripheral on the system. The FPGA operates at a lower level than software and can bypass any software/OS security. Most obviously, you can hide soft-core CPUs in the FPGA fabric. Heck, even the Altera DDR controller uses a softcore CPU just to do the configuration. Less obviously, you can hide a small state machine in logic (eg. an ethernet controller) that would allow well formed traffic to access the system that does not even pass through the CPU. This freedom of design does open up a whole new set of possible security holes.

    • "Exciting idea" this is not. This is at least the fifth type of chip that has this basic architecture: using multi-core CPUs to implement peripherals. The last one I can recall was the Parallax propeller which still ships, but never came close to delivering on the hype. They've all failed. What makes this one different to the extent anyone thinks it might be successful?

    • People have been grumbling about the youngsters since forever. A few weeks ago I saw a letter written to the newspapers in the 1890s - it could have been written yesterday. As a generalisation, kids form a template from what they see around them. If kids are getting slacker (I'm not saying they are), then it is because the parents are hitting the couch or Facebook instead of having constructive hobbies. If we fill our lives up with crap to keep us "busy", then let's not be surprised if the kids do that too.

    • "Chinese consumers losing interest in wearables" says the headline. Surely to lose interest, you must have had it in the first place. Has there ever really been interest in wearables from consumers? Real interest of the "I'll spend $1000 per year on wearables" type. From what I see it is all hype - vendors looking for "The Next Big Thing".

    • Does executing the same code multiple times really enhance reliability? This is assuming the errors come from "glitches" in the CPU processing, which (AFAIK) is pretty unlikely. Surely more errors are due to problems with a specific body of code (eg. rounding errors, overflows or such). Surely it would be better to run multiple different algorithms and compare the results. Or am I missing something?

    • My very, very grumpy son came down the stairs yesterday filling the air with words I'm sure he never learned from me, interspersed with Microsoft... Windows.... He'd just lost 4 hours of report writing even though he had been Ctrl-Sing. Software that keeps saving onto the same file is pretty pointless - it does not save you from corruption. OSX has a handy "Time Machine" feature that allows you to go back in time looking at different versions of the file. Dropbox has a similar feature. Anyone remember the VAX VMS which kept file versions? Every time you saved a file, it saved a new version. Once you had what you want, you could purge old versions.

    • Well nobody tested the generators running for 284 days non-stop. What was more astounding was the Patriot missile bug. That manifests in less than 24 hours.

    • Max, we are surrounded by "how could they do that" bugs. Pretty much every leap year (one coming next year) serves up a crop of leap year bugs. Look at the patriot missile bug... how is it possible anyone coded that? How is it possible testing did not find that? We only learn as individuals, not as an industry.

    • Any such "green light" decision making should only choose what green lights get set. It is still up to the lockout mechanism to only choose one green light.

    • We used those EPROM decoders for all sorts of things, including state machines and such. Heck, we even had a "GPU" for drawing to a vector display (radar console) that used that approach.

    • Many years ago (1990 or so) I worked at a company that made many things - including traffic lights. The system ran on micros, but the final decoding was done by some EPROMs which were programmed up to suit the actual set of lights. The EPROMs prevented illegal light patterns from being shown by decoding illegal patterns to all blinking red (ie. a 4 way stop). Now I guess it's just a micro driving GPIOs.
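      A sketch of that idea in C (the bit assignments and legality rule here are made up for illustration; the real table was burned into the EPROM and addressed by the micro's requested pattern):

        #include <stdint.h>

        /* Hypothetical bit assignments for one intersection: one bit per
         * lamp, two approaches (north-south and east-west). */
        #define NS_GREEN  0x01
        #define NS_AMBER  0x02
        #define NS_RED    0x04
        #define EW_GREEN  0x08
        #define EW_AMBER  0x10
        #define EW_RED    0x20
        #define ALL_FLASH_RED (NS_RED | EW_RED)   /* flashing handled downstream */

        /* A pattern is legal only if at most one approach is not showing red. */
        static int pattern_is_legal(uint8_t p)
        {
            int ns_go = (p & (NS_GREEN | NS_AMBER)) != 0;
            int ew_go = (p & (EW_GREEN | EW_AMBER)) != 0;
            return !(ns_go && ew_go);
        }

        /* Build the 256-entry table that would go into the EPROM: legal
         * requests pass straight through, anything else decodes to the
         * 4-way flashing-red fallback. */
        void build_decode_table(uint8_t table[256])
        {
            for (unsigned addr = 0; addr < 256; addr++)
                table[addr] = pattern_is_legal((uint8_t)addr)
                                  ? (uint8_t)addr
                                  : ALL_FLASH_RED;
        }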

    • Getting it right first time is a mythical dream. I have never seen that ever being a consideration on any product I have worked on since flash memory came along (replacing EPROMs was hard manual work). Granted, I have never worked on high volume kitchen appliances or such. Instead, the focus always seems to be on time to market or hitting some critical industry show, etc.. What is the minimum feature set/performance level we can launch with? Then ship additional features/speed ups after that. For example, for many years I worked on agricultural products. The product had to be ready for the spring shows so the farmers could buy and install the products ready for spring planting. Spring is not going to wait while you "do it right". If you were a month late then you might as well be a year late. Features only needed for autumn harvesting could be deferred, so long as they were ready when needed. Features only needed in Canada (vs USA) could be deferred a few weeks because spring takes a few weeks longer to get going. The same goes for much consumer spending. People go into buying frenzy in Thanksgiving to Christmas. If your product is only ready in January you might as well wait a year. So "do it right first time" certainly has some theoretical appeal, but is not something afforded to most development. You cannot put in the time and effort because the time does not exist. This always raises the question: When is the software ready to ship? The best answer I have ever heard is: When it is of nett benefit to do so.

    • Jack, let's be fair. There are many cases of hardware bugs out there - and hardware is a lot easier to verify than software. How many people fully test and characterise switch mode power supplies? Why do we have parts that go to Revision T and still have a herd of errata? I think the biggest difference between the two is due to lead time. You can't spin hardware in two days. That forces people to take a bit more time before pushing the "go" button. Software can keep on changing - even after the system has shipped. That forces people to think differently. Any system problem that needs fixing is first and foremost a software problem. Can we work around this hardware problem in software? Heck, I've even fixed lubrication problems in software. Software is always viewed as the stuff that can change, while hardware is the stuff that can't change. That often forces software to be bent and stretched in ways that were not originally envisaged. This often has the impact of turning reasonable compromises into bugs. Ariane 5 and Therac 25 were both cases where software was proven safe on one platform, but was then reused on a platform where the design was no longer valid. We're told those are classic software bugs, but are they really? For electricity and electronics, things are very easy to characterise. There are far fewer items to worry about and we have relatively simple measurements. For software it is much, much harder to know what to measure and there are many orders of magnitude more measurements to make.

    • To achieve 50bn devices most of those micros are going to have to cost under $1 each. Of course there is no such thing as a one size fits all IoT SOC. The thing that just turns on a light bulb only needs connectivity + a few IOs. Other devices will need LCD controllers etc. As for Wifi? Why? Bluetooth LE makes more sense for low bandwidth devices. Turning appliances on/off only needs a few bytes per day. You don't need full blown Wifi connectivity. Wifi is just too power hungry and expensive to implement.

    • In many ways this is entirely expected. In all other types of electronics we're getting more and more features/speed/... for less and less money. Why not with test equipment too?

    • To a certain extent, this is true - you can "get by". You write FPGA code in VHDL or Verilog and don't have to worry about LUTs etc. However understanding what's happening at the lower levels makes it easier to write better code (ie. faster, using less resources). The same is true of micros: If you understand the capabilities of a particular architecture, it is easier to write efficient C code for that architecture, hence the existence of articles such as www.arm.com/files/pdf/AT_-_Better_C_Code_for_ARM_Devices.pdf

    • I don't think much of surveys like these. They are not statistically valid because the respondents are self-selecting. They are not us, but are we even us? What % of embedded engineers read embedded.com? What statistical biases are there?

    • I've always tried to abstract my code from any particular RTOS or host environment. That way substituting a few functions can get the code ported either to a new target or into a PC for testing. This has allowed me to write large bodies of code that can be executed under Linux (kernel or user space), WinCE, vxWorks, ... with only a small amount of porting in a "glue" layer. It is highly beneficial to be able to do this. PC testing, for instance, gives you access to development/test tools like Valgrind which are very seldom available on target platforms.
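      As a sketch of what that glue layer can look like (the os_* names are illustrative, not from any real project):

        /* Hypothetical "glue" layer: the portable code only calls these
         * os_* functions; each target (Linux, WinCE, vxWorks, a bare-metal
         * RTOS, a PC test harness) supplies its own small implementation.
         * This is the POSIX flavour, handy for running the same code under
         * Valgrind on a development PC. */
        #include <pthread.h>
        #include <stdint.h>
        #include <stdlib.h>
        #include <time.h>

        typedef struct os_mutex { pthread_mutex_t m; } os_mutex;

        os_mutex *os_mutex_create(void)
        {
            os_mutex *mtx = malloc(sizeof(*mtx));
            if (mtx)
                pthread_mutex_init(&mtx->m, NULL);
            return mtx;
        }

        void os_mutex_lock(os_mutex *mtx)   { pthread_mutex_lock(&mtx->m); }
        void os_mutex_unlock(os_mutex *mtx) { pthread_mutex_unlock(&mtx->m); }

        /* Monotonic milliseconds -- the portable code never touches
         * clock_gettime() or an RTOS tick API directly. */
        uint32_t os_time_ms(void)
        {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return (uint32_t)(ts.tv_sec * 1000 + ts.tv_nsec / 1000000);
        }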

    • CSRMesh is interesting, as are other meshes, but anything that is proprietary is broken. We need open standards with free participation to make this stuff work. Bluetooth is not really much better - having to sign up to a $15k membership to use the logo... bah!

    • Is high accuracy or bandwidth really necessary? I suspect sometimes it is, but most of the time not. When you're hunting current hogs or comparing how two bodies of code run then surely you don't need more than about 10% accuracy and 1 or 2 kHz of bandwidth. A Bluetooth LE board I used from Nordic Semiconductor had a little 35 cent TI current shunt monitor on it. http://www.ti.com/product/ina216a4 That sounds like far better bang for bucks.

    • I don't think there's a shortage of quantity - just quality. Back when I got into computing in 1980, many programmers were promising filing clerks who were sent on a part-time 3 month Cobol course and came back as programmers. In the embedded world, most were EEs who learned just enough software development to write a hundred or two lines of BASIC-51, or maybe even some assembler, to make their electronic creation work. When I graduated in 1983 it was possible to have a large % of known computing in your head. Now things are much different. A university graduate is considered unskilled and almost worthless. The bar gets higher every day and "average" programmers just are not good enough any more. Top-end programmers generally know their worth and don't end up working for a small fraction of that - making it harder for companies to exploit them.

    • I recently used some of my large slide-rule collection to demonstrate the same point about changes in technology to a group of schoolkids thinking of going into EE. The oldest slide rule I have is only 112 years old and the newest is about 32 years old (a span of 80 years). Held side by side you can't see any difference except that the second one has some plastic on it. And you would not see any significant difference between those and a slide rule from the 1600s. However a microcontroller board from the 1980s and one from now are remarkably different. The point I was trying to underline is that you have to get into this game for long-term learning. What you learn in university will be obsolete before they even teach it to you. If you are not prepared to continue learning you just don't belong.

    • I'd be interested for a board I'm working on, except the sleep current is microamps and the run current is amps. The 28mA just won't cut it. As for the benchmarks... they mean little unless the peripherals are the same. Even UARTs are implemented differently: some Cortex M0s have UARTs with a 3-deep input fifo and no output fifo. Others have 16 or more each way. Clearly the first one will have to wake and run more to get the same workload done.

    • How much of the smart stuff will actually add value? Remember the clocks on video recorders back in the 1980s/1990s? Nobody ever set them. They'd always be on blinking zeros. Far too often, features get stuffed into products because they can be rather than because they should be. When that happens, the secondary features often get in the way of the primary features. I have a digital kitchen scale that also has a built in clock/cooking timer. Before you can weigh anything you first need to set the clock (to zeros of course). When I'm done with the scale I turn it off and put it back in the cupboard. I figure many of our IoT appliances will end up the same way.

    • Eyes have co-evolved with the brain. That is why our eyes are so different. A mammalian eye is coupled with the mammalian brain and has, essentially, two high definition sensors coupled with the processing power needed to use such sensors. The fly, on the other hand, has many low-res sensors but wired differently. A perfect match for the fly brain which is basically a small number of op-amps and not really a brain at all. I won't say anything about your mother.....

    • Tool vendors, particularly FPGA tool vendors, try to shoe-horn everything to work via their IDEs. Forcing software engineers to use FPGA tools is a completely broken concept. Altera, for instance, requires that you build a dummy FPGA image just to get the settings needed to write the bootloader code.... how insane is that!

    • All of the above. The problem with complexity is that it adds more relationships between system components and more subtle behaviour issues. And yes, people are system components. Gone are the days when a system was just, basically, an 8051 with its built in UART and a blinking LED. Now we have many subtly connected components and subsystems (including the people). Last week I had a look at a board where the USB subsystem was failing to detect device insertions. One of the plausible theories is that an audio codec chip is causing instability in a power supply which is then causing the USB subsystem to operate unreliably. That is hardware complexity and is hard to measure because probing high speed signals changes behaviour. Then of course we have all the complexity in software where subtle timing changes can cause problems. With millions of lines of code, thousands of different states, and billions of potential state interactions we have astounding state complexity. Many system failures are not due to either hardware or software failure to match specification, but rather failure of the specifications to match what is really required in the real world. That is primarily the people problem we have. For example, Airbus crashes like AF447 were not caused by failed code or hardware, but rather a failure of system design and understanding that when you design a system, the people that interact with the system are **part of** the system. That is a people problem, not a technical problem that can be fixed by tools.

    • I've been doing embedded development for over 30 years now, and this is just the latest tool in a long line of tools that promises to make all headaches go away. We're always looking for silver bullets, but I think this is less achievable than it was 30 years ago. The reason for that is that complexity has exploded exponentially, whereas the tools have only improved linearly at best. Nett effect is that we're worse off now than we were many years ago. Ultimately these are people problems. While technology can help, it cannot fix the problems directly.

    • There's nothing more expensive than bad software. I too am in the file system space. Corrupted stored data doesn't get fixed by the firmware engineer's cure-all: the watchdog reboot. One of my customers develops remote monitoring equipment. Every time their older generation product (on a competing file system from a well known brand) fails, it costs a helicopter ride to go fix the problem.

    • We don't automatically get rights as customers. We don't have rights any more than the manufacturers do. Where is their right to demand we pay $20 per micro? These are just free market forces in action. We (as an industry) continuously buy parts and expect everything to cost less, and we are not prepared to pay the margins needed for more rigorous testing etc. The market pressures are for cheaper stuff. That's what gives us 32-bit micros for less than 30 cents. That makes life a bit more challenging if you're designing pacemakers.

    • "The 8-bit architecture is easier to use than others"... Err, not true. 8-bit gives all sorts of problems with limited address spaces etc, and 8051 architecture is the worst with archaic RAM handling etc. The fastest 8051s are still slower than middling Coretex M0s. What price are these things? You can now get Coretex M0 parts for less than 30 cents each. uA/MHz is meaningless if the architecture gets so little done per clock cycle. What counts is energy used per work done. Smells like FUD to me...

    • It's actually a really hard thing to determine mean and std deviation, and if they change, people will sue you. Rather publish as little as possible and expose yourself to the least possible legal blow-back. How much of this really matters? Very few people are designing pacemakers. Most of this stuff ends up in consumer electronics which will become landfill long before the electronics fails. For most products very few people will get annoyed or make warranty claims if the product fails. Typical is typically good enough...

    • ... or maybe IoT in the home is a dead-duck feature that nobody will use. Just like the video tape players in the 1980s with the blinking 00:00 clock that never got set, most people just won't connect the IoT stuff on their appliances because it does not provide enough value.

    • I am well aware that one day the bastards will start attacking Linux, but that "short term view" has been working well for me for about 15 years and right now it still looks like a reasonable policy. I don't know which is worse in Windows: getting infected, or all the extra drag of running anti-virus software which slows down everything: booting, work,...

    • On my home systems (inc. that used by my wife and kids) everything worth anything is kept on Linux. That doesn't have "letter disks" :-). Trash anything on a Windows machine in my house and I really won't cry. Surely your Mac is safe from this particular attack?

    • Open offices are not too bad when everyone is working on the same project and are in the same "mood". It is terrible though when you have different teams/projects under one roof. Having one team under high stress during last-minute development while another team is in relaxed "party mode" makes for a lot of ill will. I personally find my home office about the best place to do development...

    • I have the older Logic 8. I won't leave home without it. It just lives in my laptop bag taking up less space than a travel mouse. It was really useful for identifying an intermittent comms issue. Just set it logging and captured everything for a couple of hours. Dumped the data in CSV format and ran it through a small C program I wrote. I've tried the new Logic Pro 16 with Linux, but was not able to get it to go fast due to USB3 issues. I'll try again someday soon.

    • "150 microcontrollers, but they are used inefficiently in distributed subsystems." There is a good reason for the inefficient usage of micros. They replace wiring. You can buy a 32-bit Coretex M0 micro for 28c. That is about the same cost as a metre of wire. It is less than the cost of a connector. This makes it way cheaper to have distributed systems using a CAN bus than it is to use all the micros "optimally" and run wiring all over the place to support it. From a system persppective, it is better to make the wire usage efficient and the micros inefficient than the other way around.

    • It isn't so much what you've got as what you do with it... I'm currently working with a board that has 2x Cortex A9 cores (700MHz or so) + a huge FPGA, maxing out the 3 Gbytes/sec bandwidth of the 512Mbytes of SDRAM. All that goes into a handheld product. In the 80s we had multi-user systems that ran on a few kbytes using punch cards. We certainly didn't have the data bandwidth: 3Gbytes/second is around 25 tons of cards/second. The one thing we all know about predictions is that they'll be wrong :-).

    • The buzzwords are certainly gathering like flies around a ****. SoCs don't just have to mean the "big SoCs" running Linux. They can also be the "small SoCs" too. For example, I'm currently doing a lot of work with a Nordic Semiconductor nRF51822. This is a single-chip 2.4GHz radio + RAM + flash + micro etc that can be used for various purposes. Load the Bluetooth LE stack and you have a single-chip Bluetooth LE system. Load the ANT stack and you have an ANT device. It is certainly a SoC. The thing I like to point out to people is that their Intel-inside laptop running Linux or Windows probably has 10 ARM cores in it running no OS. As for telling people what I do, I tend to tell them I work on cellphone operating systems (though I have not done that for a few years). Not the apps, just the stuff underneath that makes it all work. And no, I don't know how to set up spreadsheets or fix Windows problems.

    • Given that 90% of the people on the planet live hand-to-mouth and will not be buying "things", that leaves less than 1bn people to buy the 50bn things. That means every person in this market buying over 50 things. In my household of 4 people I need 200 things to make this number work. I think we're already getting connectivity fatigue. Do we really see advantage in every slice of our lives being posted everywhere?

    • DDJ was indeed a great resource, and paging through some copies from the early 1980s, when I first started buying it, is a great way to have a nostalgic and not-so-nostalgic look at the past. All good things end. DDJ had its time and place but that is not now.

    • I don't keep a notebook per se, but my git commit logs are a far better record of what I've done. I've found that getting away from the situation is very helpful. No doubt there are many like me who have solved a problem while showering, fishing or walking on the beach.

    • Jack, Inspection certainly is useful and has an important role to play. However you don't do your case any favours by referring to Selby's 1987 paper. Tools, and the complexity of the software being tested, have both moved on since then, rendering his comments as obsolete as the computer systems he worked on. In 1987, we pretty much had K&R C with some aspects of ANSI-C starting to emerge. K&R C was terrible and quite unlike what we call C today. It was frankly dangerous as a programming tool. The compiler could not provide any decent checking - certainly nothing like what we have today. It didn't even have function prototypes. You needed a lot of visual inspection to check you were getting what you intended, such as checking all your function calls were passing the correct arguments and even the correct number of arguments - things the tools now do well, for free. There was no static checking worth mentioning either. No Coverity etc. Debuggers and similar were in their infancy. Automated testing was rudimentary because computers were slow and expensive. When I test complex code these days I will often run it against a randomising test bench which runs for a few days on a quad-core PC. That can run millions of test cases and find corner conditions that the most devious inspector would not dream up. That will find problems no amount of human inspection will find. Such tools did not exist when Selby wrote this paper. To make it worse, complexity has risen exponentially, while our brain capacity has not increased. That makes the human brain a less effective tool for problem discovery than it was in the 1980s. So, to summarise, the effectiveness of the methodology he prefers has reduced while the effectiveness of the methodologies he dislikes has increased. I would expect they have crossed over. The paper would likely have a different slant if it was written today.

    • Jack, where do you get the idea that Test Driven Development uses only tests as the single filter? Where I have used TDD, it has always been in conjunction with other filters too. TDD, when exercised properly, will achieve better coverage than you say. The idea with TDD is that you only write code to make a failing test pass. ie. You should only be writing that corner case code (which is often untested by conventional means) when you have a test case that fails. Of course very few people ever practice TDD as it is preached. I know I certainly don't. Whatever you do for code filtering should be automated and preferably hard to sidestep. The tighter the feedback loop, the better. I like having filters built right into the compilation process. The code does not even build unless it passes compilation with -Wall -Werror and cppcheck.

    • I think it would be safer to say he was burned by a lack of processes and rigor. Far too few business owners realise the risks they take, and the value of doing things properly, until something breaks and they're out of business. Even though this happened with a consultant, a similar thing (except for maybe the piles of junk etc) could have happened with an employee. In the 30 years I've worked in embedded, I've consulted for, and worked for, many different organisations. Many have shocked me with incredibly precarious practices such as:
      * No proper source control, or when they have had source control - no proper backups etc. A company just down the road from me took months to recover from a server crash and some of their source was never recovered.
      * Dependence on undocumented "magic" one-off equipment. I once made a quick bodge-together cable (15 strands all hand soldered) to allow me to use a board. I intended to use this for less than 2 days. I emailed the company with the schematic, telling them they would need to make such a cable for manufacturing programming etc. 5 years later I spoke with one of their engineers. The cable was not documented, the email was lost in a server crash and they were still using the cable I'd made in the factory.
      * That product which depends on a certain version of an obsolete compiler or download/test tool that only runs on MSDOS. It can't use a modern PC because it crashes if the CPU clock is faster than 100MHz. There is no backup of the software....
      * The code which has been written and maintained by one person over a 15 year period and is written in, to be very polite, a highly individualistic way using cryptic variable names etc. The engineer is in his late 60s and has been in hospital with heart issues twice this year...

    • I agree. Being a good embedded engineer does not start with programming or playing with canned electronics. It starts with an enquiring mind and building up problem solving. That can be done with Lego. Build a wall and see that it is weak. Figure out how to make it stronger. Or, as Bob says, get an idea of something to make out of wood and try to turn it into reality. Sure, the hormones do sidetrack the curious mind, but worse is the modern school system. That seems actively opposed to thinking and problem solving. Pretty much all engineers worth their salt either had a mentor, or at least long suffering parents who tolerated a bit of mess and didn't call in the psychologists when Johnny singed his eyebrows burning a batch of homemade gunpowder.

    • The mere existence of software QA reduces software quality. It just makes all the testing and QA somebody else's problem. Unfortunately far too many members of our profession just ship stuff and hope it stays shipped. These people are a drain on others. If QA is used as Jack suggests, constructing and managing regression tests, then it should not be replacing other testing. I would rather have continuous integration etc at the engineering level. That tightens the feedback loop. In some organisations I've worked with, you can't even check code in until it passes various tests.

    • Do we really need "buzz"? It is wrong to misrepresent embedded system development, or any software development for that matter, as playing with equipment. That just gives the impression that development is just the "fun stuff". The qualities that make a good developer are often considered the boring stuff: designing, refactoring, bug hunting. If you don't get a blood-sport rush out of debugging etc, you'll never make it as a programmer.

    • MPU and MMU exceptions are different beasts. An MMU exception might be part of the normal processing of the system. For example, when using virtual memory, the MMU gets page faults when the memory space needs to be expanded (eg. when a stack grows). The exception handler then fixes the issue. Otherwise, log the exception (eg. stack dump) and take the best recovery action you can...

    • There's a roaring difference between an MMU and an MPU. They are designed for completely different things. A Cortex M0 does have some degree of MPU. If the CPU tries to access some areas incorrectly then it will get various exceptions. The point of an MMU is to protect different processes (ie. address spaces) from each other and to virtualise memory. That needs the idea of processes and, in general, a reasonably healthy level of OS. A "bare metal" embedded system is something entirely different. It is just a single address space. All functions are linked together and all memory is shared. It would be hard to perform any inter-task memory protection.

    • Yup, I never look for specific skills. I might use them as a deciding factor between two equal candidates though. One of the best embedded programmers I ever hired had a business degree and could program in VB and SQL. We hired him because he just had some quality that was hard to describe or pin down. Within a month he was proficient in C and writing ISRs. Sure, he screwed up a few times, but he asked questions - once - and learned fast. It took me all of 30 minutes to explain to him how to read schematics and glean from them the info needed to write drivers etc. Within a short time he was the "go to" guy that most of the programmers went to for guidance. Only a company bent on its own demise rejects candidates because they lack specific skills.

    • I'm increasingly thinking that we should not strive for a formal engineering education that teaches people what they know. To be an effective engineer you need to be a lifelong independent learner. You should not need to rely on universities to teach you anything. A reliance on formal education tells me someone is not cut out to be an engineer. When I interview new grad candidates I never ask them about their university performance since it is generally meaningless. I rather ask them about their extra-curricular work to find out what they've taught themselves. That's where the real-world meaningful experience is to be found. Teaching most specific skills is pointless. By the time the professor has learned them, they're probably obsolete.

    • Whether you need 1kbytes or 10Gbytes of memory depends on the problem you're trying to solve. It has almost nothing to do with whether or not you come from a Linux or an 8051 background. Right now I'm working on two different embedded projects. One just fits into 512Mbytes (3GB/sec memory bandwidth) and the other has more than enough room in 2k of RAM. “It’s a lot of memory and storage for a micro-controller. But, with even the smallest of Java VMs [an environment that interprets Java byte code, allowing the processor to perform the program’s instructions] requiring at least 2MBs to run, we’re not likely to see Java or any other VM-based platform running on these small CPU cores anytime soon,” Nonsense. Look up leJOS - a JVM that runs fine with 32k of RAM.

    • Jack, buying and reading a spec like this is an interesting notion. Perhaps I am an unsaveable old fart, but 700 pages of PDF sounds nearly impossible to read. It might make sense as a reference though. I am generally against writing code that requires extremely detailed knowledge of the specs for at least the following reasons: 1) If understanding the code requires knowledge of page 543 of the spec, then chances are nobody will understand the code. I prefer simple and understandable. 2) The more you stray from the well worn paths, the more likely you'll write code the compiler misunderstands and either gets wrong or optimises poorly. For example, what has precedence between || and &? Correct answer is: who cares! Don't write code like that!

    • Jack, SecondCopy sounds just like the open source rsync commonly used in the *nix world. Backups are only any good if they are offsite, and there are multiple of them. A few days ago I read of a PhD student's laptop being stolen - along with their PhD thesis. They had a backup; unfortunately it was on a USB stick in the laptop bag. Here in Christchurch NZ we had many companies lose all their data in the earthquakes. They had backups, but in the same building, where they could not be accessed. Backing up is only part of the equation - you need a tested restore process too. Not only do you want to be able to get the latest, you often need to be able to get some history in case you overwrote a critical doc with a rubbish version. I use two strategies: Dropbox for some stuff, and git. Dropbox immediately copies any files onto the cloud server + any clients. In my case it immediately copies onto the 5 or so computers I use which are placed in multiple locations (as well as the cloud). Dropbox also allows some perusal of history and works with multiple OSs. My other strategy is to use git. git is a distributed source control system, which means that all the copies are clones of each other. The code (or doc files, or whatever) + all history resides on all computers. If one dies, just clone from another. The "server" is only a server by convention; it holds no more data than any of the "clients". Client/server oriented source control is not nearly as robust. What happens if the server dies? How good are the backups?

    • Jack, I would not touch any browser-based tools with a barge pole. They might be useful for some hobby fun (even though free tools are absolutely trivial to install), but they are not suitable for "real" work. How do you hook them into your normal workflow? For example, hook them up with make files and other tools (eg. code generators, source control, continuous integration servers). Sure the ARMs have been getting more powerful at the grunty end, but it is the M0 end that interests me most (as far as disruptive change goes). There are now M0 based parts at under 30c. A 32-bit CPU with flash, ram, adcs,... and change from 30c. Even the power supply issue has been cracked. For a long time, all the ARMs had to run on well regulated power supplies. That required a regulator which drove up system cost and gave an edge to the 8-bitters that could handle wide ranges of power rails. We're finally seeing ARM parts that can do this. Did you manage to catch up on the Cortex R parts at all? Those are the parts designed for reliability. I've never seen more than white papers. In theory the idea of lockstepped CPUs is a good one, but in practice many, if not most, failures are due to software rather than hardware. What would give better reliability is redundant algorithms running in parallel. However those are far harder to synchronise.

    • Jack, that's not much of a backup strategy you have there. Try something like git. Yes, I know it does not handle binaries well, but it will preserve history too. I know you need to vent, but tools fail too. You surely know that as well as anyone here. Therefore you need a proper backup system. One that has a single point of failure is not exactly sound.

    • Yes, in the old days for(...++i) would be faster and smaller than for(...i++). A modern compiler should figure it out and give you the same result. I just compiled a small test with gcc and the binary output was EXACTLY the same. By the way, what happens when you increment a (void *) ptr? Different compilers give you a different result.
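      A sketch of the sort of test that shows this (compile with, say, gcc -O2 -S and diff the assembler output of the two loops):

        /* A modern compiler emits identical code for both, because the
         * discarded value of i++ costs nothing once it is optimised away. */
        int sum_post(const int *a, int n)
        {
            int total = 0;
            for (int i = 0; i < n; i++)
                total += a[i];
            return total;
        }

        int sum_pre(const int *a, int n)
        {
            int total = 0;
            for (int i = 0; i < n; ++i)
                total += a[i];
            return total;
        }

        /* Incrementing a void * is a different story: it is not defined by
         * the C standard at all.  gcc accepts it as an extension and steps
         * by one byte; other compilers reject it.  Casting makes the intent
         * explicit and portable. */
        void *advance(void *p, unsigned bytes)
        {
            return (char *)p + bytes;
        }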

    • Something that has been mentioned often and is sorely missing is the ability to put code snippets into comments. This makes the commenting mechanism severely broken for a website that has so much software content.

    • Signal processing isn't just limited to things an EE would consider signals. All values can be treated as signals and at least some degree of DSP can be applied. For example, a device reporting pulse rate probably needs some sort of filtering so that the value does not hop around too much. Unfortunately most SW engineers have a very limited repertoire. You'll get moving averages and stuff like that when things like simple low-pass filters and median filters are often more appropriate.
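      Two of those alternatives as minimal sketches (the 1/8 smoothing factor and the 3-point window are just illustrative choices):

        #include <stdint.h>

        /* First-order IIR low-pass (exponential smoothing): no history
         * buffer to keep, unlike a moving average, and often a better fit
         * for something like a displayed pulse rate. */
        static int32_t lowpass_state;

        int32_t lowpass_update(int32_t sample)
        {
            lowpass_state += (sample - lowpass_state) / 8;   /* alpha = 1/8 */
            return lowpass_state;
        }

        /* 3-point median filter: knocks out single-sample spikes that a
         * moving average would smear across several readings. */
        int32_t median3(int32_t a, int32_t b, int32_t c)
        {
            if (a > b) { int32_t t = a; a = b; b = t; }
            if (b > c) { int32_t t = b; b = c; c = t; }
            if (a > b) { int32_t t = a; a = b; b = t; }
            return b;   /* middle value after the sort network */
        }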

    • Even if we ignore the IoT hype we can still enjoy the richness of interesting devices this wave brings us. For instance I've recently been working with Bluetooth LE. With some chipsets the incremental cost of adding BTLE to a product is under a buck. For example the Nordic nRF51822 part I'm working with is a Cortex M0 with 2k RAM, 128k flash and a built-in radio. All that for just over $2! Add less than a dollar of companion parts and you have a full BTLE circuit that can run on a coin cell for a year for less than $3. Even if you don't go within 100 miles of the IoT this opens up some nifty technology to play with.

    • If people "get confused" by things like this, then they're REALLY going to get confused when they get out in the real industry. We should not dumb down teaching to make things easy for everyone. We need good programmers, not just piles of useless ones!

    • Not enough time? Programming requires life-long learning. You have 50+ years to learn! Far too many degrees are structured as download sessions: all the skills you need to get a job. That's surely the wrong way to go about it. A few years back I was involved with a group of people getting Java running on the Lego Mindstorms robots. I was the youngest at age 45. The oldest was over 80.

    • Class hierarchies are most certainly available in C. That's exactly how most operating systems (eg. the Linux kernel) are constructed. The Linux kernel also has tasks etc. Just because these things are not language features does not mean they are not useful abstractions or that they cannot be created.
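      A minimal sketch of the usual C idiom (an ops table of function pointers plus an embedded base struct, in the same spirit as the kernel's file_operations; the names here are invented for illustration):

        #include <stdio.h>

        /* "Base class": a generic device with a table of operations. */
        struct device;

        struct device_ops {
            int  (*open)(struct device *dev);
            void (*write)(struct device *dev, const char *data);
        };

        struct device {
            const char              *name;
            const struct device_ops *ops;
        };

        /* "Derived class": embeds the base struct and adds its own state. */
        struct uart_device {
            struct device dev;        /* base must come first for easy casts */
            int           baud;
        };

        static int uart_open(struct device *d)
        {
            struct uart_device *u = (struct uart_device *)d;
            printf("open %s at %d baud\n", d->name, u->baud);
            return 0;
        }

        static void uart_write(struct device *d, const char *data)
        {
            printf("%s <- %s\n", d->name, data);
        }

        static const struct device_ops uart_ops = { uart_open, uart_write };

        int main(void)
        {
            struct uart_device u = { { "uart0", &uart_ops }, 115200 };
            struct device *d = &u.dev;      /* used through the "base class" */

            d->ops->open(d);
            d->ops->write(d, "hello");
            return 0;
        }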

    • " it should (and is) interesting in its own right!" I absolutely agree. When we take too much effort to make a subject seem fun we're setting up expectations that are not met in the industry. I'd rather have a person that is excited about "boring" programming than shiny, shiny programming. I certainly don't want the people that thinks it just sounds like a good paying job! But "teaching" is the wrong mentality anyway. People whho thrive in this industry are not taught. They are all lifetime learners that are self motivated to teach themselves. Rather think of learning being facilitated.

    • I certainly agree. College/university should not be used as training in a language/toolset to get a job. It should be helping the students to open up their thinking. Exposure to multiple different languages helps that. The most valuable course I did at university - 31 years ago now - was a romp through 5 or so different languages over a semester. Only one of those is, or was, anywhere near mainline: LISP. This industry is one of change. If you are not a constant learner then you don't belong. Those that define themselves by what they were taught are useless. The good news is that it is never too late. Prag Pubs has an interesting book: https://pragprog.com/book/btlang/seven-languages-in-seven-weeks

    • A trillion sensors... Over a hundred for every person in the world... All online? Considering that most people survive on less than $5/day and don't have money for this stuff, that leaves about 1/10th of the population who will actually use any of this. We're basically saying that these people will have thousands of sensors each ... and we'll be spending $15,000 each on this stuff per year? That is more than the average US household take-home pay. ... and that's all happening in the next 6 years? Consider what has happened in the last 6 years.... not a hell of a lot. Nonsense.

    • Yes there is a huge mis-match between treating software development as a science and how they end up working (as engineers). You say the people writing code are graduates, but people pouring concrete are not. That comparison is perhaps off: the people pouring concrete are more akin to the people duplicating CDs who are unskilled. Software engineers do need a taste of science, but only at the relevant level. I have two sons in tertiary education right now. One is doing CS (but is really going to be doing software engineering). The other is doing Mechanical Engineering. The CS student is studying to a level that, IMHO, goes too far into theory: the theory of computation and such - the same stuff that bored me stupid in CS 30 years ago. The mechanical engineer is studying some science, but things like material sciences - going into depth enough to understand that atomic structure and chemical bonds have an impact on how materials perform. I think a large part in this is that software engineering is moving too fast for academia to keep up. As a result, they teach academic stuff rather than practical stuff. "At $47k/year RIT is not an inexpensive school"... What! No wonder there is a huge student debt crisis in USA. Here in NZ even the most expensive tuition is approx $10k/year for residents and, maybe $35k/year for non residents.

    • "while 32-bit processors such as the ARM are not yet ready for applications with tight power requirements, they are poised for adoption everywhere else." Not? Based on what? Have a look at the various Coretex M0-based parts out there. They're knocking spots off many of the 8-bitters.

    • There is really nothing new in agile. The good ideas have existed for ages and all we're seeing is a rebranding exercise. In 30 years of programming I've seen tens of fads come and go. Ultimately there is no substitute for actually being good at what you do. Like far too many of these "solutions", they are served up as silver bullets and are adopted by the sort of people that will try anything that is adequately hyped and promises success. Basically it is the sales tactics of the dieting industry brought to software development.

    • To be fair, security does not matter in many applications. Sure, I don't want people fiddling with my insulin pump or turning on power points, but does it really matter if someone can read your scale or your fitness monitor? Bluetooth LE actually has better security than BT Classic. However it also supports broadcast with no security at all ("the temperature in this room is 20C", "you are here"...). That allows the designer to choose the degree of security needed for the purpose.

    • The short answer is "Yes". For the most part one does not expect hardware VHDL etc because you can't change it and the boundaries are very clear. With software, the boundaries are less clear. Having the source allows superior debugging. Software interfaces are, generally, less clear and are more open to interpretation than hardware interface. Hence there is a need for better visibility.

    • You surely cannot be taught to be curious. Your natural curiosity can be nurtured and encouraged, or it can be beaten out of you by poor educators. I think though that curiosity and creativity are reduced now by kids (and adults) being distracted by what is around them. Yesterday my wife and I were discussing with our kids the games we played back when we were kids. They were almost always games we made up (like kids have done as long as there have been kids). Now they're into all sorts of other things. Those levels of "don't care" are also called abstraction. We need those to be able to deal with the world at a useful level. When I make something out of wood, I don't care that it is made out of carbon, hydrogen etc. When I am doing chemical reactions, I don't care that the chemicals are composed of atoms, or that the atoms are composed of subatomic particles, or why electrons should be negatively charged and what is charge anyway? As someone once said: "You only know you have an operating system when it breaks." That is true for cars too. You only know your car has spark plugs when they foul and you need to replace them. That was a frequent occurrence 40 years ago, so everyone knew what spark plugs were. These days cars are so reliable that most teens don't have to know what spark plugs are, so they don't. Perhaps we should blame reliability for the lack of knowledge in the younger generation.

    • These machines are impressive, as you would expect for millions of dollars. Look what can be achieved in consumer devices. A DVD contains over a million bits/square mm. That has to be read at high revolutions and vibrations by a device that costs less than $1 to make.... now that is just astounding.

    • Most people don't need to know how stuff works and are not particularly interested in finding out. They're perfectly happy to let others worry about how things work. In the 1920s..1950s, owning a radio required constant maintenance. Valves (tubes to you Americans) blew frequently and it was common to go to the radio store (we don't have those any more), which had shelves of little boxes, buy a replacement, then go home and plug it in. It was as easy as replacing a light bulb. We don't have to do that any more (and soon, with LED lighting, we'll probably forget how to change light bulbs too). Even with cars, a large reason for people tinkering with cars when we were younger was that the cars back then were crap and needed constant tinkering or they would not go. Nowadays you can drive 5000 miles without popping the hood. Many tech-savvy people have no idea how a dip stick works, let alone changing oil or using a spanner. Cars, radios and everything else certainly have become more complex and the learning curves are steeper. Still, in general, I think people would rather have the tech of today.

    • We're miles away from autonomous driving in anything but well constrained environments. Given Michael Barr makes huge money out of vehicle litigation (eg. slaying Toyota where no "smoking gun" was found) I'm not surprised he's looking forward to autonomous vehicles. As a "system safety expert", I just cannot see how we ever reach the stage where mixing humans and automated control works to the level required for driving cars. We have enough problems getting people to fly semi-autonomous planes properly, and planes operate in a far more automation-friendly environment than cars do.

    • I must agree with you. Most of those contrived examples are solved better and cheaper by existing tech. I am perfectly capable of adjusting the lighting myself. What if I'm watching an instructional DVD with a read-along component or such? Then I want the lights on too. I don't want kit that decides to turn the lights down when I want them on. My car turns the lights off when I kill the ignition switch (after a short delay). Sending an SMS means I have to go back into the garage and turn the lights off. My car does it better, cheaper. I recently bought a new iron that turns itself off after 10 minutes of no use. Much better than getting an SMS and having to go home or send a message. We already have ripple control for power shedding on appliances. This seems a far better way to do things than using IoT. No infrastructure needed. As for fat Rick needing his exercise... What's to stop him paying the kid next door to carry the LoJack around the block a few times? All this sounds a bit too much like 1984 with far too few benefits. Even if people buy IoT-enabled devices I reckon far too few people would ever hook them up because the usefulness is too low. A bit like VCR clocks in the 1970s/80s that nobody ever set.

    • While this type of scope is never going to cut it for all professional users, they are good enough for many applications. One of the companies I consult to has some "SUV price" scopes but they also have a whole lot of these cheaper scopes. At less than $400 you can put one on every firmware engineer's desk and just bring in the heavy guns when they are needed.

    • I'm still not seeing any reason to believe that the IoT, if it ever really comes into being, needs any different tools and methodologies than any other embedded systems development. Abstraction has been an important concept in software (and system) development since the 1960s. People have been using bare metal, kernels, RTOSs, OSs as long as I have been in embedded systems (over 30 years). One thing that has certainly not changed in the industry is the army of snake oil merchants trying to convince us we have new "game changing" problems that make all our experience obsolete and that adopting one magic technology will make all our problems go away.

    • Jack, what micros don't allow code execution during flash programming? Is it just the little PICs etc? I don't encounter this issue with anything I work with. I do a lot of design with flash, and review a lot of flash designs. One of the most critical errors I see is people hooking up a power-good signal to the flash write protect line. At first blush that sounds like a good idea, but it is not. Flash writing occurs at a higher voltage than normal operations - 20V or so in some parts. The flash part has an internal charge pump circuit to generate the higher voltage. When the WP line is deasserted, the programming power is dumped - net result is that any programming operation underway is corrupted. It is generally better to check the rails before entering the programming operation than trusting residual power to keep things going through the programming operation. Of course that advice is generally outside the scope of coin-cell circuits and it is important to make sure the design strategies you are employing are consistent with the system you're designing.
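
      To make that last point concrete, here is a minimal sketch of checking the rail before starting a write, rather than relying on a power-good line tied to WP to abort a write already in progress. The helper names (adc_read_vdd_mv(), flash_write()) and the 2.7V threshold are hypothetical, purely for illustration:

          #include <stdbool.h>
          #include <stddef.h>
          #include <stdint.h>

          #define VDD_MIN_MV 2700u   /* assumed minimum rail for a safe program/erase cycle */

          extern uint16_t adc_read_vdd_mv(void);  /* hypothetical: read Vdd in millivolts */
          extern bool flash_write(uint32_t addr, const uint8_t *data, size_t len); /* hypothetical driver */

          bool flash_write_checked(uint32_t addr, const uint8_t *data, size_t len)
          {
              /* Refuse to start if the rail is already sagging. Once a program or
               * erase cycle is under way, cutting the charge pump is exactly what
               * corrupts the array. */
              if (adc_read_vdd_mv() < VDD_MIN_MV)
                  return false;

              return flash_write(addr, data, len);
          }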

    • There has been a slew of "xxxx in the age of IoT" articles of late. Does IoT really need different debugging/development methodologies than other embedded systems, or is this just an opportunity for vendors to pretend that the "old ways" are now dead and you should change the way you do things?

    • One idea that emerges from the SSL bug is running the code through an indenting tool before code reviews. That would have exposed the mis-indented goto. Some indenting tools can even MISRA-ise C code and add the braces.
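
      For what it's worth, here is a condensed illustration of the kind of defect an indenting tool exposes. This is not the actual Apple source; update_hash() and verify_signature() are made-up names:

          if ((err = update_hash(&ctx, &signed_params)) != 0)
              goto fail;
              goto fail;   /* unconditional: the signature check below is always skipped */
          if ((err = verify_signature(&ctx)) != 0)
              goto fail;

      Run that through an indenter (or a brace-adding tool) and the second goto pops back out to the left margin, where it is hard to miss in a review.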

    • The example was not "result = (get_loc() != NULL) ? true : false;" - it was "result = get_loc() ? true : false;". That would give the same result as "result = (get_loc() != NULL);" or "result = !!get_loc();".

    • I love the smell of troll-bait in the morning... The second true/false handling code is legitimate, even if it looks odd. get_loc() might return a pointer etc and not a truth value; the code turns it into 0 or 1. An even more odd-looking way of doing that is bool_val = !!some_val;
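
      A small self-contained example of the equivalent forms, with a stand-in get_loc() that returns a pointer:

          #include <stdbool.h>
          #include <stdio.h>

          /* Hypothetical stand-in: returns a pointer, not a truth value. */
          static int *get_loc(void)
          {
              static int loc = 42;
              return &loc;
          }

          int main(void)
          {
              /* All three normalise the pointer down to 0 or 1 (false/true). */
              bool a = get_loc() ? true : false;
              bool b = (get_loc() != NULL);
              bool c = !!get_loc();

              printf("%d %d %d\n", a, b, c);   /* prints: 1 1 1 */
              return 0;
          }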

    • Interesting, but terrible bandwidth. If a circuit needs just one input and one output then that's at least 6 bytes of data that need to be moved around, plus the CPU processing. The net result is about 5kHz of bandwidth.

    • As a Forth-head from long, long ago, I doubt Forth will ever make a real comeback. Forth is amazingly powerful. A whole interpreter, editor + assembler... could easily fit in a few kbytes of memory. Forth code is tight - tighter than even assembler or C. Forth is extensible. It is one of the few languages where you can change the language syntax itself - on the fly! Where Forth falls down though is the lack of any type system. There are some Forth-inspired languages that fix that though: RPL and PostScript are both stack-based and provide type safety. What might be better is something which splits the language in two: a solid type-safe language that runs on the development host and generates Forth for the back-end that runs on the target.

    • Increasing efficiency does not reduce energy consumption - if anything it increases it. With the ultimate home automation system I can use my phone to turn on the heat (or AC) while I'm travelling. With a normal system I'd have to wait until I was home. As a result, I use more energy. Even if the efficiency increases, we see an increase in usage. This has been well understood since the 1860s (ie. for 150 years) as the Jevons paradox: http://en.wikipedia.org/wiki/Jevons_paradox. This was first noted with coal usage, but the same applies to electricity. For example, electric heating with heat pumps (much more efficient than resistive heating) has increased the amount of electricity used.

    • Yup, factoring in lawsuits must be part of the cost of providing a product/service and is not to be taken as a sign of lack of confidence in products. If someone decides to sue you then count on paying millions in lawyers' costs and expert witnesses and, potentially, much more than that if you get a judgement against you. Then there are the thousands of hours of engineers' time taken up by the operation. That is hard to support if you're just making pennies per part. You don't even have to have direct product defects for this to cost you substantial amounts.

    • Given the recent bunfight between Oracle and Google, I would be very hesitant to base any future development efforts on Java, even though Java provides an interesting run-time environment for small systems. For example, the Lejos project provides a Java system that can run in less than 64k bytes.

    • Just because someone else pays does not mean the costs are reduced. Either the taxpayer or the insurance buyer is paying. At least when it is "user pays" the product had better provide value above its cost. When covered by a program, products don't get filtered out by having to provide real value. That's 99% of the problem with the huge medical bills people rack up.

    • I don't think I've ever used a BOR, but then I seldom do anything with low-end power devices. I wonder whether it would be worth having the BOR deactivated during sleep and only enabling it in run mode. Cell voltage tends to creep up during sleep - enough to cover a small burst of activity after wake up. The one problem I see with the ADC idea is that if the device has been asleep and the voltage creeps up again, you might be getting a false measurement (of course, doing the ADC measurement at the end of the run might help). BOR at least detects a problem at the highest current consumption during the run. What this series shows more than anything is the amazing amount we take for granted (and how much good luck we depend on). All this palaver over just the power and sleep modes. If you went through a full analysis of the rest of a simple circuit (eg. a remote control) it would take a year :-).
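
      Something like this is the shape of what I have in mind. The HAL-ish calls (bor_enable(), adc_read_vdd_mv(), etc.) and the 2.2V threshold are hypothetical, just to show the sequencing:

          #include <stdbool.h>
          #include <stdint.h>

          extern void     bor_enable(bool on);      /* hypothetical */
          extern uint16_t adc_read_vdd_mv(void);    /* hypothetical */
          extern void     do_active_work(void);     /* hypothetical */
          extern void     enter_deep_sleep(void);   /* hypothetical */

          void wake_cycle(void)
          {
              bor_enable(true);            /* catch a sagging cell while current draw is highest */
              do_active_work();

              /* Measure at the end of the active burst, before the cell has had
               * time to creep back up during sleep. */
              if (adc_read_vdd_mv() < 2200u) {
                  /* e.g. raise a low-battery flag before the next sleep */
              }

              bor_enable(false);           /* no point burning BOR current while asleep */
              enter_deep_sleep();
          }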

    • Why dream? You can do this already: OR1K and ORPSoc : http://opencores.org/or1k/ORPSoC Includes a complete gcc + Linux support. LEON + peripherals: http://en.wikipedia.org/wiki/LEON

    • This is an appalling shill piece and is harmful to anyone it might be "educating". "Everyone was familiar with the x86 architecture" - how can you say that? I bet very few embedded engineers really are familiar with the architecture and how to drive it. They might be familiar with the products (PCs etc). "Software tools are easy to come by." You can say that about pretty much any architecture. "Long upgrade/downgrade path". Not at all. There are really very few x86s. Nothing like, say, ARM which takes you from sub-dollar micros to 8 cores, FPGAs, SOCs from multiple vendors,... "Support easy to get". I doubt it. Try finding out how the Intel debug connector (XDP) works. Many of the signals are proprietary and Intel will only give you support with their engineers. They don't disclose all the info. I have been doing embedded work for 30 years. Some has been on x86 (including writing memory management code etc). x86 is really hard to work with. I have also done a lot of work with ARM. Moving to ARM from x86 was a breath of fresh air. None of that 16-bit mode. Lower wattage,... Tools: I just downloaded a full ARM toolsuite while I was writing this post. I have a single JTAG board (cost me about $50) that covers me for all the ARM parts I've used. Support for ARM: massive amounts. Since there is stiff competition in the ARM market, the vendors fall over each other to help you.

    • Reducing Vdd certainly helps, but if you want to run fast during wake up that might require raising the Vdd again. Perhaps just fitting a series diode might work if the run-time Vcc spec is not violated. "The TPS 62736 will eat 400 nA of that budget" - sure, but you're forgetting the surrounding circuitry. A switcher is going to need at least one capacitor, and those leak badly as you have already written. This looks like a long road to nowhere. How about going the other way to solve this problem: a CR2032 can be had for less than 20c in quantity. That dollar you almost spent on the TPS 62736 + associated circuitry would buy 5. Just plunk down two CR2032s in parallel (assuming they work in parallel).

    • As an author of quite a bit of FOSS software, I absolutely agree. Sometimes big players do step up and pay/sponsor developers, but only for features they want. I have been sponsored by a few companies, but not enough to work on the software full time. Free software is free, if you're prepared to pay for it :-). The many-eyes approach generally does work. This time it did not work straight up, only a bit later. Of course we seldom see the bugs in proprietary code because: (a) it is very seldom independently reviewed - and - (b) the discussions are very seldom in public.

    • The biggest challenge with IoT looks like trying to find something to do with it. Lots of buzz (particularly from vendors of components and services), lots of theoretical applications, lots of "solutions", but very few real-world applications where IoT would really add value. Does my fridge really need to send my dishwasher tweets?

    • While boost converters are efficient at high currents (high relative to what we're talking about here), they are hopeless at sleep currents. At sleep currents boost converters are going to have way worse loss than anything we've talked about so far. Consider: a boost converter at least needs a relatively large output capacitor to pump into. That makes it at least as lossy as the capacitor solutions mentioned above. The only way around this is to have a solution that can sleep at cell voltage, then turn on the boost at run time. More complexity, more circuitry, more cost, size,... I would be more inclined to either go with a micro that can run on AAA voltages or use a CR2 or similar that still gives 3V and has a bit more capacity than coin cells.

    • Running at a lower speed reduces the current, and it also reduces the micro's sensitivity to glitches. That makes low speed preferable when dealing with marginal power supplies. Low speed = win, win.

    • Jack, a very well presented finding. Clearly caps are not the way to go. It seems that the only way forward here is to use those small micros that have a very wide Vcc tolerance and are specifically designed to be used in circuits where you don't want any leaky power conditioning. From what you show here that includes not only caps for the "power trick" but also even decoupling caps to reduce ripple voltage. Ideally, both have to go. In the past I have used some of the low-voltage AVRs for this reason. Some of them will work over a wide range of Vcc. For example, the ATMega48P runs on anything from 1.8-5.5V, giving a huge margin for ripple. That is an oldish part; I suspect newer, similar parts do much better. There is clearly a potential market for devices that can fit in this hole; it really just depends on whether the silicon can be made to function. BTW: While your wife is calling you "honey" you're probably OK... If she starts calling you "sweetie", she probably has the men in white coats on speed dial.

    • What would be interesting would be if Intel stopped trying to FUD against ARM and instead made their own ARM offerings. Intel are by far the process leaders and have amazing manufacturing ability. Without that they could not get anywhere near as good as they are, considering the trailer load of legacy x86 they are dragging behind them. If Intel were to unhitch that x86 legacy and instead put their process genius into making ARMs, they could no doubt make parts that were faster and lower power than anything else out there. The questions still would be whether they could find the margins a company like Intel needs, and whether they would continue supply.

    • Unfortunately Intel has a long history of dropping the embedded community in the soup. On top of that, Intel seems to understand the high-margin markets well, but embedded is a low-margin business. And the real kicker is that Intel don't provide the variety of parts the industry needs. I've been in the embedded game for 30 years now, and during that time I've seen Intel promise products but then dump them. What happened to '151, '251, i960 and many other parts? Intel discontinued them, leaving many designers scrambling to find alternatives. If I've been kicked in the guts many times over, why would I stand in the queue for more punishment? With ARM I have one debugging tool (that cost me less than $50) which will give me debugging on just about every ARM in existence... from sub-dollar micros to 8-core application processors. With ARM, a little knowledge goes a long way. I can get ARMs with just about any peripheral mix I want. I can get ARMs with FPGA fabric. With the Intel parts, I only get what Intel chooses to make.... and then only until they discontinue the part. Sorry Intel, you had your chances... you blew them many times over.

    • The article makes this sound like these are all magic functions only available on Intel devices. Pretty much all CPU architectures provide memory management and such. Not sure what point is being made here.

    • All these layers are designed around the idea of one kernel that runs on different sets of hardware. That is important for distribution-oriented software, eg. Ubuntu, or for having one Android release that boots across a wide range of different phones with different hardware. For those platforms the flexibility is worth more than the bloat it brings. However it does nothing for a wide range of Linux ARM devices - the sort I most often deal with - where the kernel is custom configured and built for one particular board. In those cases the flexibility has no value. The FDT does however make the board description easier than the platform description code that went before (and was always changing). The real benefit seems to be in having a description that is reasonably static.

    • The level of control required surely depends on the role the device is playing. If the device is just providing information or is a passive output (eg. showing your current blood pressure), then it really does not matter that much. If, however, the device is directly controlling an insulin pump or such, that is an entirely different matter.

    • 2 duff cells out of 42? I guess it really depends on your warranty policies as to whether they make a useful power source or not. I'm not sure why you'd design a coin cell driven system using a micro that chomps through 10mA active. Surely the vast bulk of these applications only require the smallest amount of processing. You are not calculating pi - just looking at a few inputs and setting a few outputs. A quick look at the Atmel tinyAVR suggests that running at 1MHz (that's faster than an 8051 at 12MHz - a fast micro in the 1980s) only uses 200uA @2V. Some of the tinyAVR devices will run down to 0.7V (but are doing a built-in boost, so will need more current).

    • This is increasingly becoming the lot of the 8-bitter: some non-CPU circuit needs a small amount of processing power, so put in an 8-bitter. This PIC is basically an analogue circuit with a small amount of digital processing, the same can be said of some other circuits such as the lower-end Cypress PSoC and EZ-USB parts.

    • Some of the TI OMAP parts (and probably others from other vendors) are three layers: CPU, DRAM and NAND flash in one POP stack. Very dense. This simplifies tracking and reduces the number of layers a board needs. Makes it easier to pass emissions tests too. Surely core is RAM? You can read/write it randomly. I too used a Univac at university... punch cards and everything. I once tripped going down some stairs and dropped a 2000 card box containing source for a compiler... took a while to get that in order again.

    • So what is "the right thing"? Sure, the Therac 25s nuked 6 people over a 2-year (or so) period - of which 3 died. But what about all the lives the Therac units saved? If the product release had been held up for 6 months, the downside would have been thousands of lives lost. The same goes for the Patriot missiles. Yup, it was a really stupid bug that ended up with 28 people being killed. But the Patriot's SCUD-busting probably saved hundreds of lives. We all freak out when software fails, but seem to mind less when other stuff fails. GM's switch failure was way worse in all possible ways than Toyota's code failures.... and we still don't actually have a smoking gun. Stuff breaks and people die - whether that is metal fatigue or corrosion or software faults under stress. Ultimately we're always playing cost vs benefit vs risk. Without that we'd have no chainsaws, electricity, matches, ice skates,... Sure, Barr showed some sub-standard engineering practices at Toyota, but he failed to find an actual fault that caused a problem.

    • I don't know why you would compare ARMs to 8-bitters. It is like comparing trucks to bicycles. Both have their place. Sure, ARMs are taking business from what used to be "top end" 8-bitters and 16-bitters, but the same forces that give us cheaper ARMs also give us cheaper 8-bitters. The 8-bitters are clearly valuable in scenarios where you need to run from really poor power rails and you don't want the cost or complexity of a power supply. This is 8-bit turf: where every cent needs to be accounted for and cheap solutions that use the protection diodes as deliberate current paths to save 2 cents are a classic application.

    • It seems to me that adding more layers, particularly "intelligent" layers, makes these systems more vulnerable to security issues. What were just dumb peripherals under OS control (such as ethernet) are becoming communications "subsystems". Since these are frequently bus masters, they can often access the entire system.... a gaping security hole if those with creative minds get to dabble with them.

    • Hi Jack. These days many of the IDEs are perched on top of GDB. This makes it possible to run GDB scripts either in the IDE or alongside it. GDB's scripting language is pretty capable and allows you to do all sorts of things such as traversing OS data structures (tasks, resource lists...), monitoring watchpoints,... An example of this is the Apple OSX scripts at: http://www.opensource.apple.com/source/xnu/xnu-792.13.8/kgmacros

    • "methodologies as sophisticated as those used in many hardware disciplines" OK, I'll bite. I really struggle to see how hardware engineers have better methodologies than software engineers. How about some concrete examples? Hardware engineers use DRC. Software engineers use source code checkers (lint etc). Hardware engineers design a circuit, then test it and tweak it until it works. Software engineers design software, then test it and tweak it until it works. Software engineers do many things that hardware engineers don't do (or only do very rarely): * Good revision control. Most hardware designers use terrible revision control. * Automated testing. * Continuous integration.

    • Jack, this is a very interesting take on software costs. Unfortunately the cost/benefit is normally done at the start of the project rather than after something like this happens. While Toyota undoubtedly made some mistakes, I am not aware of any "smoking gun" identifying a particular failure path. Are you aware of any such findings? Until we find a "smoking gun" we won't know if $80/line, or even $1000/line, software would have fixed the problem. Violating MISRA, or some stack guidelines, does not inherently mean the code fails. In many cases these are just "taste" issues where some person claims their coding style is better than someone else's. The hardware also raises some interesting issues. Who's to say the micro does not have some "interesting" failure modes of its own, and that the issue was caused by software at all? All we know is that there were potentially issues in the software because the software could be reviewed. Just because the hardware is impossible to review does not make it immune from problems. In many ways what we have here is a scapegoat. What is much more disturbing, IMHO, is GM's current airbag saga. 303 deaths due to a mechanical switch issue which anyone can understand and verify. This is much easier to verify than software problems, yet it went unsolved for ages. Perhaps it is not so much a software issue as a basic problem with vehicle failure analysis and increasing expectations from car owners. http://www.usatoday.com/story/money/cars/2014/03/13/gm-recall-death-nhtsa-airbag/6401257/ As for software defects killing people in aircraft, a lot will depend on the definition of "defect". If you consider a "defect" as failure to meet specification then you are probably correct. If it is, instead, "failure to perform in a good way", then one could say that some of the Airbus crashes linked to the stall override were defects.

    • I think it is unlikely that we can assume IPv6 everywhere. I can't see the need. Let's say I saw enough value in IoT to have, say, 200 "things" in my house. Do I really need every lightbulb to be individually addressable from across the world? No. I would have some sort of "house controller" gateway that would be visible to the outside world, and the rest can be done on a private network with 192.168.x.y addressing, just like I use for all my current 10 or so computers, 4 iPods, printers, development boards,... all up maybe 30 devices. If there is ever an IoT network in every house, it will be via house gateways (that do NAT) and then cloud services like Apple's "Back to My Mac" service which basically provides a way to access stuff using a new namespace. That kind of networking does not need IPv6 to work. IPv4 is working fine. Adding IPv6 as a requirement for IoT will just be another hurdle to make it harder to achieve. Many, if not most, houses now have infrastructure of sorts in them (even just WiFi routers). How many are IPv6-ready? Can you really expect people to throw out existing kit to get their lightbulbs going? Like IoT, IPv6 is another solution without a real problem (yet).

    • One of the major problems with any of this is that people normally design network systems on a LAN (almost perfect throughput), then have a mad scramble when they find they are operating over a lossy connection. I did some testing/experimenting about 6 years back with a simulated network that had losses and lags built in. If some TCP packets get through out of order, the application does not get to see them until the older data gets through thanks to the retrying. For example: the sender sends packets 1..5 and the receiver gets packets 3 and 5. The receiving application will only see packet 3 after the retries have managed to deliver packets 1 and 2. If you run TCP on a lossy connection (eg. a multi-access radio link), it can get bogged down sending retry data. In most applications it is far better to use UDP and just use the packets you get.

    • Security is certainly more of a challenge with UDP and it is almost shocking that security was not addressed in the original article. One critical point that the article mentions is that TCP will keep retrying (for a while). If you're operating on a degraded connection, the old (stale) data must first get through before the new (current) data does. This means systems controlled via TCP can be acting on old data. UDP is far better for control applications where you would rather operate with the most up-to-date data and ignore the rest.
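
      A minimal sketch of the "act on the freshest datagram" approach for a control link, using POSIX sockets; the message layout and port number are made up for illustration:

          #include <arpa/inet.h>
          #include <netinet/in.h>
          #include <stdint.h>
          #include <stdio.h>
          #include <string.h>
          #include <sys/socket.h>

          struct control_msg {
              uint32_t seq;     /* network byte order, monotonically increasing */
              uint32_t value;   /* network byte order, the control value itself */
          };

          int main(void)
          {
              int fd = socket(AF_INET, SOCK_DGRAM, 0);
              struct sockaddr_in addr;
              memset(&addr, 0, sizeof(addr));
              addr.sin_family = AF_INET;
              addr.sin_addr.s_addr = htonl(INADDR_ANY);
              addr.sin_port = htons(9000);
              bind(fd, (struct sockaddr *)&addr, sizeof(addr));

              uint32_t last_seq = 0;
              for (;;) {
                  struct control_msg msg;
                  ssize_t n = recv(fd, &msg, sizeof(msg), 0);
                  if (n != (ssize_t)sizeof(msg))
                      continue;                    /* runt or error: ignore it */

                  uint32_t seq = ntohl(msg.seq);
                  if (seq <= last_seq)
                      continue;                    /* stale or duplicate: drop it, no retries */
                  last_seq = seq;

                  printf("acting on value %u (seq %u)\n",
                         (unsigned)ntohl(msg.value), (unsigned)seq);
              }
          }

      With TCP the stale updates would all be delivered, in order, before the fresh one arrives; here they are simply thrown away.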

    • Thanks Colin. Your take on heterogeneous multi-core would be interesting. I've been playing with the Altera SoCFPGA which has a dual-core Cortex-A9 processor system + the FPGA fabric. This has bridges in both directions, meaning the FPGA (including all its logic + soft cores) can sit on the processing bus alongside the CPUs. From what I see, this means the FPGA (including its soft cores) can then access anything it wants on the CPU bus (including all the RAM and peripherals). That opens up a whole new potential can of security issues: "naughty" FPGA IP.

    • That kills portability, makes for more hard work and introduces more places for the generation tool to mess up. If you generate C then it is easy to test with tools like valgrind etc. C compilers are very good at generating correct code. Toolchain teams have many people devoted to making sure the C compiler generates good output. That is far more resource than a generator developer could throw at the problem.

    • Colin, this is a fascinating area, even more so when you look at FPGA/CPU devices such as the Altera SoCFPGAs which allow you to add soft cores to the mix, all sharing the same buses. Does MCAPI etc give any inter-processor security for AMP? Is there adequate fencing to prevent one CPU stomping on another's turf?

    • In the C++-specific use of static, I prefer to think of the static elements as belonging to the class itself rather than being shared by the objects of that class. The distinction being that these elements are still valid even when there are no objects of that class. One usage is for debugging: it is sometimes handy to count the number of objects of a class, and a static counter will do that well. static member functions (and data) have no this pointer, so they can be accessed outside of an object context, just in a class context. All other member functions (and data) require a this pointer and must be accessed within the context of an object.

    • What about code sharing? For example there is all this open source stuff you have probably heard about. Code you write will not work on my custom generators and code I write will not work on yours. The only way sharing can happen is if we have standards for code generators (ie. what they take in and what they generate). Your compiler example makes my point exactly. Many compiler vendors decorate their versions of C with special pragmas etc (especially 8-bitters). BASIC was even worse. That code cannot be reused directly. C gives us portability and code sharing because there are standards (eg. C99). When code follows those standards, we can share it.

    • Are you suggesting the use of custom generators rather than standardised generation languages? The problem with custom generators is that they kill code reuse. Unless someone else uses the same tools you do, and the same specification language, they cannot share designs with you. That is a huge problem when you consider that most products these days only have a few % of special code - the rest is generic.

    • At some level there is surely no difference between sharing schematics and sharing code (as is being done very successfully). In many ways, sharing schematics makes more sense because many boards are increasingly becoming just a SOC in the middle with a few tracks out to SDRAM, a power supply and a few connectors. In software, the real value is in the integration and the added software; in electronics the added value is mainly in the layout, testing and generating BOMs.

    • Modelling is great for some applications, but it adds yet another layer of abstraction over what compilers, assemblers and CPUs are already doing. This is increasingly making it very difficult to verify anything. Today I was working with a multi-k-byte initialisation table produced by an FPGA suite. I must take those values on blind faith and plug them into my system. Unlike with normal source control on C code, there is normally no way to verify generated code that comes out of some sort of point-and-click modelling environment. Nor is there any way to meaningfully compare revisions of generated code. On top of that, these modelling environments typically produce pretty horrible code - the sort Michael lambasted Toyota about when appearing as an expert witness. UML is fine for sketching out class interactions or such, but it is hopeless for defining the "nitty-gritty" stuff. The same goes for any other drag-and-drop style programming: 100 lines of C can often describe things far better than a page of pictures. Michael Barr is wrong: C is up to the task if used properly. There are plenty of projects to demonstrate where C is doing a great job. As always, it comes down to using the right tools for the right job. Both a saw and a box cutter are cutting instruments. In theory they can achieve the same thing, but in practice they are not interchangeable.

    • If you're going to use embedded Linux a lot, it is probably worth just jumping in and using Linux as a desktop too. That way you will learn stuff faster and you won't get messed around by problems like different line endings etc. I very rarely use Windows.

    • I think there has pretty much always been the parallel usage of this term. Perhaps Making can be a gateway drug, but I doubt it. Apart from those just blinking a LED or two, most are already exposed to engineering and are already working in the field or are heading to do so. For instance I see many software engineers now dabbling in a little electronics. Expecting those that pin a couple of LEDs to some fabric to become engineers is like expecting DIY woodworkers to become cabinetmakers.

    • It is still a square peg, and most embedded holes are round. There are, however, some square embedded holes. Some of your statements are curious and need further explanation: "It was a Linux distribution that at the time was even more bloated than the core Linux architecture; ". For many years the Linux kernel has been small enough to run on many embedded systems. Fitting a Linux system into less than 8MB of flash has been an easy thing to accomplish for a long time. Android is a whole layer on top of Linux. That is relatively large and is always going to be bigger than a typical stripped Linux system. "Because it was designed using Java to make it easy to use by non-programmers, it was not easy for embedded programmers to use their experience with C and C++ to develop code for highly resource- and performance-constrained embedded designs." You have always been able to run C code in user space on Android devices. Most "semi-deterministic" code would run in C daemons with Android being used to provide a UI etc. Granted, that is now easier than it was, but it has always been possible.

    • I think it is over-simplifying things to say it is always a requirements problem. It is often also a technical problem. Sometimes we know what to build, just not how to build it. Where are the unknowns? Frequently unknowns are in the requirements, yes. Often the engineers lack the domain knowledge to know what will make a product successful or not. As the product matures, we identify problems caused by assumptions, the customer identifies further features. In both cases the requirements, or at least the perceived requirements, have changed. But frequently the unknowns are also in the engineering camp. Does the new CPU we're using have issues? We need memory bandwidth X, can we really achieve that? We certainly don't know where various bugs in library code are until we encounter them. But back to the article... Clearly we need some estimates or the bean counters (or customers) won't open their purses. They need to know whether a product will be available when it is needed and whether it might be profitable. Without some estimates, a project is useless. Nobody, except academia, would sign up to unbounded expense and time. But the flip side is that we do gain knowledge and better estimates as we progress. What we need to do is work in phases: at each stage we pop up our heads, redo the estimates and kill the project if it no longer makes sense. The real problems arise because we are not prepared to do the latter properly. We will stand by obsolete estimates because we don't want to change what we say (and look bad). We don't want to kill projects because that looks like failure (though it is generally a far worse failure, and far greater waste of resources, if we keep going).

    • It looks to me like something got screwed up in the reformatting for web-izing the article.

    • Yesterday I got a chance to look at a cubie board with an AllWinner A20 on it. In low quantities you can buy an A20 for under $10 and that's got a dual A7 core + video encoding/decoding etc. If I was a TI or Freescale I would be worried... very worried.

    • Fantastic observation. Abstraction is certainly something that comes with maturity. There have been some interesting studies which suggest that kids should not be pushed into abstract mathematics until they are about 14 or so. We have 2 sons who have been home-schooled (and are now at college level). One of them is very much "hands on". He was very reluctant to do much beyond basic arithmetic etc. When he hit about 14-15 he suddenly became completely absorbed. In 18 months (ie. before his 16th birthday) he completed the whole mathematics syllabus including trig, algebra and calculus and aced the tests they gave him for college entrance.

    • I think you're missing the point pretty badly... For most companies, most of the investment is in their software layers. If that software can be ported easily, then the software can be easily moved around from one set of silicon to another and products can grow easily. If that software is reliant on either specific silicon or specific development tools, you're screwed. As a consultant working with a wide range of cpu architectures and vendors (ARM parts from Freescale one day, ARM+FPGA parts from Altera or maybe PowerPCs the next), I use almost exclusively various gcc family tools. My ARM debugger (less than $50 JTAG device) works on everything from sub-dollar ARM parts to dual-cores. The PowerPC development uses a more expensive tool, but once that is in place I can pretty much use the same methods as before. The only time there's a difference is if you need to get into low-level assembler. Once in a while I encounter some work which requires using some arcane development system (eg. that Code Composer monstrosity for MSP430). Apart from just having to learn the new stuff, you're generally stuck in that the project files won't work in the version of the tools you find on the web, or the JTAG hardware drivers have changed or some other messiness... The industry moves too fast and life is too short. You can't expect people to learn new tools every time they change a processor. Would you want to buy a Toyota if you knew how to drive a Ford and a Toyota was completely different to drive? No.

    • Coding at junior grades is a waste of time and won't result in more good programmers. What kids that age should be learning are the lower level skills that contribute to becoming a good programmer later in life: understanding the world around them and problem solving. Let them play with Lego. Let them solve puzzles. If you really want to put software into this teaching, then have a look at Castle Mouse (Google will find). It's a game, but it's really programming without you realising it.

    • Loyalty is misplaced in test equipment, cars or any other product purchases. Loyalty causes you to make sub-optimal choices and pay a premium. Whether you are in the market for test kit or cars, you would be well served to look at some Asian products. I am happy with my Kia car and Rigol scope. The Toyota or Tek would have cost me 50% more for no improvement in features or quality. As for the HP split up, it was all part of the Carlyfication of HP. She refocused HP on its consumer divisions: PCs, printers and such. Good bye to any professional stuff: test equipment, medical, components (optical sensors etc). Some was spun off as Agilent and some was swiftly killed (eg. the professional calculators though those have been partly revived of late).

    • Jack, Thank you for what appears to be a very honest review. It is unfortunate that far too many product/book reviews end up looking like shill pieces. This certainly does not! You ask what the point is of implementing polymorphism in C. It's exactly the same as the point of doing it in C++: to attach different implementations via the same interface. Every OS written in C does this. This is how device drivers, file systems etc are hooked up. Have a look through the Linux code base and you will see this all over the place. I have taken to using this approach with application code as much as possible too. This makes it really easy to separate and abstract code.
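
      For anyone who has not seen the style, here is a minimal sketch of the function-pointer-table approach I'm describing. The names are illustrative, not lifted from any real OS:

          #include <stddef.h>
          #include <stdio.h>
          #include <string.h>

          struct storage_ops {                     /* the interface ("vtable") */
              int (*read)(void *priv, size_t off, void *buf, size_t len);
              int (*write)(void *priv, size_t off, const void *buf, size_t len);
          };

          struct storage_dev {
              const struct storage_ops *ops;
              void *priv;                          /* implementation-specific state */
          };

          /* One concrete implementation: a RAM-backed device. A flash-backed one
           * would just supply a different ops table behind the same interface. */
          static int ram_read(void *priv, size_t off, void *buf, size_t len)
          {
              memcpy(buf, (char *)priv + off, len);
              return 0;
          }
          static int ram_write(void *priv, size_t off, const void *buf, size_t len)
          {
              memcpy((char *)priv + off, buf, len);
              return 0;
          }
          static const struct storage_ops ram_ops = { ram_read, ram_write };

          /* Generic code only ever talks to the ops table. */
          static int storage_read(struct storage_dev *d, size_t off, void *buf, size_t len)
          {
              return d->ops->read(d->priv, off, buf, len);
          }

          int main(void)
          {
              static char ram[64];
              struct storage_dev dev = { &ram_ops, ram };

              dev.ops->write(dev.priv, 0, "hello", 6);

              char out[6];
              storage_read(&dev, 0, out, sizeof(out));
              printf("%s\n", out);                 /* prints: hello */
              return 0;
          }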

    • Safety is not a technical problem, it is a human behavioral problem. Adding more safety technology would work if it were augmentative and the drivers continued to drive as they did before. But that is not human nature: as soon as someone else (or something else) is doing the work, we reduce the attention we pay to the task. Whenever there is a computer (or other person) contributing to the vehicle operation, there will be a confusion of responsibility between the driver and the software. Net result, we make the control surface more confusing and crashes will happen. The safety gear might even reduce crashes, but there will still be some and there will be shiny-pants lawyers out to get money for their daft customers. That means more engineers in court having to justify their design decisions in front of ignorant juries. Don't believe me? Just look at Therac 25. These devices saved thousands of lives but due to a coding screw-up, 6 got overdosed and 3 likely died. Did the engineers get thanked for saving thousands of lives? No, they got vilified for the 6 that got overdosed.

    • As you say there really is not much difference between 8- and 16-bitters except for register width, which allows 16-bitters to perform wider, and therefore faster, calculations. However, you might as well just go to 32-bit; there just is not enough difference to make 16-bit worthwhile. With 8-bitters you tend to sacrifice portability because C really struggles to work properly on most 8-bitters. Most 8-bit "C" code ends up being highly decorated with non-portable extensions. For instance, the 8051 has bit operands and operators that need a bit type. I tend to use AVR for 8-bitters these days. The killer feature of many 8-bitters is the ability to run on wide voltage ranges and use internal RC oscillators. I have not seen ARMs that do that. When ARMs that can do that show up, I will probably ditch the 8-bitters.
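
      As an illustration of the kind of decoration I mean, here is Keil-C51-flavoured code. SDCC and other 8051 compilers spell these extensions differently, which is exactly the portability problem, and none of it is standard C:

          sfr  P1  = 0x90;      /* special function register at address 0x90 */
          sbit LED = P1^0;      /* one bit of that register */
          bit  armed;           /* a 1-bit variable in bit-addressable RAM */

          void set_led(void)
          {
              if (armed)
                  LED = 1;      /* a single-bit write, typically a SETB instruction */
          }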

    • "if England were imperiled the USA would come to its aid." A debatable point. USA hardly jumped into the fray and it was only after Pearl Harbour that USA got stuck in. Before that, USA were happy to sit the fence.

    • Do you really think that? Before turning on the air conditioner or oven, you're first going to see what rate you're going to get? Oooh, looks high now. I'll look again in 2 hours and see if it is cheaper. We're drowning in a river of "information" and giving more people more info is not really informing them at all. Power companies will have a hell of a time trying to raise prices at peak demand. Are they going to send out alerts? What if those alerts don't get through? Imagine you've just put your Thanksgiving turkey in the oven and the price gets jacked up. People will complain and demand their money back. Spot pricing might work for huge industrial loads but it is just not feasible for domestic loads. For domestic loads it will be seen as price gouging. Very few loads are really discretionary. People want the electricity when they turn the switch. They want the TV or heat on now, not at 4am. About the only exception is water heating and that has been handled by ripple control for 40 years.

    • So what do you suggest makes a good career choice for someone good at problem solving? If you are the type of engineer that gets your pay undercut and your job outsourced then you are probably not much good. I certainly don't see any really good engineers losing their jobs or getting bad pay.

    • No, 8051s are not dead, but they definitely don't have a long time to go. Give them a few years and the ARM Cortex-M0 will kill them off. I did a lot of work with the 8051 back in the 1980s and 1990s, including some really deep stuff such as developing Forth and debuggers for it. The 8051 is not a useful architecture with large flash and RAM, so there is really no point in making them with larger memory. On top of that, the 8051 "very CISC" architecture makes it incredibly inefficient. The traditional 8051 used 12 clock cycles (at least) per instruction. It has so few registers that everything has to be fetched from memory all the time. Addressing any more than 256 bytes of RAM requires setting up a special "long pointer" register. That makes the use of anything more than 256 bytes painfully slow. It is not surprising that many implementors "tweaked" the architecture for their needs by extending the instruction set and adding registers. That breaks tools. It is much, much easier for everyone (chip developers, system developers and tool developers) to support a few clean architectures.

    • It is really easy to see why the engineering river dried up.... The 1950s and 1960s were the engineering boom years. Coming out of WW2, westerners wanted to put the past behind them and look to the future. There were breakthroughs in almost all fields of engineering, the future looked bright and people wanted to be part of it. Kids who grew up then are now in their 50s and 60s. Then somewhere in the 1970s and 80s, things turned sour. Instead of technology being painted as our friend and saviour, it was identified with the enemy. People began to look to the future with dread. No more dreams of flying cars, now there were nightmares of 1984, acid rain and environmental or nuclear catastrophes. Nobody was really inspired to be developing that future any more. On top of that, the generations are increasingly coddled and told to "follow their dreams". The pursuit of immediate individualism became far more important than building a future. No wonder that debt rose, personal entertainment products (eg. the Walkman) grew and people chose careers that were less challenging. So where are the future engineers going to come from? From people who see the future as bright. For now that is mainly Asia.

    • Why would anyone want a 16-bit micro? There is a reason for the really low-end 8-bitters and there is a reason for the higher-capability 32-bitters. There is no inherent reason for 16-bitters. The 16-bitter is a historical anomaly because the gap between 8-bitters and 32-bitters was so large that there was space for something in between. Now that space has shrunk and we're seeing direct competition between 8- and 32-bitters. There is no justification for 16-bitters. It is pretty much the same as the computer scene around 1990. Remember the "mini-computer"? They were often used for CAD and as "high end" engineering workstations. Minicomputers filled the void between the desktop PC and the server-room mainframes. As the PC got more competent they took over the CAD etc functions. The mini got squeezed out of existence. I think you are misled about the costs of ARM licensing. I doubt they are a stumbling block for M0. ARM has put together M0 to take on the sub-dollar micro market and part of that is getting the licensing costs right. The M0 licensing cost will be far, far lower than that for a full-fat core like an A10. I have no idea what ARM licensing costs are, but even if they are as high as 20c for full-blown devices they will not be more than 1 or 2 cents for an M0. How can you say that the Thumb instruction sets are "very CISCy"? The regular ARM instruction set is certainly not the most RISC-ish architecture possible, but it is nowhere near the CISC end of the spectrum. The Thumb 1 instruction set is just a simple short encoding of full ARM instructions (ie. each Thumb 1 instruction decodes down to an equivalent full ARM instruction). That reduces the register orthogonality, but does not move the CISC needle by very much. Thumb 2 adds a few CISCy features such as bit setting, but we're still a long way off calling this a "very CISCy" instruction set (ie. nowhere near something like 8051 or x86).

    • But back to the point raised near the top. I too am a technology optimist, but we have generally got beyond solving technological problems. Most of our problems are people problems, not technological limitations, and thus technological advances don't fix those. Winding back the clock to the beginning of the 1900s, we were highly technically constrained. Pretty much all our problems were primarily constrained by what our technology could do. Those limitations have largely been removed and now we tackle a harder problem: tweaking people. For example, consider surveillance. 50 years ago wire tapping was manual and expensive and personal surveillance by advertisers was pretty much impossible. We didn't need much ethics limiting the usage of surveillance technology because the primitive technology was self-limiting. Now the actual surveillance technology is abundant and close to free to the extent that we can all be tracked all of the time in order to try to convince us to buy crap we don't need. What is lacking is the rules of usage. That is not a technical problem to fix. The same goes for medical technology. In the past, with limited technology, keeping a person alive as long as possible was a reasonable approach. Now we can keep people alive for much longer than we should and we're now struggling with how to manage their death (euthanasia etc). 50 years ago we would not have $1k toilets. We would have people digging holes with $5 spades. We engineers are prone to reframing problems as technical problems. Let's give Africa broadband to every hut and their problems will go away. We need to realize that there are often better ways of solving problems and that some problems are just beyond a technical solution.

    • I generally like to think of myself as a reasonably ethical person, but when I look at the IEEE code of ethics I don't make the bar. I doubt many would. I have developed software that was used by organisations that spy on people (Google, NSA and others). Fail on point 1. I have worked on the OS for military radar systems that ended up causing people to be killed. Fail on point 1. I have deliberately obfuscated code and used encryption for trade secrecy reasons. Fail on point 5. Sometimes though you have to take a step back and apply the "greater good" principle. Those technologies that killed people and spied on people also saved lives and assisted people too.

    • Power usage is a human behavioral issue. It is not a technical issue. Making the grid "smarter" does not change that. No amount of gee-whizz electronics is going to convince people to turn off their air conditioners etc. A lot of people are going to make a lot of money out of installing smart grid stuff, but the flip side is that a lot of people are losing a heap too. Ultimately this is doomed to end in tears.

    • Something I was told in 1988, and still remember, is that digital is just clipped analogue.

    • Pointless, probably, but it is a stretch to say Twitter is actually evil. The way it is used, and the culture that has built up around it, are however destructive to both information and critical thinking. Here in Christchurch NZ we had some big earthquakes a couple of years back and there was talk of Twitter being an excellent medium for the citizen reporter. An event like an earthquake is when a medium like Twitter should shine as a way to pass information, but it failed miserably. Unfortunately everyone wanted to be the first to "break the news", so any little bit of rumour got reported as fact, retweeted and gained momentum. Even regular media would be harvesting Twitter to show how tech-savvy and up-to-the-minute they were and all they did was spread disinformation.

    • I don't think there is any need for a standardised currency any more. The need for a benchmark currency (eg. USD) is based on the obsolete idea that communication is slow and thus people want to trade using a reasonably stable currency. The USD no longer fits that bill and it is now easy enough to trade with any currency without involving USD. The second reason why people like stability is to be able to use the currency as a way to hedge against future international purchases. Bitcoin is far too volatile to serve that purpose. The third issue is the speed of transactions. I can execute GBP to NZD transactions shifting funds from the UK to NZ faster than Bitcoin transactions. Ultimately the USD is failing because it is not underwritten by as much wealth as it was. Bitcoin is underwritten by NO wealth at all. Looks to me like a fad. Lots of people will make a killing and lots of people will make huge losses.

    • That would not be investment. It would be speculating. Investing is when you look at fundamental performance and make an informed decision. Speculating is not. People buying it madly right now are just speculating that it will continue to go up. There is really no reason why it should and nothing to underwrite the value. What goes up based on speculation is equally likely to plummet on the same speculation.

    • "Is the ARM juggernaut likely to displace most of the 32 bit MCU architectures? " That has already happened hasn't it? What 32-bit architecture is even close? Your trusty laptop might have "Intel Inside" but it also has probably 5 or more ARMs managing everything from Wifi to hard disks. As processors become more capable, there's more abstraction between the hardware and the software. That means the architecture is less relevant to the firmware developer than ever before. Once you're on a 32-bit platform it is pretty much C (or whatever), the actual architecture is pretty much irrelevant unless you're fiddling with interrupt service routine wrappers. As for toolchain vendors... the reduction in fragmentation has probably driven more companies to the wall than saved them. A niche product (compiler + debugger hardware) could keep a small company in business. How many of those vendors exist any more? Now you're up against everyone else doing ARM (and there are very few of those left). At the same time, proprietary tool vendors are also taking a knock from open source tools. In theory, proprietary compilers might produce better code than gcc, but not enough better to justify the costs and the hassles of dealing with licensing dongles etc. Spend less than $50 on an USB JTAG interface, download some stuff from the internet and you are developing code for bare-metal ARM applications. Want an RTOS? Download it. Kiel would likely have folded if ARM had not bought them. Meanwhile ARM also pumps development effort and dollars into gcc too. Slim pickings for the toolchain industry.

    • This sort of surveillance advertising is already built into many systems and the potential exists to make many more things happen. It doesn't need video either. Chances are the trolley/cart in your supermarket has RFID in it. While this is not yet tracking your purchases, it tracks traffic flow through the supermarket and helps supermarkets change their layouts to improve revenues (through either efficiency or getting you to buy more stuff). That Whispersync feature on your Kindle keeps track of where you are reading. If you're reading a chapter with a romantic fireside scene with red wine, they know that and could potentially pass that through to the advertising system to suggest you buy some red wine through wine.com. When Google started harvesting your email for advertisement cues they were just scratching the surface of what is possible. Since the US advertising industry alone is around $50bn, you can guarantee that anything that is theoretically possible will be experimented with and will be rolled out if it is shown to work.

    • I guess the best answer is to understand everything that might impinge on system performance. If a serial port is running at 9600 baud then that's never going to be a problem. But if it is running at over 100kbps then it is likely to be a factor. Jitter is important because it can cause noise. I think you overestimate how critically hardware folk look at things. They need to be using whole-system thinking and not just hardware thinking. I've seen glorious screwups because the hw folk expected the software to be able to pick up the slack (eg. calibrate or filter out noise when the CPU + OS is just not able to perform the operation), or they err the other way and build an amazingly complex circuit to linearise a sensor when software could do it better and cheaper. As for switch bounce... a few measurements are not enough. I've measured switches with practically no bounce (less than 1 microsecond) that became terrible with age. Mechanical wear as well as grit and contact oxidisation turned a pristine switch into something closer to an oscillator. For critical situations I lean towards analogue sensing switches (eg. Hall effect, inductive, or resistive rubber) that can be monitored better and can, with a reasonable software wrapper, age far more gracefully.
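
      On the software-wrapper side, a minimal sketch of a sampled debounce that does not depend on bench measurements of bounce time staying valid as the switch ages. The tick rate, threshold and read_raw_switch() are all illustrative:

          #include <stdbool.h>
          #include <stdint.h>

          #define STABLE_SAMPLES 5           /* e.g. 5 x 2ms ticks = 10ms of agreement */

          extern bool read_raw_switch(void); /* hypothetical raw GPIO/comparator read */

          /* Call from a periodic (e.g. 2ms) tick. A change is only reported once the
           * raw input has disagreed with the reported state for STABLE_SAMPLES ticks. */
          bool debounced_switch(void)
          {
              static bool    reported = false;
              static uint8_t agree    = 0;

              bool raw = read_raw_switch();
              if (raw == reported) {
                  agree = 0;                 /* input matches what we already report */
              } else if (++agree >= STABLE_SAMPLES) {
                  reported = raw;            /* change held long enough: accept it */
                  agree = 0;
              }
              return reported;
          }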

    • The author fails to explain the biggest downside of doing this. Most wireless modules are doing a lot of time-critical processing and have been tested and certified to perform in accordance with the various wireless specs and protocols. If you add code at an RTOS level, then there's a good chance the newly added code does something to screw up the module. It at least voids all the compliance testing (most of the reason people use modules in the first place). One way to do this safely (and I would argue properly) is to use a JVM. The base function and JVM can be certified and tested. Then any application running on the JVM is just "data" and does not impact the real-time code.

    • I just can't see the benefit in having a trillion sensors out there. What can they do that a billion sensors can't? The biggest hurdle to sensors is their cost. Even at 20c each, a trillion sensors would likely not provide enough useful information to justify their existence. We had pretty much the same with RFID a few years back. Remember when RFID was going to replace barcodes on grocery tags? Just roll up with your trolley/cart and everything gets read without even lifting it out of the trolley. It didn't happen because RFID would have needed to replace barcodes, which are free. Even if you factor in the time for a checkout clerk, a barcode probably costs around 2c to scan and RFIDs cost at least ten times that amount. The same with IoT sensors. The price vs value curve will change, making it easier to justify more sensors. But a trillion sensors? Naah.

    • Surely it is an unprofessional journalistic leap to turn "as little as a single bit flip can cause the driver to lose control" into "the single bit flip that killed". The first asserts that there is a failure path that could convert a single bit flip into a failure; the second asserts that this did, indeed, happen. While I in no way suggest we should set out to deliver poor quality software, it is interesting to notice that software controls seem to be subjected to higher expectations than the equivalent mechanical controls.

    • As a person working on "professional" level Linux embedded systems, I see this a lot. People will cheaply put together something to demonstrate an idea using a R-Pi or equivalent and then wonder why they get a huge quote to convert the idea into a marketable product. As an example, an R-Pi based "solution" runs from SD card which raises a whole lot of issues about the long-term reliability of the storage and corruption due to power failure etc. A properly designed product will not have these issues. Then there are all the other little features such as mechanical robustness, continuation of supply, power supply tolerance,... All these make a huge difference between a properly designed product and a hobby platform.

    • MBed etc are a fine way for hobbyists to get their first embedded ride with training wheels on. They are completely awful for anyone doing real work for any commercial application. There is no control over the environment. Will they use the same back-end compiler from day to day, let alone over the 10 year lifetime of a product? Heck, will the mbed website still be there in 5 years time? You need to be able to bring everything in-house into a controlled environment: toolchains, bootloaders, libraries... This is one reason I prefer to use open source toolchains etc when I can. And of course if you're dealing with safety critical stuff you're raising it all to a much higher level.

    • All these websites hook in to the same advertising services back-ends. The back-end tracks you from site to site and keeps showing you ads based on what they think your interests are and not on what the hosting webpage is about.

    • There is certainly no one-size-fits-all way to look at clouds and different organisations need to consider different things. Storing code on github seems a reasonable thing to do. Due to the clone nature of git, any one copy is as good as another. There is no "master repository". For most small companies, google docs or some other cloud service will be far more reliable than what a company can do in-house. I know of a few 100+ employee companies that were screwed by a server crash. Their ability to restore from backups was not as good as they thought it was. Any cloud provider would do a better job. Legality is an interesting issue. For years, a fax was considered a legal document but an email or unprinted doc in electronic form was not. That position is changing and, no doubt, the legal status of data stored on clouds will be refined and changed in the future.

    • I can relate to that. Some years ago I did a lot of SCUBA diving. We would put our car keys on lanyards and take them with us. Can't do that with electronic keys. It begs the question: do all these features really bring us more convenience and improve our lives?

    • These smartwatches seem like a solution for a non-existent problem. The example in the photo is really stupid. Someone is going too fast so the watch distracts them and encourages them to take their eyes off the road to tell them the car is going too fast. How stupid is that? BT-LE has a range of 50m or so and NFC only 0.1m or so. That makes them suitable for entirely different applications. Having a car finder over NFC would be pretty pointless. BT-LE might make some sense. Do we really need all these extra functions (eg. heart rate) built in with car electronics? Perhaps it would be better if appliances stuck to their knitting and did not try to confuse their function by adding trivial and often pointless features. I have a digital kitchen scale that has a built in clock. If you turn it on then it won't weigh anything until the clock time has been set. Annoying! The clock function (which I bet nobody uses) interferes with the primary function of the device. Bad design! I suspect automotive electronics that tell you your heart rate and flash you streams of Google ads are going to be equally annoying.

    • I don't think a true pessimist can survive in this game. We need to be skeptical and us older folks have seen it all before. In the 1980s we had rash promises that superconductors would change our world, yet nothing has changed except a few niche applications. Since the 1960s there have been amazing promises about programming & robotics - pretty much nothing. And where's my flying car? Look out at a street scene and apart from people talking into cell phones nothing has really changed from the 1980s or so. Pessimism gets close to paranoia. The silicon could be broken, the compiler could be broken. Distrust everything... But to have any hope of getting anywhere, you need some level of trust that something can be achieved. Without that you will never get anything done. That's where assumptions are very valuable. We start off assuming that the compiler and the silicon more-or-less work - at least enough to get going. We might be conscious that these might be broken, but we tentatively assume that they do work and slowly build our faith with testing. When we're talking about non-critical gadgets (as most internet of things devices are likely to be), then does it really matter if there is high variance on battery lifetime and a small % of units die after 2 years instead of 10?

    • The biggest problem with giving requirements the time they deserve is that there is a catch-22. Everyone is already under huge time pressure and perceives that every day you wait until you start is another day that the end date is pushed out. The way to get around this problem is to start working on some aspects as soon as they are nailed down enough. For example, if you know the device is going to have a touch screen, you can start working on the interfacing long before the actual screen contents are finalised. Why do static testing at night? Some very successful projects I have seen put the static testing right in the compilation steps. The same goes for coding standard checking. Fixing problems before the code even gets run the first time is way preferable to deferring the fixes.

    • Just repartition your hard disk and dual boot. That's what I have done on all my computers and very rarely do I need to use Windows for anything. Most corporate IT folk are used to dealing with Linux since that's what most servers use (and you'll probably find the IT folks have Linux available to them too).

    • I really enjoy engineering and am grateful for living in amazing times and even more amazing developments. I started at 6 or so building crystal sets before I could even read properly. To this day I remember the challenge of making a medium wave aerial coil and being daunted by having to count up to 45 turns. I had to do this all the hard way - on my own - as my parents are both useless with anything technical (my father - a washed-up lawyer - threw away a tape recorder when the batteries went flat). At only 51, I have many years left in me. I sometimes wonder what would have happened if I had been born 100 years ago before electronics and computing. I suspect I would have been involved in some other type of engineering. People who do engineering as a JOB will have a hard life and hate it. You need to be more involved than just 9-to-5. If you are not constantly learning then you really don't belong in this industry or any other technical industry for that matter.

    • "Although the operating systems are stable and change little from year to year..." I am nit sure what OS you are talking about, but the internal Linux interfaces for drivers, file systems, etc change all the time. I generally find that most board/SoC vendors do the minimum to get software running on their devices. It is typically up to the rest to get them going properly and well. Whenever I have written bare-iron systems (ie. those without an OS), I have almost always ended up rewriting all the drivers to get around poorly written code, bugs and pointless abstraction.

    • Not one negative or even neutral comment, when the industry has been giving both Windows 8 and Microsoft tablets a complete hiding. This article is surely a shill at work. "Also, they [PCs] are regularly obsoleted, requiring the user to upgrade every other year or so in order to be compatible with the rest of the world. The tablet ended that tyranny." So you're claiming that tablets won't become obsolete? Dream on! They become obsolete faster than laptops because laptops have now got to a point where even cheap ones have quad cores and oodles of RAM and copious ports where you can extend the hardware. There is far less point in upgrading laptops than there has been in the past [typing this on my 5 year old laptop that still has many years of service in it]. Tablets, on the other hand, are not as extensible and are still in their teen years. They will become obsolete quickly. Microsoft and Intel depend on that obsolescence to keep selling us stuff.

    • Even just a remote run/stop button would be a huge step up from where we are now. I once found myself using both hands to hold probes in place and using a pencil in my mouth to press the run/stop button. A voice UI could work, but only if it works well. Yelling "Start. START! No, not #$%^ volts, I SAID START!!" will get the blood boiling. Perhaps a foot switch would be handy?

    • According to Wikipedia, the USB 3 spec only came out in November 2008. The first USB 3 capable PCs have only been around since 2010 or so and were only commonly available later. While Linux has supported USB 3 since 2009, MS Windows has only supported USB 3 since Windows 8. The Logic 16 isn't all that new, but maybe the latest software is.

    • The main reason Saleae didn't go with USB 3 was that USB3 did not exist when this product was made. The Logic products have been available for a few years now. Perhaps they will make a USB3 one sometime. That LogicPort product looks interesting too. Pity though it is Windows-based. 2k sample depth is also very limiting.

    • I have the older 8 channel version. The best $150 I ever spent on embedded development. I use mine under both Linux and Windows and it works a dream. Sure they can't do everything, but they do enough that it is an absolute no-brainer to give every embedded programmer in the company one of these. BTW: The older device is supposed to only handle 3V3 signals. In general though I find it works fine with 1V8 too. When I have had problems it was easy enough to use a resistor setup to level shift the signals enough to make it reliable.

    • In my experience of dealing with the patent system, it is pretty much rigged to generate a lot of poor patents for the benefit of the patent lawyers and expert witnesses etc. In one case I was involved with, a single expert witness was receiving over $400 per hour + expenses. He lied like a weasel and would manipulate anything he could to try to get the outcome his client wanted. It astounds me that some of these people are not prosecuted for perjury.

    • "here were not marketing managers around in the age of Volta, Gauss, Ampere". Yes there ware. Much like the current batch they oversold their products. Just look at: https://en.wikipedia.org/wiki/Galvanic_bath and other quackery.... not to mention the worst marketeer/showman ever - Edison! Ok, he was a bit later.... IPV6 has enough addresses to give each atom in the solar system approx 10 million addresses. Overkill?

    • You have to be careful how you measure and how you use those measurements. As soon as anyone has their performance/ego tied to measurements they will start to game the system and you will not get the results you wanted. I've worked in places where these metrics got reported all the way through to the division engineering manager who asks why the bug fixing rate has dropped. This gets converted into heat which flows back down to the development team leader. He is put under pressure to make the numbers look better so he is forced to pull programmers off fixing some of the important but complicated bugs (which will take a week or so to fix) to work on the silly little cosmetic bugs that customers don't really care about but can be fixed in a couple of hours. Net result: the bug fixing rate goes up dramatically, but the really important stuff isn't being addressed. What about bugs of omission? By definition they don't take any space so it is hard to tie those to a per-KLOC number. Shipping with bugs is not necessarily a problem either. The best answer I have heard on determining when to release software is to release it when it is a net benefit to do so. It does not have to be bug free so long as the value of the functionality it provides is higher than the cost/hindrance of the remaining bugs.

    • One thing to take into account is the current consumption of the whole solution rather than just the micro. For instance there are some very low sleep current devices with tight Vcc requirements. These need a regulator which can often end up causing far more leakage than the micro itself. Where possible I really like to use parts with really wide Vcc tolerance (eg. some of the AVRs). These parts typically don't need any regulator in low current applications. Low power is seldom a goal in its own right. Instead it is a constraint applied to meet some other goal eg. lower cost, smaller size or ruggedness.

    • Some while back I used a terrible compiler (from HP IIRC) that did volatile on a file basis. If you had just one volatile variable in the file then **all** variables were treated as volatile. Basically what it was doing was switching off the optimisation flag if it saw a volatile keyword and then restarting the compilation. Luckily we have choices these days and most modern compilers do get it right. C is just a tool. A sharp tool. You need to know how to use it.
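
      For what it's worth, here is a minimal sketch of the two places volatile genuinely matters - a memory-mapped register and a flag shared with an ISR. The register address, the "data ready" bit and the names are illustrative, not from any particular part:

          #include <stdbool.h>
          #include <stdint.h>

          /* Memory-mapped status register: the hardware can change it at any
           * time, so every read must really go out to the bus. */
          #define UART_STATUS (*(volatile uint32_t *)0x40001000u)

          /* Flag shared between an ISR and the main loop. */
          static volatile bool rx_ready;

          void uart_isr(void)
          {
              if (UART_STATUS & 1u)   /* hypothetical "data ready" bit */
                  rx_ready = true;
          }

          void main_loop(void)
          {
              for (;;) {
                  while (!rx_ready) {
                      /* Without volatile, an optimising compiler may hoist the
                       * test and spin forever on a cached value. */
                  }
                  rx_ready = false;
                  /* ... handle the received data ... */
              }
          }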

    • Very interesting article, but a few mistakes. A fA is 10^-15 A, not 10^15. 180 pA is 0.18 nA, not 0.18 µA.

    • IMHO the best thing about software simulation is that you are not tied to real time. You can speed up the clock or slow it down. Slowing things down is handy. You can single step debug or add other logging etc that you could not do in a real system. For example, stopping an autopilot control loop on a real plane would cause a crash. In software there are no physical limits. Speeding up is useful too. I once worked on a project which did GPS guidance of agricultural machinery. In the real world, you would have to spend a few hours to test some of the functionality. In software simulation you can speed up the clock and simulate many hours of testing in just a few minutes.
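
      To make the point concrete, here is a rough sketch of how code can be decoupled from wall-clock time so a simulation harness can run it faster (or slower) than real time. All the names here are hypothetical:

          #include <stdint.h>

          typedef uint32_t ms_time_t;

          /* The application code only ever asks this interface for "now". */
          typedef ms_time_t (*get_time_fn)(void);
          static get_time_fn get_time;
          void clock_init(get_time_fn fn) { get_time = fn; }

          /* On target this would wrap a hardware timer. In simulation the
           * harness just advances a counter as fast as the host CPU allows. */
          static ms_time_t sim_now;
          static ms_time_t sim_get_time(void) { return sim_now; }
          static void sim_advance(ms_time_t delta) { sim_now += delta; }

          void run_simulation(void)
          {
              clock_init(sim_get_time);
              /* Simulate an hour of operation in however long the host takes. */
              while (get_time() < 60u * 60u * 1000u) {
                  sim_advance(10u);                    /* 10 ms per iteration */
                  /* control_loop_step(get_time()); -- hypothetical app code */
              }
          }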

    • "software was the pacing component" It will probably always be that way. Software is always viewed as the stuff that is designed to be changed. You can't change hardware or mechanicals, but you can, generally, always change software so it is almost always seen as the lowest cost thing to change. Whenever a problem arises, the first approach is always to see if it can be addressed in software. Heck, I've even fixed lubrication problems in software.

    • "However, modern implementations of old instruction sets could infringe on techniques that have been patented more recently by ARM or other companies – so I don't think the age of the instruction set is a water-tight defense." This is true not just for old instruction sets, but for completely new instreuction sets too. There are many people who think that software patents should be treated differently because they are just algorithms. But so too are instruction sets, ladder logic chains, or mechanical interlock logic. Software patents are not unique in stifling innovation. All patents do. Ultimately there is no benefit in ARM pursuing these experimenters. People making real product will go for the licensed ARM cores for power, cost and speed reasons. Most, if not all, people developing their own ARM-compliant cores still go to ARM for validation and testing. The last they need is for their processors to have some subtle problem that renders them incapable of executing mainstream ARM code (eg. the Linux kernel).

    • As a file system author, I have read a lot of file system code. I am unaware of any file system that uses the 100,000 lines of code that the authors claim - let alone that being anything like typical. Some numbers that I see in the Linux file systems: nfs: 56 kloc, ext4: 43 kloc, jffs2: 18 kloc, logfs: 9 kloc, yaffs2: 17 kloc.

    • When will your comment system be fixed to accept code snippets? A comment system on a website devoted to programming that cannot accept greater than signs is broken..... Talking about code quality is hypocritical.

    • While I find articles like this interesting, these techniques and definitions are often more theoretical than what is useful in the real world. It is incorrect to say that a defect is something that is created by a programmer. Defects are frequently, even mostly, caused by omissions (ie. code that the programmer did not write and thus has not created). A typical example would be forgetting to handle corner cases or handlers for illegal input. Was Y2K really a software failure? One could argue it was a usage failure. The software was designed for use from 1900..1999 and was not intended to be used after that. A car analogy: if a sports car gets stuck in mud, do you grumble that the designers failed to make a 4-wheel drive with 12 inch clearance or do you tell the driver off for being an idiot?

    • "The exam ...could ... provide job security..." Any capable person should be able to get and keep a job. "job security" means that that there is some regulation preventing others from competing and you get to keep your job even though someone else should have that job. I want the surgeon that cuts me open to be good. I don't want one with a "job security". The same goes for safety critical engineers too.

    • Writing virtual functions in C is pretty much the glue that holds most OSs together. Just look at the Linux, BSD, VxWorks, or other OS source to see thousands of virtual functions written in C.
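
      The pattern is just a struct of function pointers. A minimal sketch, loosely in the style of the ops tables OS drivers use (this is not any real kernel API, just the shape of it):

          #include <stddef.h>

          struct device;

          /* The "vtable": one implementation per device type. */
          struct device_ops {
              int  (*open)(struct device *dev);
              int  (*read)(struct device *dev, void *buf, size_t len);
              int  (*write)(struct device *dev, const void *buf, size_t len);
              void (*close)(struct device *dev);
          };

          struct device {
              const struct device_ops *ops;   /* points at the driver's table */
              void *private_data;             /* per-driver state */
          };

          /* Callers dispatch through the table and never care whether the
           * device underneath is a serial port, an LCD or anything else. */
          static inline int device_write(struct device *dev, const void *buf,
                                         size_t len)
          {
              return dev->ops->write(dev, buf, len);
          }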

    • I don't think you are disagreeing with me at all in your last post here. What I said: "asserts are more useful is in checking for algorithmic consistency" and "There are times when it should be used (discovering algorithmic errors during debug), but there are times that other checks should be used instead (eg. range checks on runtime code)" I think that is what you are trying to say too. Another way to say this is: * assertions should be catching bugs in the code (ie. what I call algorithmic failures). * assertions should not be doing things like range checking on data (222 degrees). That is not a code bug. It is a sensor/input bug that the code should expect and handle by displaying *** or something. That should not be triggering an assert.
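
      In code, the split looks something like this (a sketch with hypothetical names): assert() for "this can only be true if the code is wrong", and a plain runtime check for data that can legitimately go bad:

          #include <assert.h>
          #include <stdio.h>

          #define TEMP_MIN (-40)
          #define TEMP_MAX 125

          /* Algorithmic invariant: the lookup table must be sorted. If it is
           * not, that is a code bug, so assert is appropriate (and it gets
           * compiled out of release builds). */
          int table_lookup(const int *table, int n, int key)
          {
              for (int i = 1; i < n; i++)
                  assert(table[i - 1] <= table[i]);
              /* ... binary search for key ... */
              (void)key;
              return -1;
          }

          /* Sensor input: 222 degrees is not a code bug. Handle it and keep
           * running rather than asserting. */
          void display_temperature(int celsius, char *out, size_t out_len)
          {
              if (celsius < TEMP_MIN || celsius > TEMP_MAX)
                  snprintf(out, out_len, "***");   /* degraded but alive */
              else
                  snprintf(out, out_len, "%d C", celsius);
          }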

    • I disagree entirely with your statement that we are either in control or we are not. We might program digital CPUs but we interact with an analogue world which makes the whole system analogue. For example, a mechanical element can be sticking, or we might be getting low battery voltage, causing the servos to act slower. We are still in partial control, but not full control. There has been some degradation. Many systems will just give up at that point and hand over to the operator. Such transitions can be incredibly dangerous.

    • Now what happens if that temperature of 222 degrees comes from a remote sensor? The page will just end up in a redraw loop which might bring down the server. Asserts should never be used as guards on data entering the system or on what is going to be displayed. -32000 feet could be handled by a sanity check that instead displays ******** and resets the altitude sensor but keeps operating, with some indicator to the pilot that the system is degraded. That is far superior to taking the normal assert behavior. I expect the pilot would prefer to continue flying with ******** on the altitude than the flight system shutting down and letting everyone plunge to their deaths with "Assert in alt_disp.c:line 223" on the screen. Asserts never take corrective action. They just terminate or log. That is all they can do. They should be used for the correct purpose and should not be used all the time. I have seen an embedded Linux system get into a reboot loop due to a damaged sound driver chip causing the sound driver initialisation code to assert. It would have been far better if the assert was not there because the sound driver would have just failed, ahem, silently. The normal operation of the system would have continued, just without sound. Similarly I have seen marine steering systems that just shut down if steering feedback goes a bit out of range. That can be fatal. Rather just warn the operator that the steering is degraded but continue to operate. Assert is far too heavy a hammer to use in all the places it is used. There are times when it should be used (discovering algorithmic errors during debug), but there are times that other checks should be used instead (eg. range checks on runtime code).

    • OK, I'll bite.... Many/most modern CPUs and compilers provide various traps that will catch those out of bounds pointers and divide by zeros and trigger various exceptions. These can be trapped and a backtrace (or whatever) generated. This does not need any special assert code etc. Where asserts are more useful is in checking for algorithmic consistency. But still, asserts do not replace other checks for at least the following 2 reasons: Asserts are typically only enabled during debugging and turned off in release builds. Thus they would not catch the problem with Sunday's weather. Asserts typically terminate the program. That is not a very healthy thing to do in most server systems or embedded systems. For example, should the whole web site crash because of the 222 degrees? Do you want your anti-lock braking to reset if it gets an out of bound error? Sometimes just limping along is better....

    • Sorry, GM are doing this to sell vehicles. As transport, all cars look and function pretty much the same. Manufacturers are always looking for low-cost ways to add perceived value to get purchasers to prefer their product over competitors. They know that drive time is down time to many people and that many people want to fiddle away their time on FB. They can't do it legally on their mobile phones, so using OnStar gives them a legal way to do this. It has been well documented that hands free phone calls are not significantly safer than hand held. Law makers are not data driven. Ultimately though, safety comes down to individual responsibility. When we try to use technology to replace individual responsibility we generally fail.

    • A HAL that has poor abstraction is clearly just broken. In the embedded world, that is - or at least was - often caused by the low levels being written by EE types rather than software types and EE types typically have very little understanding of good abstraction.

    • "At least one app displays the scene ahead, so those whose heads are buried in the e-world can see where they are going. " Does that really help? When you're watching something then you're ignoring something else that is in your cone of vision. A classic case of this is the "Gorilla on the basketball court" experiment. You can notice this yourself when driving in the rain. Although the wipers are crossing your vision every second, you don't "see" them.

    • Is anyone surprised that California is not on top as a percentage? Perhaps on a population basis it would be. Silicon Valley is just a tiny part of California. California has more lawyers than STEM people, same too for low-wage agricultural workers. I am not surprised that "physical and life science" rates so high. Healthcare is the fastest growing industry in the USA. It is unfortunate that healthcare is a consumptive industry rather than a productive industry, ie. it should be a healthy economy that can afford to provide extensive healthcare rather than extensive healthcare providing a good economy.

    • An 8051... Really? I'd rather go with something that has some debug capability. AVR, PIC, whatever... but I guess there are cases where an 8051 is cheaper and just 1 cent is worth chasing on huge volumes. People forget that Moore's Law is a wave you can ride both ways: you get more for the same price - everyone knows that; you also get the same for cheaper which means that every year we can put micros into places we could not before. Of course much of that software is really simple - just basic state machines that can be exhaustively tested. Heck, some of those low end AVRs don't even have RAM - that limits what you can do, but also limits what you can screw up.

    • Vehicles have partitioned system buses for safety reasons, making many of these fears moot. Clearly you don't want Android running any of the body/safety electronics or the engine electronics. It would be unacceptable for a software crash or long boot delays after a glitch to prevent the engine from running properly, brakes from working or doors from opening. But those are not factors in an infotainment system. If the radio stops working I can still hit the brakes. If the GPS/route adviser takes 30 seconds to boot, that does not make the car unsafe. I don't buy the threat of Android not being able to handle CAN. Firstly, CAN high level protocols - such as J1939 - are easily processed by Android/Linux. The traffic levels are low: 1700 or so CAN frames per second. I have reliably processed CAN on a WinCE machine running at 60MHz and on older Linux systems. Secondly, even if the CAN on the infotainment system was hacked to go crazy, the link through to the critical CAN buses is (or should be) through a gateway. That gateway is responsible for ensuring that denial-of-service etc do not happen on the critical buses by limiting the type and frequency of message that can be passed from the infotainment system to the engine management system or body electronics. That gateway serves a similar role to a firewall in the internet. If you don't have a proper firewall, then you're screwed.
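
      A sketch of the kind of policy such a gateway might apply - a whitelist of CAN IDs with per-ID rate limiting, so even a compromised infotainment unit cannot flood the critical bus. The IDs, limits and names here are all hypothetical:

          #include <stdbool.h>
          #include <stdint.h>

          struct gw_rule {
              uint32_t can_id;       /* ID allowed through to the critical bus */
              uint32_t min_gap_ms;   /* minimum spacing between forwarded frames */
              uint32_t last_ms;      /* when the last frame was forwarded */
          };

          static struct gw_rule rules[] = {
              { 0x18FEF100u, 100u, 0u },   /* e.g. a status request, max 10/s */
              { 0x18FEE000u, 500u, 0u },   /* e.g. a display setting, max 2/s */
          };

          bool gateway_allow(uint32_t can_id, uint32_t now_ms)
          {
              for (unsigned i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
                  if (rules[i].can_id == can_id &&
                      (now_ms - rules[i].last_ms) >= rules[i].min_gap_ms) {
                      rules[i].last_ms = now_ms;
                      return true;
                  }
              }
              return false;   /* everything else is dropped */
          }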

    • Jack, I still have some catching up to do at only 0x33, but I really think that us oldies have a huge advantage. When we look at a new technology, we have a framework of experience and knowledge to help understand it. The young-uns, however, just see new shiny stuff with no context. eg. While everyone goes nuts about this new "cloud computing" thing and doesn't know what it means, many of us older people recognise that - for the most part - it is really just the thin client stuff of the 1970s delivered over www instead of X. Sure there are some differences that we need to be aware of, but that immediately gives us a handle on what it means. I think that allows us to grasp new technologies and their implications faster and better. Anecdotally there are also plenty of places that look favourably on older and more experienced employees. I've never felt I've been given the "you're too old" line.

    • The interpretation of "runtime errors" is pretty open and it is perhaps naive to think that a switch from C to Ada is going to automatically minimize errors. If an Ada system gets a bounds violation which triggers an exception that system is still failing to provide its required function. It is still experiencing a runtime error.

    • If embedded.com really wants to engage the programmer community, then at least get a comment handling system that handles code snippets properly! Aaaargh!

    • This sounds very dangerous to me. C only has very rudimentary type checking and this does away with some of it. Automatic casting and such is Very Bad, IMHO. It is far preferable to do something explicit, easily achieved with C's macro system. eg.
          void lock(Lock *);
          #define LockContainer(x) lock(&((*(x)).Lock))
          struct Foo { int stuff; Lock Lock; } *f;
          LockContainer(f);

    • I somewhat dislike "inheretance by composition" that places the "base object" as the first part of the "derived object" because: a) It makes some assumptions about layout. b) It can't support multiple inheretance (not that MI is a good idea...) I tend to use offsetof() and container_of() as per the Linux kernel since that allows more portable and flexible placement of the "base object" parts.

    • Leave out the app installer etc and people can't add Angry Birds or fart apps to your system or modify the apps in any way. It is easy enough to lock down an Android system. Software is hard, but the Android framework is surely no more complex than many other frameworks. It is getting easier to find programmers knowledgeable about Android than, say, Qt.

    • Car analogy: Sports cars are rubbish because they are useless offroad. They are also no replacement for a 5 ton dump truck. Similarly, it is easy to slag off software systems because they don't do everything that you want. Android is good for what it is good at, but it has not been designed to do every embedded Linux task. Luckily software can often be duct-taped together quite easily, giving us a system with a sports car front end and dump truck back end. Eagerly awaiting the next installment...

    • I remember back in the day people writing ++i rather than i++ because the Borland C compiler generated different (and faster) code for ++i. However, any compiler worth its salt should figure out that a free-standing i++ means exactly the same as ++i and should generate the same code for both. Of course when you throw your own data types into the mix, all bets are off.

    • While it is indeed helpful if a company encourages and supports employee training, it is ultimately part of being an engineer to spend part of your own time keeping up to date. This is the industry of change. If you are not reading and not changing then you really don't belong. The worst companies for training are those that send the managers away on training courses. The managers come back all fired up with buzzwords and are often not able to convey what they have learned and often have insufficient knowledge to implement any change or get any credibility with the engineers.

    • Can't agree more. Every different job you work exposes you to different ways of doing things: both good and bad. Stuff you just can't learn in textbooks or in large, well-established and, dare I say, stagnant companies. During my twenties I had 5 different jobs plus moonlighted for three other companies. I learned more doing that than at any other time of my life.

    • When filling out a structure initialisation it is better to use the form: static circle_vtbl the_circle_vtbl = { .area=circle_area, .perimeter=circle_perimeter }; That way order does not matter.

    • Jack, as always you dredge up some interesting reading and there are certainly some there I will start to read. One you seem to have missed is "Test Driven Development for Embedded C". Is MISRA really that well adopted by industry? I certainly have not encountered its use in the real world. Like any set of rules/suggestions it needs to be taken with a pinch of salt and read with the wisdom of when to use and when to ignore. Many of the MISRA rules are too restrictive to be used all the time.

    • Contracts are definitely a good way to ensure that simple functions actually do what they are supposed to. How are more complex functions handled? I guess the trick is that pre and post conditions handle certain classes of bug really well and others are better handled with other mechanisms such as unit testing.
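
      In C the simple end of this can be sketched with a pair of macros (REQUIRE and ENSURE are hypothetical names, not from any particular contracts library). That covers the simple functions nicely and, as you say, leaves the more complex behaviour to unit tests:

          #include <assert.h>
          #include <string.h>

          #define REQUIRE(cond) assert(cond)   /* precondition: caller's obligation */
          #define ENSURE(cond)  assert(cond)   /* postcondition: function's promise */

          /* Copy at most dst_len - 1 characters and always terminate. */
          static size_t bounded_copy(char *dst, size_t dst_len, const char *src)
          {
              REQUIRE(dst != NULL && src != NULL);
              REQUIRE(dst_len > 0);

              size_t n = strlen(src);
              if (n >= dst_len)
                  n = dst_len - 1;
              memcpy(dst, src, n);
              dst[n] = '\0';

              ENSURE(strlen(dst) < dst_len);
              return n;
          }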

    • The biggest problem is that UI design is often fad driven and the engineers want to play with all the cool toys, forgetting how the product really needs to work. Similarly there are many GUIs that have cute actions that provide no benefit. An example is the "wobbly window" effect that distorts a moving window as if it were jello. No function at all - just the programmers showing off their pixel manipulation skills. As an extreme example, I have worked with some GUIs in high vibration environments (eg. rescue boats pounding through heavy seas where the driver is firmly strapped into a seat mounted on shock absorbers). None of that cute gesture stuff will work there. People also try to pack in far more functionality than is useful. I have some kitchen scales that have a built in clock. They will only weigh something after you have set the clock. Aarrggh!

    • While it is important to have enough CPU grunt, I fear that people can get far too hung up on performance metrics when selecting a CPU/SBC. Give them a super fast CPU and they will compensate by writing sloppy code. "Don't worry guys, the CPU has a bazillion MIPS".

    • I'm an embedded consultant working remotely and I am finding that geographically distributed teams are increasingly common. All my clients are remote and most are in different countries - often in different time zones. I do visit those within a 100km radius on occasion, but I never have to visit the overseas clients. We typically share code using git (eg. github.com) or I VPN into the client's servers. github integrates a wiki and an issues tracker on a project-by-project basis. Google docs makes for a great way to share binaries, data and unmanaged docs. Communications and setting expectations are vital. email, skype et al are very imperfect tools. If you get face to face then use the opportunity to get to know each other. That sets a firm basis for further comms. One really hard thing is building customer confidence remotely. I find it really helps the customer feel that you are serving them well if you hand out your home phone number. If you have timezone issues, then offer to get up at 3 am to make the conference call.

    • Ada lacks C's killer feature - the macro pre-processor. As the responder writes, while C lacks modularity, modularity is implemented by convention (header files etc). The correct usage of header files etc is just part of the culture of writing C. The same goes for the use of valgrind etc to do runtime verification. I am sure Ada can protect against certain classes of bugs, but we must not ever get complacent about what a language can do for you. There are many classes of bug that no language will ever catch. It concerns me that many people lay all the blame at the feet of C because it is an easy scapegoat. If they went through all the pain and costs of rewriting in Ada (or some other language) they would be disappointed to find that their code still has bugs.

    • The link wants 10kB/sec. Is that kbytes or kbits?? Either way that would take a big slice out of a J1939 bus. I guess you could perhaps use a limited feature set and load the bus less.
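
      Rough arithmetic, assuming the classic 250 kbit/s J1939 bus and 8-byte frames (each roughly 130 bits on the wire once headers and stuffing are counted): 10 kbytes/s is about 1250 frames/s, or around 160 kbit/s - well over half the bus. If it is 10 kbits/s it works out to roughly 160 frames/s, around 20 kbit/s - less than 10% of the bus and far more tolerable.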

    • I'm not so sure tools like this would really assuage the objections of the people worried by RTOS impact on system behaviour. Very few people are concerned about how the system performs on the bench when everything is flying straight and level. What the nay-sayers are objecting to is how an RTOS might perform under unplanned conditions. IMHO, they are right to do so. I have seen far too many system collapses due to unplanned load conditions. A tool like this will only provide a snapshot of the system at the time the test was run. That does not provide any info about the system performance when bad things happen. Like any tool, it is only as good as the operator. Some extra visibility into the system is useful in the right hands, but the sceptic in me thinks we'll also see a whole raft of people gaining false confidence from running a tool like this. It is actually pretty easy to get this kind of data from most RTOSs in text form. Add a gnuplot script and you have it nicely graphed.

    • C is certainly the language we all love to hate, but C has one killer feature: the macro pre-processor. Until Ada and other pretenders for the crown grow a proper pre-processor language, they will never get very far. The pre-processor is especially useful in embedded and OS work - mainly for reasons which Andrew raises so well. The only other language that comes close is Forth. Forth has the ability to modify the language on the fly.

    • I'm not sure that cognitive bias is really a problem, so long as we are mindful of its limitations and are open to other methods too. Cognitive bias is built up through experience. It is a set of rules that have served well in the past. Quite likely new events we encounter are addressed by those biases. The scientific method is structured and costly (even if only in terms of cognitive fatigue). The scientific method is, however, far more rigorous and will likely yield results when cognitive biases will not. Thus, when debugging, it is normally most effective to give cognitive biases a chance first, then switch to the scientific method if that fails. For example, when debugging, our cognitive bias tells us that a newly encountered bug likely has something to do with something we changed recently. Having a look at the source control diff will likely help you find the bug. If that does not work, then start playing with the scientific method.

    • "Well, even embedded Linux has some form of package manager. Usually the whole thing resolves to a: apt-get install "package" ". Not so. I've worked on many embedded Linux projects and none used any package management in the delivered software. While some embedded Linux systems do use package management, most do not. Most embedded systems run a predefined set of software with no option for the user to add more packages other than by upgrading the whole system. The exception is in tablets and such but these are really mobile computing platforms and not embedded systems.

    • "When Torvalds created the Linux operating system and made it freely available in 1991, his goal was simple: an open source alternative to the dominant commercial OSes such as Unix and Microsoft Windows in the mainframe and desktop computer market" Absolutely wrong. Linus started tinkering with Linux for his own personal education and curiosity. When he released Linux in 1991 it was so that other like minded people could tinker collaboratively. He had absolutely no intentions or expectations of Linux being an world dominator or alternative to commercial OSs. That came later...

    • Assembly language is the best for certain operations. For example there are many instructions in most CPU architectures that do not have any C mappings (eg. the rotate instructions in x86). Assembly lets you exploit instructions that C can't. Make sure to wrap these code bodies up in special functions so that the main body of your code is portable. For general purpose code, however, C wins out over assembly for at least the following reasons: 1) Portability: These days a body of code is a huge investment and you need to be able to redeploy functionality on different processors, different RTOSs,... Assembly breaks that. 2) No matter how good a programmer you are, C compilers will generally do better optimisations than you can sustain for hours and days on end. This is particularly true with multi-register RISC processors. 3) Assembler makes it harder to test code. I tend to develop code in test harnesses on a PC then plug it into the target. Can't do that with assembler. About the only place I can see where it might make sense to use assembler alone is on really tiny projects where the algorithms and code are very tightly tied to the hardware. Small projects on PIC10 or similar come to mind.
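
      As a sketch of what I mean by wrapping assembly: keep the one instruction C cannot express inside a small function, with a portable fallback, and everything that calls it stays portable. The GCC/x86 inline asm branch below is purely illustrative:

          #include <stdint.h>

          static inline uint32_t rotate_left(uint32_t value, unsigned count)
          {
          #if defined(__GNUC__) && defined(__x86_64__)
              /* Use the x86 rotate instruction directly. */
              __asm__("roll %%cl, %0" : "+r"(value) : "c"(count));
              return value;
          #else
              /* Portable fallback; most modern compilers recognise this idiom
               * and emit a rotate instruction anyway. */
              count &= 31u;
              return (value << count) | (value >> ((32u - count) & 31u));
          #endif
          }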

    • Pure virtual functions are the only way to achieve clean abstract interfaces in C++. While the shape class is a bit contrived, consider something more concrete to us embedded developers. Look at device drivers which implement open() close(), read(), write(). Every different type of device has a different set of implementations (eg. write for a serial port is very different to write for an LCD).

    • You ask why? Surely the answer is obvious: lock in. These days, particularly with the vast range of Cortex-M based ARM parts, it is getting easier and easier to switch between silicon from different vendors. Switching from TI to NXP to ST etc is almost trivial. They are all about the same. Loyalty is a thing of the past and designers can act like college students. So what these vendors do is try to build loyalty, of sorts, by encouraging people to use tools which are only licensed for their processors. This makes it harder to move to other platforms.

    • C's "killer feature" is surely the macro pre-processor. While this is valuable in other settings, it is particularly valuable in embedded. We might bitch that other languages are better than C, but none have the macro pre-processor. I must say I prefer hex to binary, but then I am mildly dyslexic. 10110101 is pretty much impossible for me to read unless I mask it off with a finger and read it digit by digit. Doing the mental conversion to/from 0xb5 is much less effort.

    • And Microchip will want a cut of that 39c for their ridiculous patent for microcontrollers having a lower pin count than their bus width. http://www.google.com/patents/US5847450

    • "we have not seen ... or real increases in engineer’s income" I guess there are many factors in this: 1) What do you consider the starting point? If you use, say, 1990 then there probably have been some real increases. If however you use, say 1999/2000 as your starting point then you will not. Far too many people got drawn into the industry by the 1999/2000 bubble when everyone got a $300 chair and a leased $100k car as a sign-on bonus. That was a bubble and not the norm and should be discounted. 2) Has your work increased in value? Why should you expect real increases if you are not generating more value than you did before? 3) Globalization has reduced prices. Just as globalization has decreased the real price of sneakers and TV sets it has also decreased the price of engineers. Those that complain about this often only want their sector protected. The best way to find a job is through direct networking with engineers who work for potential employers. That bypasses most of the screening processes.

    • Pyeatte: Don't give up, just try a different approach. I'm over 50, a self employed contractor/consultant and I've never been busier. Find an open source project along the lines of what you want to work in and get involved. There are many projects out there. OpenUAV, Arduino,.... This helps you build experience and street cred and puts you ahead of the competition.

    • There can be both too many and not enough engineers... Too many crap engineers, not enough good ones. The efforts to try to make STEM "cool" by having rappers promote science (http://rhymenlearn.com/) might help the masses get more knowledgeable, but it surely does not help get the right people into STEM careers. Real STEM careers are not rap songs. People who are good at STEM are naturally curious and will find their way to STEM without having it dressed up to be what it isn't. That natural curiosity also makes you future-proof. Anyone in STEM has to be a natural lifetime learner. It does not matter what you learned in college/university because chances are it was almost obsolete by the time academia learned about it. All that counts is the right mindset.

    • I'm post-50 and I think the business has changed considerably in the last 30 years. Back then, you could actually learn a considerable percentage of the known universe of computing during an undergrad degree. Now you can only scratch the surface. People coming out of university these days still have not cut their teeth and take a long time to actually be productive. Back then universities were sources of knowledge and did bleeding edge research. Now anyone can learn pretty much anything they want online and industry and even hobbyists lead the way. When I interview graduates I pretty much ignore what they did in university courses - I'm more interested in their extra-curricular learning. Back then the "old farts" of 40 often had no formal software development experience. They were often technicians, EEs and others who had shown some flair and were sent off to get a few weeks of training in COBOL or FORTRAN. Very few knew how to construct software or had any theoretical knowledge and didn't know what O(n) means. Not surprising that they got blown out of the water by the young-uns. Those youngsters from 30 years back (the "old farts" of today) have both wisdom and the theoretical background. That puts them ahead of anyone else. They cannot be compared with the "old farts" of yesteryear. Age is no barrier to computing. It does not require physical fitness - just a sharp mind. A few years back I helped out a team of people porting Lejos to ARM. Being in my mid-40s at the time I was the youngster. The oldest was in his 70s. I can't see any reason why I won't keep programming well into my 70s either. I think experience is valued and have no problem finding people that do value it. However, if you try chasing commodity jobs posing as a commodity developer (ie. just a lump of meat on a corporate spreadsheet) then don't expect to be valued.

    • To be fair on presidents, Pres James Garfield (1881) is credited with a proof of the Pythagorean Theorem. I would settle for having people (voters and politicians) with any sense of critical thought. A good understanding of statistics is probably far more important than an understanding of engineering. Engineering decisions very seldom influence political policy but statistics often do. Unfortunately politicians do what sounds good to the voters, not what is good for the voters. Take for example debt: the right thing to do is to massively rein in debt and live within the country's means - or at least make a plan to achieve that. But no - that would be extremely unpalatable to voters who would rather continue to spend up large and damn tomorrow. Just charge it! Until the voters are sufficiently educated to make good decisions, you won't get any good politicians either.

    • McSquared: "t, it is difficult to read an LCD". Have you tried fiddling with the contrast? That often helps. I too liked the LEDs but they do have two major limitations: power - the LCD variants will still run for hours once the battery warnings are given. - multi-line and graphing. Much better with LCD Perhaps a better compromise is a backlight?

    • I think the effort in the design was an attempt to get close to the old look and feel but at modern crappy-button costs. While the 35S is a huge step up from the crappy buttons, it isn't quite as good as the old buttons. I can tell the difference between the buttons on new HP35S and my HP19 or any of my HP48s with my eyes closed.

    • "My family members take the attitude that it's easier to find another calculator than to learn RPN." Shame on you! Depriving your family the advantages of RPN is borderline child abuse. My kids were forced to learn RPN at an early age. Once they learned it they love it! My family have a wide range of HPs. Mainly various HP48s but I also have a 35s and HP19 (clamshell) and an HP11C. What drew me to the 35s was the return of the old-style buttons. The new buttons eg Hp49 or 48g2 are horrible. Talking of toughness... Anyone remember the old HP Journal articles during the 1980s where people would write in about how their eqipment got burned in an office fire, or driven over in a parking lot, and still worked? My HP29C calculator survived many falls onto concrete from 5 to 10 metres. It finally failed when it got dunked in sea water and corroded before I noticed.

    • Ok, I'll bite... In the Good Old Days (for some), it was easy enough to get a programming job in the USA. In the late 90s pretty much anyone with a pulse could get a job. On top of that, the remuneration was obscene and was never sustainable. That was a bubble. As for quality.... there might have been a fall in software quality - though I am not convinced. If there was, then why should that have been caused by outsourcing? Many other things have changed over the last few years too. The worst software degradation I have seen over the last 5 or so years is in Apple products. As far as I am aware, that is all written in the USA. I have worked with people from all over the world. Some are good, some not. Location certainly does not play a factor. There is nothing in the water or gene supply that makes Americans superior to anyone else. If you are good to excellent then it is still very easy to get a job in the USA.

    • This news comes at the same time as the news that TI is retreating from the phone market. That is good news to anyone trying to use OMAPs for the industrial space where, unlike cellphone space, we expect parts to be available for a long time.

    • "In the early years of this century Western engineers were jarred by waves of outsourcing. A lot of good people lost their jobs to lower-cost providers." or "In the early years of this century engineers in other parts of the world were finally recognised as being capable. A lot of good people gained jobs previously limited to over-priced Western providers." It's all in the perception.

    • You've got to be kidding... Unless there is a way for people to replicate or locally host the cloud, what organization is going to put their code in a vendor-specific cloud? What happens when this goes down for maintenance? Uncle Murphy says that will happen when you're pulling a weekend rush job to fix something. What happens when you want to dust off an old project in 5 years time? Will everything still be there?

    • Thanks for that in depth analysis. The only benefit of mbed seems to be how easy it is to get going, and that is nowhere near as important as it used to be. While it might have been a big deal to get an IDE up and running in the Bad Old Days, that is no longer the case. These days it is really easy to set up open source IDEs. I recently set up a full IDE/toolchain for an STM32 Discovery board. It took me 20 minutes over a slow broadband connection. No restrictions, fully open source.

    • From the description I assume there is no debugger - which many people will find very limiting.

    • Like most engineering decisions... it depends. A web interface is certainly an excellent interface for some products - especially headless products like printers or such. But imagine something like a hand-held two-way radio. A web server would be terrible for that.

    • Like Detmoon I would be stunned if automotive software engineers are not already doing this. Virtual prototyping - or whatever you want to call it - has so many benefits. For example: * You can run it without blowing up expensive hardware. * You can do testing before the hardware is even available. * Tests can be run faster or slower than real time. Tests that would take hours on real engines etc can be run in minutes. Equally, in software you can "stop the clock" and single step through code - something you can't do when attached to most real-world devices.

    • This is a single report written in 2000, based on data collected during the previous 3 decades (ie 1970-2000). Is that really a valid body of evidence for making comments on the current state of the industry? Much has changed. What was once cheap is now relatively expensive - and vice versa. Methodologies and tools are vastly different and the problems and system complexity have increased dramatically. The rules of thumb are likely different too. CMMx has nothing to do with quality, fitness for purpose or safety. As a wag once said of ISO9000, you can still make concrete life jackets but you can guarantee they will drown you consistently.

    • Lies, damn lies and data sheets! Jack, you are correct in your observations that designing in a femtoamp micro will contribute little to a circuit's power consumption. Unfortunately our whole industry is drawn to shiny magic bullets, from micros to languages and debugging equipment. We are by nature optimistic and still hope even when we've seen the hollowness of these promises for years. I am curious why you would want a 32 bit micro for deep sleep. 32-bits is faster than 8-bits most of the time, except when you're just waiting. If you want to deep sleep a 32-bit circuit then just use a 50c 8-bitter as a wake-up micro. Design is the art of compromise. To achieve any one goal you typically have to give up on another. To achieve very low power you need to give up something else. The mix of properties beneficial in a 32-bitter probably makes very low power very hard to achieve.

    • Jack, I agree with your basic statement: it is rather amazing that devices like these exist at all. Similarly, it is pretty amazing that even simple insects like bees exist. But do I queue overnight to see another bee? No. Having seen all the previous iPhones and other similar devices it is really hard to get excited over the latest phone from any vendor. Is it really such a huge step up? Having once worked for Apple a few years back and seeing the commitment from both developers and management, it astounds me how poor the software has become. Mainstream Apple apps (eg. music player, podcast and browser) crash for me on almost a daily basis.

    • I see others feel the same as me. The best $150 I ever spent: http://www.saleae.com/logic I have one of these. Handy little device that I can carry around in my laptop bag without even thinking about it. It doesn't do everything, but it does enough for me to suggest every embedded programmer have one.

    • Fully tested? Really?? Just executing each line of code is **not** fully tested. You surely also have to test different states. Cyclomatic complexity is interesting. When you call a function, should the complexity of that function be included too? Even this seems inadequate though. A function with a single path (ie. v(G) = 1) might behave very differently for different inputs (eg. rollover of an integer) and a single test won't cut it. v(G) does help drive home a good design approach of refactoring code to simplify code flow.
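
      A tiny sketch of the rollover point: this function has a single path, so one test "covers" every line, yet it silently misbehaves for large inputs:

          #include <stdint.h>

          /* Average of two readings. v(G) = 1: no branches at all. */
          static uint32_t average(uint32_t a, uint32_t b)
          {
              return (a + b) / 2;   /* overflows when a + b exceeds UINT32_MAX */
          }

          /* average(10, 20) == 15 passes and executes every line, but
           * average(3000000000u, 3000000000u) wraps the 32-bit sum and
           * returns 852516352 instead of 3000000000. */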

    • It seems rather strange to disallow a monopoly on ideas expressed in software when ideas expressed in many other forms (eg. mechanical) can be monopolised through patents. Why should software be treated differently? It makes no sense. Either give software IP the same standing as mechanical IP or do away with all patents etc. For example, it makes no sense to allow the reverse engineering of software while allowing Apple to patent their connector.

    • No matter what this specific case is, it is important to understand that there are certain types of bug that Ada can trap and some that it cannot. Just picking through my mental list of different bugs, they can be classified in a few ways: 1. Software that was not designed or specified right so that software implemented according to spec would not function correctly in the real world. 2. Algorithmic errors. Poorly implemented algorithms. A sort that doesn't, for example. 3. Oversights of the metres vs feet variety, or poor calibration/filtering etc resulting in incorrect calculations. 4. Timing/locking issues. 5. Incorrect hardware handling. For example using edge triggered instead of level triggered interrupts resulting in peripherals going to sleep and product failure. 6. Buffer overflows, poor pointer handling and the like. Ada can only address (6). Valgrind etc can help to find (6) too when these turn up in C. The other 5 categories are purely human issues and cannot be fixed by changing to a different language. Unfortunately the examples of success/failure we look to are purely anecdotal. When a problem was fixed (or introduced) it is often easy to praise or blame the tools when the real cause was the people that were involved.

    • I agree that much of software is a mess. Well we can make C a process too. Add Coverity, add KLEE, add Test Driven Development, add valgrind, add all the other checking tools and it becomes harder to make the errors that people blame C for. Train all those C programmers to Ada-level ability and cull those that can't make it. Do all those things and a C-based industry can be almost as good as an Ada-based one. Magic-bullet thinking gives Ada the appearance of super-human powers. Ada can't fix buffer overflows (it can detect them and throw an exception which does not fix the problem). Nor can Ada fix algorithmic problems.

    • I do indeed believe that Ada can be used to great effect, but I am still compelled to call you out on something. "There is a small community of Ada users who routinely crank out code an order of magnitude less buggy" Do not for a moment think that Ada is *the* "magic sauce" in this. The people using Ada tend to be excellent software developers working in environments that are heavy on design and testing and formalised methods. They'd still produce much less buggy software even if they were using C or assembler. This industry has been prone to magic bullet thinking pretty much forever. Use tool X and all your problems will go away. The problem is primarily in the people (both the programmers and the management wanting to do everything too fast/cheap) who stay the same. Give your average programmer Ada and you will still achieve average results. Woodwork analogy: a good craftsman will be able to do almost as good a job with a $5 chisel as with a $200 chisel. Put a $200 chisel in the hands of a DIY-numptie such as myself and you won't achieve anything better than with a $5 chisel. Ada has some interesting features and applicability but it is certainly not for everything. It is telling that many Ada runtimes, such as Ravenscar, are written in C - presumably because Ada lacks the flexibility to do the things that a runtime needs to do. Size is another issue. Ada is just too fat to run on many of the micros of interest to so many embedded systems.

    • I'm not too sure what would be added here off the top of my head, but things that have bitten projects I have worked on in the past include constructors and destructors, and copy constructors. Many of these only bite when a programmer tries to use operator overloading or other features. In the most extreme case I have seen, the simple and innocuous-looking: a = b + c resulted in 28 constructor calls and a whole lot of fiddling. That caused a very hard to spot performance bottleneck. Of course none of that would have happened if operator overloading had not been used.

    • Even if automatic code generation from a specification is ever possible, you will still have to use some formalised language to write the specification. Isn't that just another programming language? That FORTH-style development methodology lives on. Many python programmers work that way.

    • Polymorphism allows pluggable functionality and implementations. That is a fundamental need when you write something like an OS where you can plug in different device drivers, schedulers, policies etc... Have a look at the Linux code (or FreeBSD, or eCOS, or even WinCE) examples. As for attending to all the details, I must say I find it way easier to attend to the required details with C than with C++. C++ does things automatically - both the things you want it to do and the things you don't want it to do. I find it way easier to write a few extra lines of C code than it is to find all the C++ things that you often need to disable so as to not get things you do not want. Or to put it another way, C only does what you tell it. C++ does a whole lot more. That often results in C++ doing unintended stuff behind your back - a situation that is not easy to debug and is often unacceptable in embedded systems.
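      To make that concrete, here is a minimal sketch of pluggable functionality in plain C using a table of function pointers, loosely in the spirit of Linux-style ops structs (the uart_ops/uart_dev names are made up for illustration):

          struct uart_ops {
              int (*init)(void *hw);
              int (*putc)(void *hw, char c);
          };

          struct uart_dev {
              const struct uart_ops *ops;  /* the "virtual functions"       */
              void *hw;                    /* implementation-specific state */
          };

          /* Client code only ever talks to the interface, so any UART
           * implementation with a matching ops table can be plugged in. */
          void uart_puts(struct uart_dev *dev, const char *s)
          {
              while (*s) {
                  dev->ops->putc(dev->hw, *s);
                  s++;
              }
          }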

    • Epic and Microsoft just no longer go together... Microsoft and $#!t certainly do.

    • I should have been clearer. The rule they were given refers to incrementing integers etc. I don't think they mean for it to be used with pointers.

    • My son's university thankfully does things differently. They disallow the use of a++. Students must instead write a+=1 or a=a+1. Assignments within an expression are severely discouraged.

    • Learning not to abuse C is part of the skill that is developed with time. [Analogy: an unskilled driver can easily crash a car. Is that really the car's fault?] There are numerous "safe" programming languages out there for those that have not developed the skills to use C. Python for example.

    • Yes I know that it was used in the original K&R book as a way to show how the operators could be used. I still have that book floating around here somewhere. That does not make it right though. K&R C is rubbish. People should only use ANSI C and pretty much everything associated with K&R C should be ignored - particularly the "best practices". These days compilers will generate the same tight code without the "clever" code. Far too many people give Kernighan and Ritchie too much credit for C. They invented that K&R monstrosity which is pretty close to unusable. It was the ANSI-C people that actually hammered C into a useful language. Neither of the original gents participated.

    • a = ++b + ++c; is certainly well defined in any workplace I have any influence in: Write crap like that and you are fired! That's what it means. Same for rubbish like the "clever" string copy while((*a++ = *b++)); Write while ((*a = *b)) { a++; b++; } instead.

    • I always thought a[i++] = i was well defined because the i++ means increment after use (ie. after the dereference). Since the RHS of an expression needs to be evaluated before the assignment can be made, this should be safe. On the other hand, a[i] = i++ or a[++i] = i might be unsafe. I don't know and after almost 30 years of C programming I have little intention of finding out. What does a[i] = i++; really mean? It means the programmer should rewrite that code: i++; a[i] = i; which is far clearer. Any time you need to slow down to think about it, or need to reach for the specs, the code should be rewritten. If people avoid the sequence point problems then it really does not matter if they pass sequence point tests. What matters is that they know that there be dragons there and stay the hell away. I would be much more concerned about a smug ego-driven programmer that 99% - or even 100% - understands these issues leaving an incomprehensible mess for others to work on. Another class of sequencing problem missed here is the set of issues that can be caused by pointer aliasing.
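      To put that rewrite advice in code (illustrative only - the point is to state the intended order explicitly, whichever order that actually is):

          void demo(void)
          {
              int a[10];
              int i = 0;

              /* Risky: a read and a write of i in one expression with no
               * sequence point in between, e.g.
               *     a[i++] = i;
               *     a[i]   = i++;
               */

              /* Clear: make the intended order explicit. */
              a[i] = i;
              i++;
          }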

    • The data sheet is still very important. That has to serve when you want to do something that the supplied code does not. But data sheets need to be accurate. I have recently been trying to get a device booting from NAND. The documentation for the internal boot ROM (hard coded into the SoC) is wrong. I ended up reverse engineering some of the terribly written software that is supplied with the part. That unfortunately only partially works. I'm currently waiting for the engineer that wrote the internal boot ROM to come back from vacation to say what is really happening in the device. Keeping the datasheet docs up to date would have made this all a lot better....

    • I must be missing something. Aren't they doing this? Most vendors provide eval boards, reference designs, BSPs, compilers, drivers, ports to Linux/WinCE, parts libraries for schematic capture, SPICE models,... If you're designing around a SoC then it is quite often easy to take the reference board schematics (provided), strip off bits you don't want and add stuff you do want. Take the reference software (eg. Linux BSP) and tweak it. Many prototypes these days are just reference boards with custom circuitry on a daughter board. If you're designing with a switch mode chip, then most vendors provide calculators that pretty much design and tune the circuit for you. AVR data sheets include code snippets in both assembler and C. Granted, vendor supplied code is often terrible. That's often where open source code helps out. Like the story of stone soup, the vendor's crappy code starts the pot boiling and the community will often then make it way better. As for PDFs, well they're still a great way to provide the documentation that is required to underpin everything. A bit like music score, which has survived about 1000 years: is there anything better for the job?

    • Graphical environments are also far too hard to understand and program with. For my sins, I help teach robotics to kids using Lego Mindstorms and the Lego graphical programming environment. It is just horrendous when compared to textual coding. Anything but the most trivial program ends up being pages and pages of pictures which are difficult to follow. Picture programming might make it easy to get from zero to "doing something", but the language is not the barrier to writing good software. I like to use the analogy of playing music. The musically illiterate (like myself) are stumped by music score. If I wanted to learn to play a really basic tune, then one of those kiddy keyboards that use colours would help. But to get to the next level (ie. something worth listening to), you need to learn to read music score.

    • I think there is at least one thing we can all agree on.... We engineers could all do a better job of making laws than a lawyer could of making software!

    • It is certainly true that the ARM offerings are getting cheaper and richer. These have certainly killed off 16-bitters. There are, however, two factors that keep 8-bitters alive: 1) The same forces (Moore's law etc) that give us sub-dollar ARMs drive down the 8-bitters too. There are 8-bitters in the 20c region. If ARMs get sub-50c, 8-bitters will become sub-10c. 2) Many, many 8-bitters can run on wide voltage rails (eg AVRs run on 1.8 to 6V). That means you can easily build battery powered circuits that don't need any regulators. I have yet to see any ARMs or similar that can do the same.

    • There is no substitute for the software we have. Without software many of the things that we take for granted would not be possible. So, to extend your analogy: People would eat a few bad chocolate bars now and then if the alternative was eating grass and sticks.

    • I too have found I write code like this more and more often. I recently wrote some data queuing software that used different protocols and different data streams. By using function pointers I could just plug in new transports and new protocols. Really easy to expand and modify once set up.

    • Sure software does crash etc, but on the whole it is darn good. Sure, my ipod touch crashes often but it still performs a great function most of the time. Even in the really bad cases of software failure, Therac for example, the software probably does a far better job than what it replaces.

    • For some excellent examples of how this works and can be used effectively, have a look at the Linux kernel - particularly the vfs and driver handling. In C, the functions can be changed on the fly if desired. This is quite useful for doing things like writing state machines. Instead of having a state variable and a big switch statement to perform different actions, you can have multiple state_processor functions and just set the current state_processor as the state machine transitions.
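      A rough sketch of what that can look like (names invented for illustration):

          struct machine;
          typedef void (*state_fn)(struct machine *m);

          struct machine {
              state_fn state;   /* current state processor */
          };

          static void state_running(struct machine *m);

          static void state_idle(struct machine *m)
          {
              /* ... on a "start" event ... */
              m->state = state_running;   /* transition = reassign the pointer */
          }

          static void state_running(struct machine *m)
          {
              /* ... on a "stop" event ... */
              m->state = state_idle;
          }

          /* The main loop just calls whatever the current state is. */
          static void machine_step(struct machine *m)
          {
              m->state(m);
          }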

    • Life is too damn short to work without source code. Having spent the last 15 or so years mainly working with open source, and occasionally working with closed source, there is absolutely no comparison. Find a bug in a Linux driver (or other open source code) and there are patches and bug fixes available and people to discuss with. Find a bug in Windows CE and you're screwed. Send Microsoft a fix or problem report and you're really lucky if you see that fixed within 6 months. On top of that, open source projects tend to come with standardized ways of managing source code, applying and sharing patches etc. This tends to make it much easier to share code and fixes. That 25% number from NASA made me roll my eyes. Changing 25% of a brought-in package is ridiculous. That is a rewrite - not a set of patches. If you end up changing a lot of code then do two things: 1) Break up the changes into multiple independent patch sets. 2) Push the patch sets back to the project owners to incorporate. That means you won't have to redo all that work when you pick up the next version.

    • That is exactly what many ECUs do. For example car engines these days will typically have a Mass Airflow Sensor (MAS) to measure airflow. They will also model the airflow based on other parameters. If the MAS reading deviates too far from the modeled value then the MAS reading is distrusted and not used by the ECU and the modeled value is used instead.... and the check engine light is turned on. A software shutdown is something you typically want to avoid. What you want is that the system is capable of continuing degraded operation even if part of it fails.

    • I think the point Wilco might be trying to make is that C++ provides only one mechanism for achieving two things: polymorphism and providing an abstract interface. This is what often forces multiple inheritance even when it feels unnatural. Java, and some other languages, make a distinction between inheritance and interfaces.

    • Dan Consider what happens when you do not have abstract base classes: 1) Consider a board that has two different types of UART (eg. the two UARTS on a micro and some expansion UARTS on the memory bus or SPI or whatever). If you don't have an abstract base class "UART" then you can't share the common code that calls the UART. You need the abstract base class to give you the abstract interface to "any kind of UART". 2) As soon as you directly access a class with all its guts defined (ie. a non-abstract base class), you have to recompile the client class every time you make a change. This breaks the whole idea of modularity. It makes it hell to integrate 3rd party libraries and achieve code reuse. I fully acknowledge that the non-abstract examples will work, and even work well in small systems, but they are not instructive or good advice for the role OO, and C++, are supposed to play: well abstracted systems with large code bodies.

    • I think there are some errors in the table. SRAM is surely capable of being unified. Flash endurance will depend on flash type and can vary from sub-10k to greater than 100k. FRAM certainly has its niche, but is very expensive and low density.

    • Aren't generic units very similar to C++ templates? While these reuse the code text, they do not reuse the binary code. Instead new binary code is generated for each specific "class". If so, then it is really just swapping a runtime penalty for a code-size penalty.

    • If you want to make the site more useful, fix the comments section. The current comments can't even accept code snippets. Pretty useless when so much of the discussion is about code.

    • This is exactly the point I was raising with the proper use of abstraction in previous articles. A C++ interface to a timer or UART or whatever should always be through an abstract base class. That allows you to reuse the client code with different UARTs etc. However, abstract base classes add vtables etc and thus break the ability to use memory mapped objects for UARTs. It is important to understand all these hidden things that C++ does, but do not ever depend on fixed vtable mappings. These can change from compiler to compiler.

    • It's a free market. Vote with your feet! This is really no different to any other product not living up to expectations. No vendor is legally obliged to provide support forever and support costs a lot. However the flip side to that is that if the instrument was sold as having feature X and did not, then in most countries you could likely go after the tool company for misrepresenting the capabilities of the device. I suspect though that feature X works, but not well, due to the bug. No different to my ipod which does play music, but crashes and gets into a state where functions do not work properly. It is exactly this sort of issue with a printer that spurred Richard Stallman to start GNU. If the scope had open source software, you could fix it yourself.

    • My son is studying Comp Sc and is taking a course in embedded systems programming. They deliberately choose a very simple CPU architecture (AVR) to make it easier to understand the CPU from the ground up. He is also studying the Linux internals in another course. Anyone who has worked in the Linux kernel (as opposed to just writing code that runs on Linux) should have an excellent grounding in embedded systems. The kernel includes stuff on drivers, multi-threading and related work.

    • You do not need to be a Linux guru to build free tools from scratch. Can you click a link? http://www.yagarto.de/ You can even use Windows or Mac. Using Linux, an RTOS or bare metal will depend very much on the type of functionality you need. If you get jingoistic about this, you very much limit what types of solution you can deliver. If your application just monitors some serial ports and twiddles bits etc, then bare metal is a good way to go. If your application has complex networking or file system requirements then Linux is probably a good idea. I would shy away from writing much code in assembler except for very extreme applications. If you write code in portable C, that gives you the option to move your codebase from one micro to another without having to rewrite.

    • Have you got any statistics to back up your claims that old 8-bitters have worse code density than 32-bitters? I've never shifted a design from 8 to 32 bits and seen the footprint go down. I am not saying it is impossible, but I would like to see some supporting stats.

    • Ok Jack... please never call me Chuck again! While ARM's slice out of a Cortex-A or so "top end" part might be tens of cents, that sure is not the case on the Cortex-M parts. Sub-dollar M3 parts have been available for a while and M0 parts are targeting lower pricing than that. Sub-50c ARM parts must surely show up in the next 5 years. You can be certain ARM is only getting single-digit pennies out of that. In the future ARM will have to adjust their royalties to fit market trends. If you are worried about tool costs then try the free gcc-based ARM tools. I've been using them for years and they work great. All you need to spend is less than $50 for a JTAG interface and you're set. Even though ARM bought Keil to keep them alive and provide a premium development product, ARM actively contribute to the development of open source tools too (gcc etc). The gcc toolchain for ARM has improved to the extent that the difference in code quality between Keil and gcc is inconsequential in most applications. I rarely use an IDE, but if you must have one, Eclipse seems popular. YAGARTO makes it easy even for Windows people.

    • I will have to investigate this KESO thing a bit. It looks very interesting. Java has been around small micros for a long time now. One of the more open projects is Lejos, which I helped to port to the Lego NXT robotics system (ARM). The NXT has 64k of RAM and 256K of flash, but the whole Lejos environment can comfortably run in less than 64k total. The previous version of Lejos ran on the older Lego Mindstorms which only has an H8 micro with less than 32k of RAM and no user flash. Sure Java might not make best use of a CPU, and has some deficiencies, but it has some huge advantages for some purposes too. Most importantly, the JVM provides a way to make it easy to partition a system and provide an environment for rapid development of customisation. If customised code crashes, it does not bring down the whole system - it just crashes that JVM thread. The rest of the system can continue to function.

    • Many source code management systems provide hooks for running tests during check in. These can make it trivial to perform various compliance/style checks before allowing a check in to commit. That is a good way to keep code standardisation from suffering "bit rot". As for all the standards out there... many are just arguing over syntactic sugar: Is blue better than red? Are square headlights better than round? Is the K&R coding style better than Linux? Even MISRA has its pros and cons, with enough cons to make it unusable in its entirety on most reasonable size projects. Except for very few exceptions, I don't buy that "you get what you pay for" talk. I use Linux as a primary development environment. Compilers, IDEs, various source code management, splint and more... all free and, IMHO, mostly superior to the commercial offerings. The quality of open source tools is so high that most vendors seem to have ditched anything else. Get a Wind River Eval DVD and it is just a Linux distro patched with the VxWorks toolchains - all based on open source underpinnings such as Eclipse IDE and gcc.

    • C is not going to be dead for multicore because there is so much C code that people want to run. Anyone remember the Transputers? While not multicore, they had a silicon scheduler which meant that they could be thought of in similar ways. The native programming language was occam. http://en.wikipedia.org/wiki/Occam_%28programming_language%29 which has CSP constructs meaning the threading control is built right into the language. While many transputer programs were written in occam, many were also written in C with scheduling wrappers written in occam. occam was great for inherently parallel programming but most programming is easier to manage as independent but parallel sequences. For that C does fine. Ultimately modular programming means that, except in extreme cases, you want to write loosely coupled sequences and finer grained parallelism is seldom required.

    • "sensitive analog designs" and all digital signals are just a sum of analog signals. We often forget that digital signals are also prone to all these problems too. Yes, probing a good circuit can make it go bad, but far more frustrating is probing a bad circuit and making it good. If you could just ship with the probes attached...

    • Stuxnet, and similar, attack the toolchains in the developer workstation. When you start to consider the resources of the rumoured attackers (USA/Israel) and toolchain vectors then you are quickly on a slippery slope to paranoia. What if the compiler has been infected to generate bad code for certain code sequences? What if the OS has been deliberately compromised by the vendor? It is pretty safe to say that the only way to prevent something like Stuxnet is to shy away from Windows and other closed source development platforms.

    • Being able to iteratively build software will depend very much on the type of project being developed and how well you can partition the software. Sometimes those big "from the ground up" developments are very hard to break into iterative developments. Sometimes - particularly when you are building a framework-based system - you pretty much have nothing you can demonstrate until right at the end, when everything pulls together in the integration step. When that happens, it is really helpful to use TDD so that the integration does not become the first time the code gets a good testing.

    • 8-bitters will keep their niche for a long time yet. They remain cheaper. They use less memory. Many 8-bitters are available to run on wide voltage ranges which means there is no need for a regulator in many circuits. But the role of 8-bitters is certainly being eroded. No longer would you choose to use an 8-bitter except for the most simple tasks. Those applications continue to grow. I prefer ARM-based systems for most work, but if I just need to build something to toggle switches or read some ADCs and send to a serial port I'll grab an AVR. I believe 8-bitters do have a useful place in education, but only in a limited way. They are so simple that they give a really simple introduction to programming micros, interrupt handling and similar. But I agree that once you get beyond a simple bump-and-turn robot it makes sense to go with 32-bitters.

    • Stuxnet has been used as FUD by tool vendors. The only honest words I have heard on this were from a Wind River representative on an embedded systems ecast. When asked if VxWorks could have prevented a Stuxnet-like attack he said no. And nor could any other RTOS.

    • Keep the sextant Jack. Not only for its potential as a backup but for the sentimental reasons. When technologies become ubiquitous, and their reliability hits a certain threshold, we become over dependent on them. If the technology fails we find ourselves in the soup. Take for example electricity. In cities where power outages are very rare, few people have any back-up plans and when the electricity fails they have no lighting, heating or cooking. I live in a rural area where we tend to lose power with every big snow dump or high winds. When we lose power we have the propane cooker and kerosene lamps running within 5 minutes. A note on GPS accuracy/precision: while most cheap units are only good for a few metres, RTK (real time kinematic) processing can give sub-inch results. Quite amazing when you consider this is all done by listening to some flying garbage cans 26,000 km away in space!

    • I've been in embedded for about 30 years now. During that time we've seen scores of silver bullet programming technologies that over promise and under deliver. Our industry is pretty much like the diet industry. Buy this protein shake or ThighMaster and you'll be in better shape in a couple of weeks. For us, it is the promise of magic tools. State charts are no magic bullet. They seem to have a lot of downsides too:
      1) How do you diff them? Different versions of source code can easily be diffed to help find changes. Is that possible with state charts? Without that it is very hard to verify the impact of changes.
      2) They get big and cumbersome.
      3) Ultimately they are just another language. If you draw the wrong picture you will get the wrong result... just like regular programming.
      4) They cannot capture all the details required to make a program. Other tools and methodologies are still required to capture other behaviour.
      5) The idea that drawing pictures is easier than writing code misses the point. The challenge of good design is not in learning to write and read code. As an analogy, consider music. It would be a lot easier to make a start in music if you didn't have to learn music score. That's why those toy pianos etc have play by colours. But if you want to be a great musician, music score is not the barrier - it is the thousands of hours of practice as well as the innate ability.

    • Nope. It is not about the hardware or the software. It is about providing the utility to the user. When the system comprises a 50 cent 8-bitter then it is mostly about hardware and the software just joins the dots. That is a hardware dominated system. When the system comprises a lot more than that, the software provides the vast majority of the utility and the hardware is just life support for the software. These days, more and more of that utility is provided by software means rather than hardware means. That makes it more important that embedded developers have a stronger software skill set and the hardware skill set is getting less important. Your engine control example is vastly broken because modern engine control systems are not just a few kbytes of code. Some comprise hundreds of thousands of lines of code. A friend of mine is involved in ECU algorithm design and I can assure you it is all about software. He hires out his services to tune the software and can improve fuel consumption by 5% or so, or boost horse power. That is all done with software without touching a screwdriver, or anything. I do agree with your SLAM example though. SLAM code can be quite small (there is even SLAM code written in Java to run on Lejos). What limits the size of the map is the amount of RAM or storage. If that storage is a file system, well you then need a flash file system (or similar). As an author of a flash file system I can assure you that is 100% software. While I agree that 32 bits does not imply the need for megabytes (as you say, 32-bitters provide huge performance benefits), the opposite does not apply. If you manage megabytes of data then you don't want to do that with an 8-bitter. Yes, we did that in the 1980s with banked memory etc but there is no need to do that any more.

    • Your statements would seem to indicate that you think that software errors are there because of negligent management. I doubt that is the case. There are very few people involved in critical systems that don't care about what they are producing. Most are trying to do everything they can to make the best possible products. If you want to put this requirement on anyone, put it on the CEOs and board members. They often pressurize the project managers to develop under unrealistic timescales etc.

    • I'm sorry, I just don't buy the implied relationship that C should be for 8/16-bitters and C++ for 32/64-bitters. Very few common 8-bitters run real C well. PIC certainly not. AVR only marginally better. As for those C++ benefits: classes, namespaces, virtual functions and templates. Everything classes and virtual functions give you can be easily implemented in C using function tables etc. That's how Linux etc drivers are constructed. C works fine. Namespaces... With well constructed modular code namespaces are not an issue and C copes fine. Again, refer to the Linux kernel. Templates... That's really the only tempting feature I see in C++ and that has largely been supplanted by code generation. Dan, if you're going to suggest people use C++ then please also warn of the downsides too. C++ is an extremely complex language and can do a lot of really horrible things behind your back. Most C++ texts do a terrible job of showing how to do encapsulation and data hiding. Putting your privates in a class header file for the rest of the world to see is NOT data hiding! Sure, C++ does a few nice things for you that you'd have to do manually in C. But C is more controllable and I'm prepared to write foo(this_ptr, x) rather than foo(x) to maintain that control.

    • Jack, surely CMMI is not really a development approach. It is rather an approach for developing process. It does not really say how to develop, but rather how to improve the processes themselves. Latent bugs in embedded systems will often be inconsequential to system performance (though there are clearly some that do have consequence). Does it really matter if an embedded system watchdogs so long as normal operation is restored within an acceptable time? "going just from CMMI1 to CMMI2 saves about $800k in support for a 1 MLOC project". Sure, but it probably blows out development costs by an equal amount, and delaying shipping might result in a few million dollars of lost revenue. Often a slightly flakey product is still of net value to the customer. I worked for a while on embedded systems for agriculture. The products have to be ready for the planting season (or whatever). If you miss the production target by a month then you might as well miss by a year. Making the quality vs shipping call can be pretty tough. Even the classic bugs are not so easy to call. Patriot Missile bug: a known bug that could be worked around by periodically resetting the system. On an occasion that the system was not reset, a Scud got through. Still a net win: the Patriots still nailed a lot of Scuds. Therac: sure, 6 people got nuked of which 3 died. But the product still treated thousands of people during that period, saving thousands of lives. If Therac had been delayed by 6 months for software quality issues, 3 lives might have been saved but thousands lost. It certainly isn't black and white.

    • You do not have to deliver quality, just value. And value is determined by the customer. I recently bought my first iOS based product; an ipod touch. I was shocked at the number of bugs (crashes, prompts that don't show, inability to delete some podcasts...). But hey, it's shiny so people will still see the value and will continue to buy.

    • This is correct. There is a cost and a value associated with safety. When the perceived value is greater than the perceived cost, we want safety. I say perceived, because we are irrational animals. That is one reason why the Second World War was so good for aviation advancement. For example, an edge in combat was valued far more highly than aircraft safety. The RAF Hurricane was a complete death trap with a fuel tank just behind the dash. One bullet and the pilot got 28 gallons or so of burning gas dumped in his lap. Yet the performance of the Hurricane was such that it was still a safer bet than flying other planes.

    • I am not at all saying we don't need any safety systems or procedures. There are clear examples of safety systems helping. What I am saying is that safety systems, and even improvements in general, do not just act incrementally. There is a tendency for a negative consequence due to the people involved becoming lax. If people drove exactly the same with or without the safety gear then the safety gear would improve things. However that is not the way we humans operate. Once something does the work for us to a certain level, we abdicate responsibility. Example: Here in NZ, and probably in the rest of the world, there is a huge variance in electricity supply in different parts of the country. Cities tend to get far more reliable supply than rural folk. I live rurally and my family is well geared to power outages. We have lights handy. We have a gas cooker handy. If we lose power we can restore normal living conditions in 5 minutes. Not so the city folk who are not prepared. The same goes for the Air France flight that crashed into the Atlantic. The pilots had got so conditioned into believing that the safety systems would prevent a stall that they stalled the plane and literally flew it into the sea while ignoring the stall warning alarms.

    • Safety certainly is hard and is probably unattainable. One problem with each layer of safety systems is that it makes the user feel safer and thus less attentive. People switch off their normal vigilance and expect the system to do things for them. When I was a kid in junior school (9 or 10) we had woodwork classes where we used a table saw under very loose adult supervision. We all understood perfectly that the thing was to be treated carefully and nobody ever got hurt. Such equipment would be banned from schools these days and workplaces would fit all sorts of safety kit. Does that really make them safer or does it just lull people into a false sense of security?

    • K&R C is not a high level language. ANSI-C is. K&R does not have any prototypes, type checking or anything else that helps build multi-module systems. K&R was rubbish for building embedded systems, or any systems for that matter. ANSI C variants are useful though.

    • Sorry but there is no way you can put Java, C++, Ada and python on an equal footing. Each of these has differences which make them interesting. Python is unlike any of the others in that it is fully interactive. Python does not have static typing etc. Sure many of the ideas are portable from one language to another, especially the algorithmic stuff. But then you can theoretically teach algorithms in assembler if you so choose. Then of course there are many other languages formulated since the 1980s. occam, Limbo and Go - all post 1980 languages - natively support parallel programming. As an extreme example, look at the BF language - created in 1993! But that is an esoteric language not intended for any real work.

    • It really does not matter what C was originally intended for. We no longer use the original K&R C. As for Ada handling hardware. Perhaps you are correct, but most of the Ada systems I've looked at use runtimes written in C.

    • Note how that interest did not come from the education system. These days we abdicate all responsibility for the development of young people onto the school system. We expect the school system to teach the kids to be creative. So how do they do that? They squeeze out natural curiosity and replace it with a fake, standardised, "No child left behind" creativity. Kids are naturally curious about how things work and naturally inclined to learn - the most important part of being good at STEM. Yet schools suppress that natural curiosity and spoon feed the kids a fake curiosity.

    • Although I think it great to provide school kids STEM opportunities, I doubt very much that these are effective at getting the right sort of kid into science/engineering. Surely anyone that wants to get into engineering can do so without help from schools. If you are interested in, say, programming then it has never been easier. Google "learn how to write programs" or go to the library. We managed to do that in the 70s and 80s with limited resources (no Google and only very few town libraries had programming books). If you need to be spoon-fed material by a school or university then you will be a hopeless engineer. Universities only provide a framework - the real education comes from what you teach yourself. Engineering requires a commitment to a lifetime of reeducation.

    • Of course we are creatures of habit. Even in our rapidly changing industry we need to organise and proceduralise development and knowledge. No wonder then that our methods get out of date pretty fast. We do respond though. The idea that the industry, and change, belongs to the youngsters is a myth. Sure, the youngsters might embrace change for change's sake, but us old timers handle change too. A few years back I was involved in the development of the Lejos Java system for the Lego NXT robots. There were 4 or 5 people involved from across the world. I was the youngest at 46 and the oldest was in his 70s. What the youngsters see as new and shiny, us old-timers often see through our "been there, done that" filters. Bell-bottom pants might be the latest, but we've seen them before in the 60s and 80s.

    • I tend to agree. There are a lot of things that you CAN do in C that you certainly should NOT do! For example what does the following mean? x = 4["abcdef"]; According to C, that means the same as: x = "abcdef"[4]; or x = 'e'; But if anyone ever wrote code like that they should be taken around the corner and shot.

    • I've been hearing the 32-bitters are taking over story for over 15 years now, but the number of 8-bitters continues to climb. Sure, not all the large SOC processes are immediately applicable to 8-bitters, but they do have an effect. As 32-bitters migrate to newer production equipment, the older machinery is obsoleted and is close to "free" for anyone that can use them. Manufacturing of non-bleeding edge 8-bitters is a way to use some of this. That reduces the expense of setting up a factory and reduces the amortised cost of production. Distribution is not an issue. Why should it be harder to distribute micros than, say, resistors which cost less than a cent? Further, the licensing costs and development costs for 8-bitters are a lot less than for 32-bitters. 8-bitters have far less complexity and can surely be tested far faster (and thus cheaper) than 32-bitters. If 32-bitters can be sold for 20c, you can bet 8-bitters can sell for less than 10c. Many/most 8-bitters seem to come in wide voltage ranges (eg. some AVRs will happily run on 1.8-5.5 V or so). That means there is no need for a voltage regulator on many battery devices. I don't know of any 32-bitters that can do that. Adding a low power voltage regulator circuit increases both the standby power consumption and the build cost for those 32-bitters. 16-bitters are pretty much squeezed out of existence, but 8-bitters have a niche that they can hold onto for a long time yet. Sure we will pretty much see the demise of high-end 8-bitters (eg those with 32k or more RAM), but I doubt we'll ever see the bottom end go away completely. I well remember the days of building 8051 circuits with banked memory to give address spaces of 512k. Those days are over.

    • Liberal Arts Bah! I have no objection to liberal arts per se. I enjoy reading history books etc.... but forcing science/tech students to take social sciences and humanities classes at university is a waste of resources. It seems to me that the only thing you can really do with a humanities degree is teach humanities. Therefore the universities (typically biased towards humanities) force other students into humanities classes to keep the humanities classes full and the staff employed. It makes as much sense as forcing the students to litter to keep the janitorial staff busy. I've been in embedded systems of all types for nearly 30 years. I've very seldom used any university level math beyond a tiny amount of calculus and some discrete algebra. I think the role that math plays is that historically new students had seldom encountered computing and the only way there was to assess them in any way was to judge their math performance.

    • Dave... Interesting discussion. I am sure we are talking about different levels. I have tended to be a self-taught rather than formally taught person. I taught myself about electronics long before university and was able to tutor EE dorm-mates on things like how transistors work. My first exposure to programming was with an HP-29C calculator: 100 instructions + 16 registers. Basically a PIC with a lobotomy! That is what really taught me to work with tight CPUs. For really, really small systems (50-cent micros and such), you're not really doing any software engineering. You're really joining hardware dots with software. However when things get a bit more complex (ie. more layers of software abstraction), the need for well honed software skills starts to come to the fore. I'll tell you about the first occasion I realised the need for good software engineering in embedded space. I was working on an access control system written by an EE. The main function of the system was to read an id card, look up the user record and allow/deny access. This needed to be done within a millisecond or so or the system would crash. The EE used a linked list which worked fine with only 5 users, but the search ran too long with more than 50 or so users and we needed to be able to handle thousands of users. Worse still, all the code was written with very little abstraction - all the code directly accessed these data structures. It was easy for me to figure out a far better data structure using trees and a hash table that was fast enough out to 10k or more users. However it took a long time to refactor the program and insert layers of abstraction to make it easier to plug and play different algorithms - along the lines sketched below.
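      Something like this (a sketch with made-up names, not the actual code) is what "plug and play different algorithms" means in practice:

          struct user_record;   /* whatever the record holds */

          /* One lookup entry point; the implementation behind it can be a
           * linked list today and a hash table or tree tomorrow, without
           * touching any of the callers. */
          struct user_db {
              void *impl;
              struct user_record *(*find)(void *impl, unsigned int card_id);
          };

          static struct user_record *user_db_find(struct user_db *db,
                                                  unsigned int card_id)
          {
              return db->find(db->impl, card_id);
          }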

    • The same power saving and scaling that can make those 32-bitters cheap enough for web enabled greeting cards will also make 4 and 8-bitters cheaper and lower power and enable them in even more products. Imagine what you could do with an 8-bitter that costs 2c and runs for 10 years on a small coin cell.

    • There is no need for every part of the system to be hard real-time. A hard real-time system means that part of the system has hard real-time features, not that all system sub-components are hard real-time. For example, any of those examples you mention will have control loops (hard real-time) and control UIs (soft real-time). So long as the system design can achieve that then you are OK. Of course you would be nuts to write the hard real-time parts in Java. You could instead run those as stand-alone C code and just use the Java for the soft real-time code. That does not get you away from any kernel issues though.

    • I think I disagree with this. I came from completely the other side: I studied Computer Science (COBOL etc). I had taught myself about electronics from an early age though, starting with crystal sets at age 7 or so, and I could read schematics. It might be easier for a person with a HW background to get started, but I think a HW background does not give enough SW development maturity to enable development of modern embedded systems. I took over quite a few embedded systems that had been started by HW engineers who had failed to architect the software correctly and did not understand how algorithms would degrade under load - let alone Big-O notation.

    • I absolutely agree with this. Each different programming language provides a different way of looking at problem solving. Some are only slightly different (eg. the algol family languages) and some are pretty radically different (eg. Prolog). My absolute favourite course at university was a 4 month tour through 5 or so different programming languages: Lambda calculus, SNOBOL, LISP, Prolog, RTL/2 and perhaps others. We had already used FORTRAN and Pascal. That was back in 1983 and of those only LISP is still used. These days I would select Prolog, LISP, BrainF*ck, Erlang and something like awk for the esoteric languages and at least Java and python for the "regular" languages. For a great read: http://pragprog.com/book/btlang/seven-languages-in-seven-weeks

    • This architecture is gaining traction in many areas. For instance the new OMAP5 SOCs have dual core application processors, typically running Linux, plus ARM Cortex-M4 core(s) to handle hard real-time IO, supervisory tasks, etc. Heck, even disk drives have multiple cores these days.

    • Of course Linux is overkill for some cases. You don't want to use Linux in that electric toothbrush or a mouse or some such. A control system inside a missile perhaps doesn't need inodes and virtual memory, though higher-level functions such as terrain following might. Other devices in the system (consoles,...) likely do need more sophisticated features. Linux has many features that can be configured and disabled if not needed. There is no need to use swap etc unless you need them.

    • Trying to make blanket statements about embedded is even more pointless than making them about wheeled transport. Which is better: a Porsche or a bike? It depends on what you are trying to do. Linux is certainly displacing RTOSs in many applications and it is getting harder and harder to find application areas that can't use Linux. Linux has certainly become the default choice rather than the whacky choice. Much/most of the code in most significant "hard realtime" embedded systems is not "hard realtime" and it is becoming increasingly desirable to partition the system in some way so that the "soft realtime" stuff can run under Linux and the small hard realtime portions can run an RTOS or bare metal. One thing is certain though: RTOS vendors have really needed to get far more focused and understand their market better. Too bad though that far too many RTOS vendors are resorting to FUD to sell their wares. If I hear one more vendor tell us about Stuxnet....

    • "Trend 1: Volumes finally shift to 32-bit CPUs" This is a wave you can ride both ways. Sure, 32-bit micros are getting cheaper and are now used where, in the past, you would have used an 8-bitter. But 8-bitters are getting cheaper too and you now find them where there were previously *no* micros: razors, blenders, lighting, sensors in tyres,... And we're not just talking about the cost of the micro. Most, if not all, 32-bitters require relatively clean power with tight voltage rails. Many, perhaps most, 8-bitters will run on a wide power rail (eg. some that will run on anything from 1V8 to 6V). These often don't need any regulator. 8-bitters also have a huge advantage for low power applications. The 32-bitters will certainly squeeze out 16-bitters and the top end of 8-bitters, but 8-bitters will dominate for a lot longer at the bottom end. As for the prediction of code generation.... That's something I've been hearing for the last 30 years and I still don't see it happening. Some code generation is great and applicable to some areas, but it is hard to see it ever replacing hand coding. You still need some sort of language or mechanism to express and capture the system design accurately enough to then generate the code. Pretty pictures only work to a certain extent. The language is not really the issue anyway. The biggest barrier to software design is the skill set. No matter how systems are defined, you still need the skills to define them properly.

    • This is exactly right. Many of these fancy new "safety" technologies are fine when they are used to augment normal human control and help fill in the gaps when humans make mistakes. Unfortunately that is not human nature. As soon as a technology hits a certain reliability threshold, people just use the technology and switch off their brains. As an example, take those reversing radar things that help identify obstructions before you back over them. That's a great technology to help stop you from backing over a kid or a bike. But if the technology gets too good, people won't check themselves. When a kid gets crunched, the lawyers won't come after the driver - they'll come after the engineers. The same happens with traction control on ice. As traction control gets better, people increasingly just put their foot down and expect the traction control to sort it all out. As for making it idiot proof.... no chance! There is a power struggle between technology and idiocy. As soon as you make the cars technically safer, the idiots upgrade their firmware to Stupid V2.5. If you really want to make the road safer, then make drivers focus on safety more. Take away the driver's seat belt, airbags etc and put a 6 inch steel spike right in the middle of the steering wheel. That should keep their minds on the job!

    • Online news is fine because you don't need to look back much. The biggest problem with anything online is that links go stale and articles get lost. I cannot find many articles/white papers etc that were online ten years ago because either web sites were cleaned up or companies went bust. My pile of photocopies from the 1980s is still there. The second biggest problem with a shift like this is editorial. As Jack G raised in his posting, in the days when paper column inches were scarce, the editor had a vicious pen and only the best would get through. With online media the whole purpose of publishing shifts and quality can often suffer. I really hope that does not happen here.

    • Softcores on FPGA cells also typically have worse power consumption numbers and lower clock rates than hard cores. Surely this is nothing new, except maybe that Xilinx is doing it. Various vendors have been doing hard cores for quite a while using everything from 8-bitters to PowerPCs. While ASICs might have better costs at very high volumes, they also have very high NRE costs and it is very hard to get the volumes and sizing right. This is particularly important in the current economic environment where nobody wants to commit to an order for hundreds of thousands of units and huge NRE. Even designing around SOCs is problematic. Will that fancy OMAP part still be available in 6 months or two years from now? FPGAs on the other hand are a very different matter. A single part can be used in so many different industries that it is easy to justify keeping making them. When parts do get discontinued, it is often easy to find a pin-compatible family replacement. While this might be more expensive than ASICs at high volume, it does give flexibility and some degree of certainty - both highly valuable at the moment.

    • There is often a lot of ego on the line. When you speak the truth, choose how you do it. Doing so too publicly (facebook, media or via an intra-company memo) will ruffle lots of feathers and is likely going to be ineffective. Do so discreetly and in a supportive way and you've got a far better chance of both being listened to and being seen as a helpful problem solver. Remember that engineers solve problems - they don't just identify them. If you embarrass people then you are forcing them to make it into an ego issue. That does not solve the problem. If you do this right, then you will both be successful and remain obscure. Mr. Boisjoly failed and became famous.

    • In the 1950s you could pretty much guarantee anyone getting into EE had built a crystal set or valve/tube radio. Radio receivers were about the only way you could scratch that itch on a hobby budget. Later, with cheaper access to power electronics, ham started making sense. Now there are hundreds of different things people can do for their kicks. Arduinos, 3D printers, robotics... Times change.

    • There isn't a problem with incrementing an integer. It is just a problem if you take some arbitrary integer and incrementing it pushes it out of bounds. As the author says, always do bounds checking on any values you get from anywhere. If sensors fail they will often give out of bounds values.
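      A trivial sketch of that kind of check (limits and names made up for illustration):

          #include <stdbool.h>

          #define TEMP_MIN_C  (-40)
          #define TEMP_MAX_C  (125)

          /* Validate anything that arrives from a sensor before using it. */
          static bool temp_reading_ok(int celsius)
          {
              return (celsius >= TEMP_MIN_C) && (celsius <= TEMP_MAX_C);
          }

          /* The caller can then fall back to a modelled value, or flag a
           * fault, instead of using an out-of-range reading blindly. */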

    • Here we go referring to Stuxnet again... None of the measures or examples has anything to do with Stuxnet or Stuxnet vectors. But it seems that both authors and vendors love to latch on to FUD to sell their products. Car analogy alert: It is like saying bald tyres are dangerous when trying to sell car alarms. We have a tough time making high quality software in the embedded business. It is really helpful when vendors show us how their products can help. But it is no help at all if they just confuse the picture with irrelevant FUD.

    • It is unfortunate that many "secure OS" vendors are using Stuxnet as FUD to promote their products. The truth is that Stuxnet would not have been thwarted by using any RTOS/OS on the PLCs that were attacked. If you actually read up on how Stuxnet works, you'll see that it attacked the host PCs that were used to write/develop the PLC code. Stuxnet modified the code that was sent to the PLCs. That's a bit like modifying a host compiler/linker to generate code loaded into a target. The only way to thwart something like Stuxnet is to completely secure the development toolchain. It would help to not run Windows development hosts, but OSX and Linux might not be immune either.

    • SCR latchup is indeed spectacular, but is it still real for modern devices? As one of my mentors told me in 1988 or so, digital is just clipped analogue. As we drive up speeds this is getting more true every day.

    • Some of these are rather contrived in my opinion. Using an ADC as a code generator is going to be a huge challenge. Even with an 8-bit ADC you're going to be really hard pressed to modulate a sequence of values that will be read reliably. If you are that close to the system that you can fiddle with the ADC then just hook up to the JTAG port and have all the control you want. Using alloca()/malloca()??? Really? I have been writing embedded code for 30 years and have never used malloca()/alloca(). I can't believe that anyone would - unless they are going out of their way to write a contrived example. How is clearing out resources going to help? That might make sense in a multi-user scenario but does not make sense in an embedded system. If an attacker is running code on most embedded systems then they have the machine. They don't need this info. Overwriting files on flash file systems does not necessarily zero out file data.

    • Sure if you don't design reliability in it won't appear by magic. But if you don't test then you can't verify that the reliability is actually there or that the design actually does what it needs to do. These days only the most trivial systems can be fully designed up front. Mostly, it takes a whole lot of testing to actually ensure that the design works. Just like you have to debug and refactor code, you have to debug and refactor designs to ensure they do what is required.

    • Some of this I agree with, some not. For example, I really don't think that heavy math is a requirement for computing. To extend your analogy, a doctor needs to understand physiology but does not need to understand the underlying biochemistry. Same deal for queuing theory. An effective software designer needs to understand the characteristics of queuing and does not have to be able to link that to a mathematical proof. Problem solving is the number 1 skill. Specific knowledge of a language is not important. That can be learned. Specific knowledge has a short shelf life in this industry. I think it is important to first find out whether students have the "programming gene". Python is a pretty good environment for that. However they should also be taught assembler etc (to show the deep workings) as well as Java (and Prolog, Lisp and some others) to show them different approaches to problem solving. The goal is not to teach the language, but rather to expose them to the thinking. One of the best embedded programmers I have ever employed came to the company with no embedded skills. He had written Visual Basic apps for business. He did, however, have the ability to learn fast. Within a month he was programming C and reading schematics. Shortly thereafter he was writing interrupt service routines etc. Sure the first few cuts were poor, but he soon got the hang of it. Within a few months he was excellent.

    • The classic I just picked up recently is "Threaded Interpretive Languages" by R.G. Loeliger. I read this once from an employer's technical library sometime back in the 1980s. The Christchurch library had one which I borrowed a few times. Unfortunately it was lost in the earthquakes. I finally bought my own from a second-hand dealer. Although I don't program in FORTH any more, it is sometimes refreshing, and nostalgic, to remember the old days when your whole programming environment could live in 8k bytes or less and a 64k byte system would be enough for a multi-user system. Loeliger wrote his own TIL (FORTH clone) in 6 weeks of evenings. Oh well, off to my day job now to debug some problems in Linux running on a multi-hundred-MHz, multi-hundred-Mbyte SOC.

    • If a hiring person is reading 50 resumes, then don't expect them to read a 10-page resume and find what they need to see deep in page 7. If you don't get the reader's interest in half a page then forget it. If you have not 90% convinced the reader by the end of your first page you've lost them. Think of the cover letter as a one-page resume and the rest as an appendix of backup info. That one-page cover letter should show that you are a match for their needs and how you can add value.

    • I think that far too many firmware people work on so many different things that it is often pointless to give your whole life history. Nobody has time to read through all the cool Z80 stuff you did in the 80s (unless it is relevant). Rather, custom-write your resume for each pitch to show that your skills are relevant to their needs. Stress general problem solving skills and the ability to learn on the job. Times are always changing. Selling yourself as the "technology-x guy" puts you in a box and makes you obsolete when technology-x is replaced by technology-y. Skills are not everything. Show that you can add value and are flexible.

    • This is surely not so much about standardising the RTOS as standardising the HAL. If the software interfaces to the resources - especially the peripherals - are standardised then it becomes easier to migrate software from one chip to another. Sure that means that RTOS vendors using these interfaces will not be able to differentiate based on what chips they support, but so what? Surely this has got to be a good thing for the industry. Increasingly, the investment is in software and it makes sense to be able to migrate that software from one part to another with the least possible pain. This industry is all about change. Those that facilitate change will win. Those that resist will suffer.
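
      A minimal sketch of what such a standardised peripheral interface could look like in C - the struct and function names here are invented for illustration, not taken from any actual standard:

          #include <stdint.h>
          #include <stddef.h>

          /* Hypothetical standardised UART interface: application code is
           * written against this struct, and each silicon vendor supplies
           * an implementation for its own part. */
          typedef struct hal_uart_ops {
              int (*init)(uint32_t baud);
              int (*write)(const uint8_t *buf, size_t len);
              int (*read)(uint8_t *buf, size_t len);
          } hal_uart_ops;

          /* Provided by the vendor's board support package. */
          extern const hal_uart_ops *hal_uart_get(int port);

          /* Application code stays the same when the chip changes. */
          static void log_banner(void)
          {
              const hal_uart_ops *uart = hal_uart_get(0);
              uart->init(115200);
              uart->write((const uint8_t *)"booted\r\n", 8);
          }

      Swap the vendor's implementation underneath and the application code above should not need to change - which is the whole point of standardising the HAL rather than the RTOS.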

    • I think your statement that the model need not be rooted in physics is very true. Sometimes a completely unrelated physical analogy is a good model. One thing though: ensure that the models actually do relate mathematically to the Real World physics. For example, if you are modeling something analogous to acceleration, ensure that your model looks something like x/t^2. If you find you have a cube in there because it seems to work, then expect some instability down the track.
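
      To make that concrete, a minimal sanity check for the acceleration example: under constant acceleration the basic kinematics give

          x = \tfrac{1}{2} a t^{2} \quad\Longrightarrow\quad a = \frac{2x}{t^{2}}

      so any acceleration-like term in the model should scale as x/t^2; a t^3 in there that merely "seems to work" is a warning sign.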

    • My cell-phone fear is what it is doing to our thinking. I guess the rot really started with TV, which broke programming into 7-minute chunks between ads. That soon had an impact on concentration, and schools had to structure the delivery of ideas in 7-minute chunks to prevent overloading the kids. Then came our instant communications technologies of texting and tweeting, as well as the emergence of touchy-feely education that encourages kids to feel and express their emotions rather than develop and express coherent and rational ideas. It is impossible to develop and articulate an idea in 140 (or whatever) characters. Even thinking for 140 characters has become too hard, and now most opinions and comments are expressed in ten or fewer keystrokes: LOL, re-tweets and memes (canned idea-bites). Why think when you can just pull a predigested response out of the bucket? The ultimate degradation must be the single-click responses of down-voting, up-voting, Like and such: just feel the emotion on reading the first sentence of an article and fire off an emotional response. These have no meaning, just emotion. Is it at all surprising that critical thought is waning and "common sense" is no longer at all common?

    • Exactly the same is possible in C. For a good example, look at the way Linux works.

    • Yes, it was a great engineering achievement. The point I tried, but failed, to make was that Apollo really was not a leap forward in technology. In many ways, the V2 rockets of WW2 were more "bleeding edge", and that program could have delivered people into space just as easily, 20 years earlier.

    • Using volatile where possible is just such terrible advice. You suggest that using volatile can make all sorts of problems go away. Just spreading volatiles everywhere like magic fairy dust is an incredibly bad way to engineer code. Just making stuff volatile without due cause forces the compiler to make sub-optimal decisions and defeats optimisation. Volatiles can also increase memory footprint. The correct advice should be: Understand volatile and use where appropriate.
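
      To be constructive, here is a minimal sketch of where volatile genuinely belongs - a memory-mapped register and an ISR-shared flag (the register address and bit are made up for illustration):

          #include <stdint.h>
          #include <stdbool.h>

          /* Memory-mapped status register: hardware can change it at any
           * time, so the compiler must not cache reads of it.
           * (The address is illustrative only.) */
          #define UART_STATUS (*(volatile uint32_t *)0x40001000u)

          /* Flag shared between an ISR and the main loop: volatile stops
           * the compiler hoisting the read out of the polling loop. */
          static volatile bool data_ready;

          void uart_isr(void)
          {
              if (UART_STATUS & 0x01u)
                  data_ready = true;
          }

          void wait_for_data(void)
          {
              while (!data_ready)
                  ;    /* without volatile this could spin forever */
              data_ready = false;
          }

      Ordinary locals and private state need no volatile at all; sprinkling it on them just blocks optimisation for no gain.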

    • It is not really technology that makes going to the moon happen. It is having a purpose. The technology needed to get people to the moon was pretty primitive by today's standards: just a huge tank of kerosene and an oxidant. Control and telemetry were pretty straightforward, as was the materials engineering. What there was, though, was a huge political will and a nation that got excited about achieving something that was thought impossible. After Apollo 11, though, the program had reached the goal set by politicians and the moon became boring, with nothing more to prove. If there is to be a resurgence, then it needs a new goal. What makes a sufficiently ambitious goal? Mars, or other space, is pointless. In the public eye, going to Mars is like going to the moon, just a bit further. And anyway, Mars has already been done to death with rovers.

    • Indeed, a commutator is just a rotary switch that is turned by some mechanical force. Commutators were very commonly used for multi-channel telemetry links until they were usurped by digital switching. A simpler example of how this works is the telemetry on a radiosonde (weather balloon). Each sensor is a capacitor that changes value according to the parameter it is measuring. Hook it up to a VFO and you have an oscillator that changes frequency based on the capacitance changes. For multi-channel muxing you need some sort of switching system to select which sensor/capacitor is wired in to the VFO. Until the end of the 80s, at least, it was common for the switching on radiosondes to be performed by a rotary switching commutator that was rotated either by vanes in a wind-mill configuration or by a slowly rotating "friction motor" powered by the mass of the radiosonde hanging on its line. The beauty of this method is that it is so simple. No fancy kit on the transmitter side, just the VFO. The receiving side just receives a frequency and decodes that to get the info. Phones actually used things called uniselectors (or stepping switches) that would take a step per pulse.

    • Hard-wired passwords are generally a bad idea, but they do have one benefit: When the people at the plant forget the password (or the only guy that knows it gets hit by a bus or goes on vacation) the engineers can still get in and fix it. However that should also require some sort of mechanical security (eg. need a button press on the PLC). All SCADA kit should be networked by VPN, making the passwords moot. As for Stuxnet.... Well that was really a vector on the programming computer and not on the embedded device per se. I don't think there is really any embedded OS that can prevent a Stuxnet-like attack. The only way I can really think to do it is to run the programming software on the PLC and downplay the role of the programmer-PC to being just a dumb web interface.

    • XMOS looks a lot like the Inmos Transputer. Not surprising, given that the architect is ex-Inmos. The biggest issue with these off-beat parts is that they require special coding and tend to be used in a way that exploits the architecture. The architecture becomes a major design dependency of the solution. Your future gets fundamentally linked to the future of the silicon vendor. If they go bust then you need to redesign your whole solution. On the other hand, there is a whole slew of very similar ARM parts from Atmel, ST, NXP and others. While these parts are not footprint compatible, it is easy enough to migrate from one to another. You can even lay out boards to accept parts from different vendors. Sure, the peripherals need different drivers, but quite often they are pretty close and it is even possible to write one set of firmware that runs on different parts from different vendors. Should one vendor go bust (unlikely), you can still source parts from another vendor and keep production rolling.

    • I think perhaps you misunderstand. From what I read, it looks like the step up from 4 to 8 cores gives no appreciable benefit. The diminishing returns you get from running problems in parallel come mainly from the relationship between the problem itself and the hardware. A poorly written OS scheduler can make things worse, but cannot magically make the fundamental problems go away. I don't for a second believe that Windows 8 will provide a solution where Linux does not. Linux works fine on many multi-core systems, and most of the world's supercomputers run Linux. The 12-core Mac Pros work well with OSX and Linux; I don't know about Windows. I would not be in a hurry to drink the Windows 8 Kool-Aid until Windows 8 has actually shipped and proven itself.

    • I think it can be misleading to use Amdahl's Law for typical embedded systems. Amdahl's Law was really derived for Big Iron systems with Big Iron problems. Multi-cores can be effective where you can break the problem up into independent (or nearly independent) execution units. The Parallax Propeller, for instance, has 8 CPUs that execute independently and at full speed so long as they keep within their local memory. They are terrible at sharing memory though. That is pretty good for many situations like software peripherals, but is pretty useless for breaking down a single problem into sub-problems.
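
      For reference, Amdahl's Law bounds the speedup from n cores by the serial fraction s of the work:

          S(n) = \frac{1}{s + \dfrac{1 - s}{n}}, \qquad \lim_{n \to \infty} S(n) = \frac{1}{s}

      With, say, a 20% serial portion, 4 cores give a speedup of about 2.5 and 8 cores only about 3.3, with a hard ceiling of 5 - which is why the step from 4 to 8 cores often buys so little.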

    • Quad speed is far too slow for most direct execution. On a 100MHz CPU executing thumb, the SPI would need to clock at about 2GHz to keep up. This mechanism is only useful when you are prepared to take a punishing performance hit.
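
      A rough back-of-envelope, assuming one 16-bit Thumb fetch per cycle and no cache:

          100\,\text{MHz} \times 16\,\text{bit} \approx 1.6\,\text{Gbit/s of fetch bandwidth}

      A plain single-bit SPI would therefore need to clock at roughly 1.6-2 GHz once command and address overhead is included, and even quad SPI (4 bits per clock) needs around 400 MHz - far beyond typical serial flash parts. Hence the punishing performance hit for direct execution.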

    • Doing so is actually pretty easy if you do it on a file-per-file basis. For an example of how I did this using gcc for ARM, look at http://lejos.svn.sourceforge.net/viewvc/lejos/trunk/nxtvm/platform/nxt/ The basic idea is to make different types of .o file: .oram for stuff in RAM, .orom for stuff in flash. You can use the same idea for thumb vs ARM code: .o16 and .o32. The ldscript then figures out the placement based on file name. The way I did this is pretty rudimentary. It is also possible to do the same thing with #pragmas. GreenHills tools provide mechanisms to help you selectively compile stuff and figure out placements.
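
      A rough equivalent of the file-suffix trick, using GCC section attributes instead (the section names here are made up; the linker script needs matching entries such as .ramcode : { *(.ramcode) } > RAM AT > FLASH):

          /* Time-critical code copied to and executed from RAM. */
          __attribute__((section(".ramcode")))
          void fast_path(void)
          {
              /* hot code here */
          }

          /* Run-rarely code left in (slower) flash. */
          __attribute__((section(".flashcode")))
          void init_once(void)
          {
              /* cold code here */
          }

      The same attribute trick works for mixing ARM and Thumb code, though the copy-to-RAM step and any long-call/veneer details still have to be handled in the startup code and ldscript.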

    • IPV6 does not just give enough addresses to support every mobile phone in the world. It gives enough addresses to assign millions of addresses to each byte of RAM or flash ever made. The address space is approx 3x10^38. Heck, that gives enough space for every atom in every human body in the world to have its own address. The main purpose is to be able to use the space as a way of routing traffic more effectively, and that is where the headaches will come from. Too much flexibility just turns into an unmanageable wild west. Current routing is already far too complex for many people to understand, and IPV6 routing will be more so - hence the need for routing architectures.
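
      The raw numbers, for anyone who wants to check:

          2^{128} \approx 3.4 \times 10^{38} \text{ addresses}

      Against a few billion phones (order 10^9 to 10^10) that is very roughly 10^28 addresses each, so address exhaustion is not the issue - routing and address-plan management are.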

    • These are not at all like OMAP processors. The OMAPs are designed to be run with many MBytes of RAM and flash. Which is better between a 3 ton truck and a Ferrari? Well it depends very much on what you are trying to do. While SPI access to flash memory and SPI booting are not new, their particular way of mapping an SPI part into the memory space might be. Clearly this mechanism would not be intended to provide the main execution space for code since the access speed is far slower than the CPU speed. It could still be useful for boot loading and such.

    • Because the PIC was actually first released in 1975. http://www.ami.ac.uk/courses/ami4655_micros/u01/micro01PIChist.asp

    • Creating subroutines without any call/return structure is still very common in one area of computing: BIOS code. In the early part of BIOS booting, there is no RAM available, and thus no stack, and thus no subroutine calls (on x86 anyway). The way around this is to use a "fake call" mechanism, which is typically written as a macro that generates a sequence of jumps to do the same thing. There are of course limitations - like only one level of subroutine depth.
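
      The real thing is written as assembler macros, but the idea can be sketched in C using GCC's labels-as-values extension (purely illustrative - actual pre-RAM BIOS code keeps the "return address" in a spare register, not a variable):

          #include <stdio.h>

          /* "Call" = remember where to come back to, then jump. */
          #define FAKE_CALL(sub, ret)  do { ret_addr = &&ret; goto sub; ret: ; } while (0)
          /* "Return" = jump back through the saved address. */
          #define FAKE_RETURN()        goto *ret_addr

          int main(void)
          {
              void *ret_addr;                 /* stands in for a link register */

              FAKE_CALL(say_hello, back1);
              FAKE_CALL(say_hello, back2);
              return 0;

          say_hello:                          /* the "subroutine" */
              puts("hello from a fake call");
              FAKE_RETURN();
          }

      The limitation is obvious from the sketch: one saved address means one level of "call" depth, exactly as with the jump-macro approach.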

    • Interesting article. Back in days gone by it was a bit more common to see people design their own CPUs. One place I worked for in the 80s built a vector display processor themselves using an EPROM and a PAL or three. As generic micros have become more capable and FPGAs have become available it is easier to just use generic components.

    • One of the main reasons to put micros in the sensors is to simplify wiring harnesses. For example, look at the complexity in the driver-side door. There is likely all of: a lock, multiple door locking buttons, door open detect, airbag, multiple mirror alignment buttons, the mirror, window up/down for all windows and one or more lights. If that was done with wire, we're talking somewhere near 20 conductors just for one door, with all those signals having to be routed around the car to where they are needed. With a micro that can be reduced to five or so wires going to a far simpler harness. I forget the exact number, but an experiment in the 1990s which replaced a conventional wiring harness with body electronics stripped over 70 pounds of weight from the car. 70 pounds of copper is worth quite a bit - not to mention the cost of the connectors and of building and fitting a complex loom, a very expensive business.

    • Jack, surely you jest about wanting electronics - and software - in your visual loop. When the battery goes flat you'll be groping around trying to find the charger. Every morning you'll have to wait an extra 5 to 10 seconds while your glasses boot. You'll have to deal with tech support to fix problems. Does that sound like progress?

    • "Disposable" just means that the sensors are cheaper than the cost of trying to recover them. That is not too hard to achieve these days when some satellite positioning sensors (eg. cheap GPS) costs less than $10.

    • I'm picking that those $105k starting salaries will be for very few people with very specialised knowledge. For example, take someone like Robert Love (http://rlove.org/). While he was at university he did some work on the Linux kernel, making some significant improvements. By the time he left university he was already in demand. Your general-purpose programmer with non-specific experience and skills won't be starting off at $105k.

    • There is still a responsibility for the designer to have tested the bolts and deemed them suitable for the task. The bolt manufacturer would potentially be liable if they delivered a bad batch. These days software is seldom a component. It is more like an assembly of different components from different vendors. Liability is a lot less clear cut. A bolt does not change its properties much and can be exhaustively described in spec sheets. Software properties, however, can change under different usage patterns. What happens if a memory allocator fails or becomes very slow under certain usage scenarios resulting in a system failure? Do you sue the malloc library writer? What happens if a bug evades all sorts of testing but eventually shows up under slightly different load? Who is liable then? A very real case is one of the Atmel libraries that used a level sensitive interrupt for UART processing. Under most scenarios this works fine. However under certain load conditions this can result in the UART hanging. Apart from the fact that Atmel publish their code with disclaimers, would they be liable?

    • Juries respond to emotion. They don't think. Perhaps the defense lawyer spoke arrogantly or just said the wrong things. I can't see how you could blame a company for safety issues when the product had been disassembled to remove safety features. What next? Are we going to see a car company in court because the owner cut the brake cables and then the brakes didn't work - yet the driver makes a wad of cash because the car did not have airbags?

    • I know you get upset when I make comments on your postings, but here goes anyway... Have a good read of http://en.wikipedia.org/wiki/Rooting_%28Android_OS%29 There are four rooting mechanisms described there, only one of which has anything to do with the OS, and it has since been fixed. That was the keyboard handler running as the root user - outside of the kernel, so not really an OS failure. All the others are done by interfering with the bootloader or firmware upgrade process (by signing with a leaked key), which happens when the OS is not even running. From what I have read, the Playbook root is one of these, but I might be wrong there. It would thus seem incorrect to suggest that none of these issues would happen on a microkernel device. If I had a device running a microkernel (of your choosing) and upgraded it with a firmware image that had been rebuilt with root access, surely that would have just rooted the device? Can a microkernel prevent that? If so, how? If you could point me at white papers to back up your claim I would be most interested.

    • "Who would be liable, for instance, if vendor-A's software made a one-off error because of an issue in vendor-B's antivirus software?" I would blame the system designer. If a critical system was designed using an OS platform that required antivirus software then that is just a grossly bad design decision. Secondly, it is a really bad design decision to hook critical systems up to the wild internet. Stuxnet only happened because Siemens used Windows and the customers/installers then hooked the systems up to the internet. If Siemens has used a different OS, or the systems had been installed in private networks then Stuxnet would not have happened. If you need to make an industrial network available remotely then use a VPN. This is easily achieved by using cheap VPN routers.

    • "Ironically, in no other industry can one get away with shipping known defective products" C'mon Jack, you know that isn't true. Pretty much all products have defects of one sort or another, be those the use of inferior (cheaper) materials that wear out or operator manuals with spelling errors. Almost all products are designed to meet some minimum requirement and outlive their warranty and that is about all. Sure, cars could be built out of titanium, but who would pay for that? Pretty much the same deal with software. In general, software is much the same. So long as it performs adequately to generally do what it needs to do, then isn't that good enough most of the time? Sure there are tiers of mechanical products and software. Compared to a $19.99 household product, you would expect a mechanical widget for aerospace to be built to better tolerances and use more expensive materials and cost a few orders of magnitude more. Same deal for software. If an idiot uses a mind steel bolt from a hardware shop instead of a high quality stress tested steel bolt and the bolt fails resulting in death (yes, I knew a hang glider pilot that killed himself like that) then that is not the fault of the bolt manufacturer. Same deal with software. Just as we can't afford to build everything out of titanium and certified parts, we cannot afford to develop all software to DO178B.

    • If you are going to be that paranoid, then consider that your Big Bad competitor could be getting your employees to inject hostile code. All it needs is a little bit of bribery or blackmail... I think, though, that there is very little deliberately hostile code injected into embedded systems. For the most part it is just bad code.

    • There are at least two real problems with this approach: 1) This is an abuse of C++. The whole point of C++ is to provide abstract interfaces. To do that you need to have a base timer class and then provide a derived class. If you did this properly, using a base class, it would not work because the vtables etc. would mess up the placement. 2) This is a very contrived and over-simplified example that is unlikely to work for real-world cases. Most peripherals also require some extra state stored in RAM. That can't be put in the same object, as it needs to be placed in RAM and not mapped over the registers. You can easily do OO in plain old C. You don't need C++. There are numerous examples of this in the Linux kernel. I personally find this far cleaner and easier to understand than C++.
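
      A rough sketch of the plain-C approach - loosely in the style of the Linux kernel's struct-of-function-pointers pattern (file_operations and friends); the names here are invented for illustration:

          #include <stdint.h>

          /* The "abstract interface": a table of function pointers. */
          struct timer_ops {
              void (*start)(void *ctx, uint32_t ticks);
              void (*stop)(void *ctx);
          };

          /* A concrete "implementation": register pointer plus RAM-side
           * state, kept separate so nothing gets mapped over registers. */
          struct hw_timer {
              volatile uint32_t *regs;        /* points at the peripheral */
              const struct timer_ops *ops;    /* the interface */
              uint32_t reload;                /* lives in ordinary RAM */
          };

          static void hw_timer_start(void *ctx, uint32_t ticks)
          {
              struct hw_timer *t = ctx;
              t->reload = ticks;
              t->regs[0] = ticks;             /* illustrative register layout */
          }

          static void hw_timer_stop(void *ctx)
          {
              struct hw_timer *t = ctx;
              t->regs[1] = 0;
          }

          static const struct timer_ops hw_timer_ops = {
              .start = hw_timer_start,
              .stop  = hw_timer_stop,
          };

      No vtable placement surprises: the register block stays exactly where the datasheet puts it, and the RAM-side state is explicit rather than hidden inside an object overlaid on the peripheral.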

    • Releasing early is fine so long as you can manage the expectations well. That is often an easy enough thing to do with more traditional software. eg if you want the customer to play with a UI to verify a screen layout, forms etc. Managing expectations in an embedded system is a lot more difficult. That is particularly true if the software can cause catastrophic failure (eg. a too early release of an engine control system that destroys an engine will likely earn you an undeserved reputation as the cowboys that destroyed the test engine).

    • Well, sorry to sink your boat, but microkernels don't really protect against worms etc. Microkernels do provide some protection against crashing (eg. a buggy driver can crash and restart without taking down the whole running OS), but they don't necessarily prevent running worms. Typical microkernels run all OS processes in a privileged mode, which allows worms to propagate. Linux pretty much does not have the flaw you mention, because the Linux kernel community understands the foolishness of running PDF converters in the kernel and would just plain not do it. Why Microsoft would do such a thing is beyond understanding. Perhaps their process switching is too inefficient and the code was stuffed into the kernel for performance reasons. It is somewhat funny that while Linux started off monolithic, there has been a tendency to move towards more service daemons - slightly towards microkernels.

    • Very, very generally speaking, DSPs tend to be "deep embedded" and tend to be tied to general purpose CPUs. The general purpose CPUs tend to run the network stacks, file systems and other code one generally associates with an OS. In that model, the DSPs tend to use either no OS or the very slimmest of RTOS/scheduler framework. Of course the real-world picture is more complex. These days there are many high-end DSPs with a rich peripheral set that are perfectly capable of running a reasonable OS. Heck, there is even a port of Linux to the Blackfin DSPs!

    • If I gave you the impression that I think agile is chaotic but other more traditional methods are all purity and light, then I wrote poorly. What I tried to convey is that these are the perceptions that someone looking in from outside can form. If a critical product fails there will be lawyers involved. They will circle looking for a weak spot - trying to find a way to convince a jury that you are a cowboy outfit and have been negligent. An articulate lawyer could easily take the Agile Manifesto and make it look like a recipe for screwing around in an uncontrolled manner. Having said that though, the Agile Manifesto seems to be primarily written for groups doing contract programming for clients and for software that evolves with time (eg. the software running your favourite web site). The Agile Manifesto has less relevance when delivering most deep embedded software:
      * Individuals and interactions: Very meaningful in a system used by people. Almost meaningless in the software controlling a throttle control system.
      * Working software over docs and specs: What does "working" actually mean? In deep embedded it generally only means conforming to spec, having safe failure modes etc.
      * Customer collaboration: There are seldom customers. Even if you're selling your throttle controller to people to integrate, they want specs on how it performs - not warm fuzzies.
      * Respond to change: Yup, makes sense. But it had better be well specced what those changes are, and you'd better not change certain aspects.
      Critical embedded developers care far more about whether a Spin model shows a component interaction to be safe than about a quickly scribbled component interaction diagram. Software development, particularly embedded, has always been challenging and I doubt it will ever "work well".

    • Are these real snippets from working code? Don't you need a volatile? I struggle to see the point of such a convoluted way to do this when there are far simpler approaches.