Charles Manning

Firmware/Software Engineer

cdhmanning's contributions
Comments
    • This is increasingly becoming the lot of the 8-bitter: some non-CPU circuit needs a small amount of processing power, so put in an 8-bitter. This PIC is basically an analogue circuit with a small amount of digital processing; the same can be said of some other devices, such as the lower-end Cypress PSoC and EZ-USB parts.

    • Some of the TI OMAP parts (and probably others from other vendors) are three layers: CPU, DRAM and NAND flash in one PoP (package-on-package) stack. Very dense. This simplifies tracking and reduces the number of layers a board needs. It makes it easier to pass emissions tests too. Surely core is RAM? You can read/write it randomly. I too used a Univac at university... punch cards and everything. I once tripped going down some stairs and dropped a 2000-card box containing the source for a compiler... it took a while to get that back in order.

    • So what is "the right thing"? Sure, the Therac-25s nuked six people over a two-year (or so) period, of which three died. But what about all the lives the Therac units saved? If the product release had been held up for six months, the downside would have been thousands of lives lost. The same goes for the Patriot missiles. Yup, it was a really stupid bug that ended up killing 28 people, but the Patriot's SCUD-busting probably saved hundreds of lives. We all freak out when software fails, but seem to mind less when other stuff fails. GM's switch failure was way worse in all possible ways than Toyota's code failures... and we still don't actually have a smoking gun. Stuff breaks and people die, whether that is metal fatigue or corrosion or software faults under stress. Ultimately we're always playing cost vs benefit vs risk. Without that we'd have no chainsaws, electricity, matches, ice skates,... Sure, Barr showed some sub-standard engineering practices at Toyota, but he failed to find an actual fault that caused a problem.

    • It seems to me that adding more layers, particularly "intelligent" layers, makes these systems more vulnerable to security issues. What were just dumb peripherals under OS control (such as Ethernet) are becoming communications "subsystems". Since these are frequently bus masters, they can often access the entire system... a gaping security hole if those with creative minds get to dabble with them.

    • Hi Jack. These days many of the IDEs are perched on top of GDB. This makes it possible to run GDB scripts either in the IDE or alongside it. GDB scripting is a pretty featuresome scripting language that allows you to do all sorts of things, such as traversing OS data structures (tasks, resource lists...), monitoring watchpoints,... An example of this is the Apple OS X scripts at: http://www.opensource.apple.com/source/xnu/xnu-792.13.8/kgmacros (A minimal sketch of this kind of script appears after these comments.)

    • "methodologies as sophisticated as those used in many hardware disciplines" OK, I'll bite. I really struggle to see how hardware engineers have better methodologies than software engineers. How about some concrete examples? Hardware engineers use DRC. Software engineers use source code checkers (lint etc). Hardware engineers design a circuit, then test it and tweak it until it works. Software engineers design software, then test it and tweak it until it works. Software engineers do many things that hardware engineers don't do (or only do very rarely): * Good revision control. Most hardware designers use terrible revision control. * Automated testing. * Continuous integration.

    • Jack, this is a very interesting take on software costs. Unfortunately the cost/benefit analysis is normally done at the start of the project rather than after something like this happens.
      While Toyota undoubtedly made some mistakes, I am not aware of any "smoking gun" identifying a particular failure path. Are you aware of any such findings? Until we find a "smoking gun" we won't know whether $80/line, or even $1000/line, software would have fixed the problem. Violating MISRA, or some stack guidelines, does not inherently mean the code fails. In many cases these are just "taste" issues where some person claims their coding style is better than someone else's.
      The hardware also raises some interesting issues. Who's to say the micro does not have some "interesting" failure modes of its own, and that the issue was caused by software at all? All we know is that there were potentially issues in the software, because the software could be reviewed. Just because the hardware is impossible to review does not make it immune from problems. In many ways what we have here is a scapegoat.
      What is much more disturbing, IMHO, is GM's current airbag saga: 303 deaths due to a mechanical switch issue which anyone can understand and verify. This is much easier to verify than software problems, yet it went unsolved for ages. Perhaps it is not so much a software issue as a basic problem with vehicle failure analysis and increasing expectations from car owners. http://www.usatoday.com/story/money/cars/2014/03/13/gm-recall-death-nhtsa-airbag/6401257/
      As for software defects killing people in aircraft, a lot will depend on the definition of "defect". If you consider a "defect" to be failure to meet specification, then you are probably correct. If it is, instead, "failure to perform in a good way", then one could say that some of the Airbus crashes linked to the stall override were defects.

    • I think it is unlikely that we can assume IPv6 everywhere. I can't see the need. Let's say I saw enough value in IoT to have, say, 200 "things" in my house. Do I really need every lightbulb to be individually addressable from across the world? No. I would have some sort of "house controller" gateway that would be visible to the outside world, and the rest can be done on a private network with 192.168.x.y addressing, just like I use for all my current 10 or so computers, 4 iPods, printers, development boards,... all up maybe 30 devices.
      If there is ever an IoT network in every house, it will be via house gateways (that do NAT) and then cloud services like Apple's "Back to My Mac", which basically provides a way to access stuff using a new namespace. That kind of networking does not need IPv6 to work; IPv4 is working fine. Adding IPv6 as a requirement for IoT will just be another hurdle making it harder to achieve.
      Many, if not most, houses now have infrastructure of sorts in them (even if just Wi-Fi routers). How many are IPv6-ready? Can you really expect people to throw out existing kit to get their lightbulbs going? Like IoT, IPv6 is another solution without a real problem (yet).
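
To make the GDB-scripting comment above a little more concrete, here is a minimal sketch (not taken from the kgmacros file linked there) of a GDB Python script that defines a command to walk an RTOS task list. The symbol and field names (task_list_head, next, name, state) are hypothetical stand-ins; substitute whatever structures your OS actually uses.

    # walk_tasks.py -- minimal GDB Python sketch: define a "walk-tasks" command
    # that traverses a (hypothetical) singly linked list of task structures.
    # Load with: (gdb) source walk_tasks.py   then run: (gdb) walk-tasks
    import gdb

    class WalkTasks(gdb.Command):
        """Print every task on the hypothetical task_list_head list."""

        def __init__(self):
            super(WalkTasks, self).__init__("walk-tasks", gdb.COMMAND_USER)

        def invoke(self, arg, from_tty):
            # task_list_head is assumed to be a 'struct task *' in the target.
            t = gdb.parse_and_eval("task_list_head")
            while int(t) != 0:
                task = t.dereference()
                # 'name' is assumed to be a char*, 'state' an integer field.
                print("task %-16s state=%d" % (task["name"].string(),
                                               int(task["state"])))
                t = task["next"]

    WalkTasks()  # instantiating the class registers the command with GDB

The same sort of traversal can also be written in plain GDB command language (define ... end), which is the style the kgmacros file uses.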