cdhmanning

Firmware/Software Engineer


cdhmanning's contributions

Comments
    • All these layers are designed around the idea of one kernel that runs on different sets of hardware. That is important for distribution-oriented software, e.g. Ubuntu, or for having one Android release that boots across a wide range of phones with different hardware. For those platforms, the flexibility matters more than the bloat it brings. However, it does nothing for a wide range of Linux ARM devices - the sort I most often deal with - where the kernel is custom-configured and built for one particular board. In those cases the flexibility has no value. The FDT does, however, make the hardware layout easier to describe than the platform description code that went before (and which was always changing). The real benefit seems to be in having a description that is reasonably static.

    • The level of control required surely depends on the role the device is playing. If the device is just providing information or is a passive output (e.g. showing your current blood pressure), then it really does not matter that much. If, however, the device is directly controlling an insulin pump or the like, that is an entirely different matter.

    • 2 duff cells out of 42? I guess it really depends on your warranty policy as to whether they make a useful power source or not. I'm not sure why you'd design a coin-cell-driven system around a micro that chomps through 10 mA when active. Surely the vast bulk of these applications only require the smallest amount of processing: you're not calculating pi, just reading a few inputs and setting a few outputs. A quick look at the Atmel tinyAVR suggests that running at 1 MHz (faster than an 8051 at 12 MHz - a fast micro in the 1980s) takes only about 200 uA at 2 V. Some of the tinyAVR devices will run down to 0.7 V (but they use a built-in boost converter, so draw more current). A rough run-time comparison is sketched after these comments.

    • This is increasingly becoming the lot of the 8-bitter: some non-CPU circuit needs a small amount of processing power, so an 8-bitter gets dropped in. This PIC is basically an analogue circuit with a small amount of digital processing; the same can be said of some other parts, such as the lower-end Cypress PSoC and EZ-USB devices.

    • Some of the TI OMAP parts (and probably others from other vendors) are three layers - CPU, DRAM and NAND flash - in one POP stack. Very dense. This simplifies PCB routing and reduces the number of layers a board needs; it makes it easier to pass emissions tests too. Surely core is RAM? You can read and write it randomly. I too used a Univac at university... punch cards and everything. I once tripped going down some stairs and dropped a 2000-card box containing the source for a compiler... it took a while to get that back in order.

    • So what is "the right thing"? Sure, the Therac-25s nuked six people over a two-year (or so) period, three of whom died. But what about all the lives the Therac units saved? If the product release had been held up for six months, the downside would have been thousands of lives lost. The same goes for the Patriot missiles. Yup, it was a really stupid bug that ended up killing 28 people (the clock-drift mechanism is sketched after these comments), but the Patriot's SCUD-busting probably saved hundreds of lives. We all freak out when software fails, but seem to mind less when other stuff fails. GM's ignition-switch failure was way worse in all possible ways than Toyota's code failures... and we still don't actually have a smoking gun for Toyota. Stuff breaks and people die - whether through metal fatigue, corrosion or software faults under stress. Ultimately we're always trading off cost vs benefit vs risk. Without that we'd have no chainsaws, electricity, matches, ice skates,... Sure, Barr showed some sub-standard engineering practices at Toyota, but he failed to find an actual fault that caused a problem.

    • It seems to me that adding more layers, particularly "intelligent" layers, makes these systems more vulnerable to security issues. What were just dumb peripherals under OS control (such as Ethernet) are becoming communications "subsystems". Since these are frequently bus masters, they can often access the entire system... a gaping security hole if those with creative minds get to dabble with them.

    • Hi Jack. These days many of the IDEs are perched on top of GDB, which makes it possible to run GDB scripts either in the IDE or alongside it. GDB's scripting support is pretty feature-rich and lets you do all sorts of things, such as traversing OS data structures (tasks, resource lists...), monitoring watchpoints,... An example of this is the Apple OSX scripts at: http://www.opensource.apple.com/source/xnu/xnu-792.13.8/kgmacros (a small GDB Python sketch also follows after these comments).

    • "methodologies as sophisticated as those used in many hardware disciplines" OK, I'll bite. I really struggle to see how hardware engineers have better methodologies than software engineers. How about some concrete examples? Hardware engineers use DRC. Software engineers use source code checkers (lint etc). Hardware engineers design a circuit, then test it and tweak it until it works. Software engineers design software, then test it and tweak it until it works. Software engineers do many things that hardware engineers don't do (or only do very rarely): * Good revision control. Most hardware designers use terrible revision control. * Automated testing. * Continuous integration.