
GordonScott

Design Engineering

Electronics and software engineering for a long time, extending into acoustics, optics, mechanics and some other engineering areas such as naval architecture. Significant management experience, though now working semi-independently.

GordonScott's contributions
Comments
    • Many years ago I did some tests with an MC68000 compiler to see how much faster I could make an interrupt handler than the compiler itself managed. I failed to equal the compiler. It used some techniques with the CPU's instruction set that I'd never seen and, in honesty, didn't fully understand. From then on, mostly I trust that the compiler does its job as it should and only occasionally check to see if I can do significantly better. Of course I use code that should optimise well, but I don't get precious about it. Hopefully I also suffer fewer nasty surprises. I think this article mostly demonstrates that the X8 compiler could be better rather than that the techniques are necessarily helpful. Sensible to check occasionally, though, both for compactness and for speed of execution. Mostly, one should be impressed by, or at least satisfied with, the compiler.

    • There are other aspects to this. Sometimes it's necessary to check that the PCB fab has put the layers in the correct order. I remember a colleague spending several days trying to work out why a 2.4GHz radio design wasn't working as expected. Despite the layers documentation, the fab had built it layers 1, 3, 2, 4. Ouch. After that we put a layers staircase at the edge of the boards to aid visual inspection. Also, unless one can afford/justify one of those thermal prediction packages, thermal features still need visual inspection. As I guess also does symmetry in high potential noise plane areas. "Low skilled EEs with almost no experience" ... those are the low-cost ones, of course. You usually get what you pay for. IMHO.

    • Maybe it's the anarchist in me, but I always thought the perfect user interface was a hammer :-)

    • I've just realised that you may have thought I was poking fun. Well, I guess I was a bit, but I meant no offence and I worry I may have caused some. Sorry if I have. Meaningful flashes and beeps _are_ a way to impart information and a reminder of that is useful and probably timely. People get seduced by technology, when sometimes a simple method really _is_ the best way. Now, if I can just send Morse with a button, I can have a fully interactive interface :-) Sadly, reading Morse in software is much harder than sending it :-(

    • I think possibly both @K7IQ and I were being ironic. There are new things in the world, or at least new things for us to learn. I've always been in thrall to the progression of electromagnetic propagation from DC through stripline, slotline and waveguides to light. I've just been reading how various glass weaves in PCB substrates can affect impedance and propagation. New to me, though with hindsight inevitable. FWIW, I've been (occasionally) using Morse like this since around 1975. Navigation buoys and beacons may well have been using it longer still. I had a fault some time back in a car, where a dash-panel LED was flashing. I reported to the garage and they said "the flashing gives a message, but you won't be able to understand it". So I told them the Morse and they said "Ah, OK, yes!".

    • Then we could move it to infra red and maybe call it IRDA or something :-) Yes, I too have used Morse code from LEDs and beepers. The first time I heard a mobile 'phone go ... -- ... I laughed out loud.

    • HeHe, as someone who was working in radiopaging from back in the '70s, I can say with confidence that the quest for ultra-low power consumption and long battery life has been around a long time and will not go away. Making sure that Off really is Off. Tiny SMPSUs that switch on and off quickly and draw virtually zero current when off. Leakage current through 'Off' discrete components. And so on. A significant issue then was always how best to 'Off' the radio. It has to listen, but leaving it on permanently is obviously catastrophic for power consumption. I haven't done much recently with radio ... I presume BlueTooth, ZigBee, et al all have some suitable method. I wonder whether it can be improved still further for IoT.

    • I've always found the typical 'for' hard to read. Too much clutter for me!

          counter = 0;
          step    = 1;

          while( counter < 10 )
          {
              // do the work
              counter += step;
          }

    • This is all personal preference, of course, but I've always preferred == and = to := and = as I think == is easier to see than := That colon is fairly small. I miss it sometimes in Python and I sometimes miss semicolons when swapping between languages. It's a shame, and frankly rather silly, that we don't have completely different single symbols for assignment and equality. All those years of computer science and experience and we still have a stupid syntactical construct that appears almost designed to trip us up. I've also always preferred {} to words. I think the shapes are easier to match visually than the words. In terms of non-programmer readability, Cobol takes some beating, looking something like this: Multiply ListPrice by DiscountFactor giving DiscountPrice Don't trust me on absolutes .. it's a long, long time since I touched Cobol. Notice even there, though, that if I discount by 30% and use 30 as DiscountFactor, that line is broken. Ho Hum. But Price *= DiscountFactor; is perhaps more readable if one understands the language.
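
      The classic trip-up being referred to, as a minimal sketch (the variable is invented purely for illustration):

          #include <stdio.h>

          int main(void)
          {
              int x = 5;

              if( x = 0 )      /* assignment, not comparison: always false here,
                                  and x has silently become 0                    */
                  printf( "never printed\n" );

              if( x == 0 )     /* the comparison that was almost certainly meant */
                  printf( "x is now zero\n" );

              return 0;
          }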

    • I agree entirely about that style for multiple tests (etc.), in particular also the use of each test in its own parenthesis. Without those, some tests may well fail subtly and may well be missed. FWIW, for readability and intelligibility, I virtually never use for() loops either. Far too much going on in too small a space for my liking. Spread it out into an ordinary while() loop.
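
      The sort of layout meant, sketched with invented tests so each one sits in its own parentheses on its own line:

          #include <stdio.h>

          #define FAN_OFF 0

          int main(void)
          {
              int temperature   = 85;
              int fan_state     = FAN_OFF;
              int override_flag = 0;

              /* Each test in its own parentheses, one per line, so a subtle
                 failure in any one of them is easy to spot.                 */
              if( ( temperature   >  80      ) &&
                  ( fan_state     == FAN_OFF ) &&
                  ( override_flag == 0       ) )
              {
                  printf( "start the fan\n" );
              }

              return 0;
          }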

    • Open plan offices are championed by people who, in my firm opinion, have never actually worked in one. They sound a great idea, it's easy to talk to other people, there's a buzz. It's almost inevitably also very, very, noisy and distracting.

    • I was involved in an 8008-based system in the early/mid 70s and remember it well. The 8080 was released during the later stages of development and we asked Intel then for price predictions. The 8008 was £165, the 8080 was £250 and Intel stated that there was "no possibility that the 8080 would ever fall to the same price as the 8008". £250 then is equivalent to something like $5000 now. Interrupt latency was about 1ms as all the registers were pushed/popped to external latches. We wrote our own assembler on a Nova 3, as AFAICR there was none, and used a paper tape bootloader on the 8008. Despite the puny processor the machine was extraordinarily capable, mostly by having the fastest bits in hardware. It used a state machine that had two main paths at 40ms intervals, each of which handled one 'major' function and then some secondary functions. All hardware had to buffer that 40ms, using latches, shift-registers, or whatever. We bank-switched some of the memory and each task had a 64byte workspace, including state-machine, configuration, data and scratchpad. It was a paging system that handled up to 10,000 digital pagers and (IIRC) 256 input devices, each with its own I/O board including dumb keypad/display units and multiple telephone lines. The largest we ever needed to build was less than 64 I/O boards. Working in that space was a bit like trying to play one of those sliding-tile puzzles. It still feels extraordinary what we managed to get out of that CPU.

    • My telephone has caller-ID and plays both the outgoing message and the incoming messages out loud. My outgoing message says: "Welcome to Gordon and Sue's cold-call filter. If you really think we might talk to you, just keep on talking and we may answer. If we don't, we may just not be near the 'phone, so leave a message" Callers whose numbers I don't recognise usually hang up around one of those full stops. Occasionally I answer and give them a major run-around.

    • FWIW, I tend to think of the minimal loop as an event switch rather than as a task or thread switch. Each called function first checks if its event has happened, and just returns if not. For the following, I've translated taskcounter to eventcounter. If an event needs to schedule a further event, after setting the next event handler's flag or counter for that event, it can (but need not) also do: eventcounter = EVENT_X before it returns to the main loop, which will then call event_x() immediately. Important .. ensure you manage the increment of eventcounter appropriately. The given minimal example doesn't need the for() loop, just the while(1) and: if( ++taskcounter >= NUMTASKS) taskcounter = 0; that will, of course, also auto-initialise when first run, though very likely to a random valid task/event index. The arrangement can also be adjusted to take events from a queue/fifo instead of just cycling through the list. Important .. make sure you know what happens when there's no event to run. Whether they're tasks or events, any can have their own state-machine. As you say, these are not poor relations. They're very frugal relations. Many useful libraries have presentations that may be used with these tight loops. lwIP and uIP, for example.
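
      A minimal sketch of the event switch described above; all the names (NUM_EVENTS, event_x and so on) are illustrative only, and real handlers would of course do more than clear their flag:

          /* Cooperative event switch: cycle through the handlers, each of
             which checks its own flag and returns immediately if idle.    */
          enum { EVENT_X, EVENT_Y, NUM_EVENTS };

          static volatile unsigned char event_flag[NUM_EVENTS];   /* set by ISRs, etc. */
          static unsigned eventcounter;

          static void event_x(void)
          {
              if( !event_flag[EVENT_X] )       /* nothing to do .. just return */
                  return;
              event_flag[EVENT_X] = 0;
              /* handle the event */
          }

          static void event_y(void)
          {
              if( !event_flag[EVENT_Y] )
                  return;
              event_flag[EVENT_Y] = 0;
              /* handle the event */
          }

          int main(void)
          {
              while(1)
              {
                  switch( eventcounter )
                  {
                      case EVENT_X: event_x(); break;
                      case EVENT_Y: event_y(); break;
                      default:                 break;
                  }

                  if( ++eventcounter >= NUM_EVENTS )   /* also self-initialises on first pass */
                      eventcounter = 0;
              }
          }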

    • Have you noticed how often we solve a problem by explaining it to someone else? Sometimes the very fact of raising a concern that things are going awry is sufficient to fix the problem. Advice or resources early enough usually helps. Too late and it often just makes things worse. Small fires are much easier to extinguish than large fires.

    • As a boss, I agree entirely and always took that attitude. Most engineers and programmers know what they're doing. If I think they may be making a mistake I'll ask them how they plan to deal with the issue. Generally they'll either give an answer I hadn't considered or they'll go away and think about it, maybe asking me for advice. I virtually never tell them what to do or how to do it. Very occasionally, if I think they may be heading for a crash, I'll ask them to put in some hooks that I believe will rescue them if it happens. And, if it does all go wrong, I never say "I told you so", or give them a thrashing. I immediately support them in trying to get whatever it is fixed. Almost my rule No. 1 is that if things go wrong, I want them to come and tell me ASAP. If they're frightened of my reaction, they won't do that ... and that costs so much! Most people absolutely(!) resent their boss doing their job for them, or at least feel they're wasting half their time because the boss is always telling them the answers. Curiously, many other 'bosses' think I'm mad. Well, until they think it through properly.

    • Good article. Most or all of that, though, _should_ be covered by ordinary education in electronics, though a reminder never hurts. Something I've been caught out by a few times in the past is the relationship between those fast edges and some low-speed "nothing to worry about" lines, particularly system-wide reset lines. These are often a long, slightly rambling path around the board. They have a capacitor to ground typically at one end and they sometimes pass close by those signals with fast rise times. Now let me see, longish trace coupled to ground at one end and driven part-way along its length via capacitive or inductive coupling from a signal with high frequency components, the rest of the trace a high-ish impedance. Yep ... that's definitely an aerial/antenna. No wonder we have a strange EMI spike up around xxxMHz. Doh! Some distributed capacitors usually quieten it down.

    • FWIW, and sadly that's not much, I believe the original intention of the Unix time was that it should be unsigned and at least 32 bits. However because machine architectures vary and to allow for larger integers, the error value was defined as '-1'. That of course led to many people using signed instead of unsigned.
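
      By way of illustration, the standard library still reflects that convention: time() reports failure as (time_t)-1, which is exactly how the signed interpretation creeps in:

          #include <stdio.h>
          #include <time.h>

          int main(void)
          {
              time_t now = time(NULL);

              /* The error value is defined as (time_t)-1, so it is all too
                 easy to start treating the whole type as signed.           */
              if( now == (time_t)-1 )
                  printf( "time() failed\n" );
              else
                  printf( "seconds since the epoch: %lld\n", (long long)now );

              return 0;
          }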

    • Many years ago I was in a company that had a good record for product development, both hardware and software. Around 95% of our projects were delivered, normally with somewhat better than 80% of the features fully as expected .. remembering that some features were late arrivals and some were 'if we can'. Our main worry was that we always overran the target timescales & budgets. Studies at the time were suggesting an industry-wide performance that was appallingly lower than that. Only 10% of projects that delivered ever worked properly, many only after extensive rewriting, many projects failed to be delivered at all, everything overran. "By applying the industry best practices" they said, we could improve that to 60% successful delivery. Well by those metrics we were already pretty good, but we began to use the "best practices" to make ourselves even better. Sure enough, within a couple of years we'd improved to delivering about 60% success, just like the other 'best practice' groups. Bizarrely, upper management could not see the anomaly. Of course I realised later why we always overran. Sometimes, of course, it's because they can never be anything more than our best estimate. But almost always it was also because the estimates were declared 'too high' and we had demands to reduce them somehow. Interesting that they then so often secretly double that resulting estimate "because engineering always overruns". Doh!

    • I shall get and expect to read this book as it looks as though it will offer useful data. But just Amazon's 'Look Inside' raises a couple of doubts, the first in the very first paragraph of the preface: "When it comes to computing, it often seems that a couple of glasses of beer and an anecdote about a startup in Warsaw are all the 'evidence' that most programmers expect.". Glossing over for the moment that that comment is offensive both to me and probably to Polish programmers, it seems also of itself to be entirely opinion, with none of the rigour that the author demands. I also wonder at quite whom the book is aimed. That preview reads in parts like a "Managers' Guide to Herding Cats". But then it's a small preview.

    • Yes, well, I preferred the indenting I typed :-( But it's intelligible.

    • I hope my format tweak works .. 'comments' doesn't accept angle brackets :-( As most CPU manuals have tables for registers showing the bit number, I tend to code it this way:

          #define MY_CONF_BIT_NUM  5
          #define MY_CONF_BIT      (1 << MY_CONF_BIT_NUM)
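
      And a usage sketch of the same pair of defines; CONFIG_REG here is a made-up stand-in for a real register:

          #include <stdint.h>

          #define MY_CONF_BIT_NUM  5                       /* straight from the manual's table */
          #define MY_CONF_BIT      (1 << MY_CONF_BIT_NUM)

          static volatile uint8_t CONFIG_REG;              /* stand-in for a real register */

          void configure(void)
          {
              CONFIG_REG |=  MY_CONF_BIT;                  /* set bit 5   */
              CONFIG_REG &= ~MY_CONF_BIT;                  /* clear bit 5 */
          }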

    • I presume the old saying is "A good workman never blames his tools" That of course is because the good workman uses good tools and keeps them in good condition. I'm not sure that saying transfers as you suggest. FWIW I've been agile-ish most of my working life, but I always try to get the initial specification close enough that there are only small-ish loose ends to tie up, not serious game-changers.

    • I keep seeing that old line "Garbage in, garbage out". All my software is designed as far as possible to tolerate garbage in and reject it, ignore it, or at least stay within reasonable limits in its presence, especially run-lengths. It's not a panacea, but it helps greatly. I remain shocked by the number of significant programs that have buffer overrun issues. IMHO there's no excuse for that.
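
      A tiny sketch of the 'tolerate it, reject it, or stay within limits' idea, with the run-length bounded; the buffer size is arbitrary:

          #include <stdio.h>
          #include <string.h>

          #define NAME_MAX_LEN 31

          int main(void)
          {
              char name[NAME_MAX_LEN + 1];

              /* fgets() bounds the run-length itself; anything longer is
                 simply truncated rather than overrunning the buffer.      */
              if( fgets( name, sizeof name, stdin ) == NULL )
              {
                  printf( "no input .. ignoring\n" );
                  return 0;
              }

              name[ strcspn( name, "\r\n" ) ] = '\0';      /* strip the line ending */

              if( name[0] == '\0' )
                  printf( "empty input .. rejected\n" );
              else
                  printf( "hello, %s\n", name );

              return 0;
          }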

    • Let me get this right. I can source ARM-based microprocessors from a variety of companies with a broad range of different peripherals and I will have to do some porting between them as they're not all identical and the CMSIS abstraction layer is not great. Or I can buy from Microchip and I won't have any portability issues because I can't buy similar parts from anyone else? Or is it perhaps that I will always have portability issues even if I buy only 32-bit Microchip parts? FWIW, so far I've had fewer nasty surprises moving between ARM processors from Atmel, NXP, STM and TI, from Cortex-M0 to Cortex-A8 than I used to have getting some of the old 8-bit PICs to work across different parts. But then that's just my personal experience.

    • Another vote here for "If you start/need to worry about order of evaluation, unwind/simplify the code". It never ceases to amaze me how many C programmers seem unwilling to add more white-space. Quite a bit of my effort when coding goes towards avoiding mistakes, so I like to do things that are clear and easy to understand, and that protect me from myself. In the case of: a = ++b + ++c; whilst I'm happy that it's behaviour _is_ defined and predictable, I find the number of + symbols in close proximity a little worrying, so would anyway quite likely write: ++b; ++c; a = b + c; Another attraction with breaking calculations into bits is the greater opportunity to catch overflow mistakes in the middle. Similarly with for() loops. For my tastes, there's just too much going on in a small space for me to feel comfortable with them. I personally would much rather write the wanted behaviour with a while loop.

    • Hm, well, actually it came just a few weeks after the RBS debacle where their retail banking computers were down for a couple of weeks, though I'm not so sure that wasn't just that somebody deleted the data, rather than a software error per se.

    • It's simply not true that C has only global scope. Anything declared 'static' is available only within the file that contains it. In addition, anything not formally declared for reference should (if sensible warnings are enabled), at least throw up a warning. Personally, I try very hard to deal with warnings. A warning says that what I'm doing is suspect. The language is our tool. We should understand it and manage it, not blame it.
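
      A minimal illustration, with invented names; only motor_start() is visible outside the file:

          /* motor.c : nothing outside this file can see the statics. */
          static int motor_speed;                 /* file-scope only  */

          static void set_speed( int rpm )        /* file-scope only  */
          {
              motor_speed = rpm;
          }

          /* Only this is visible to other files (via a declaration in a
             header), so the rest stays private to motor.c.             */
          void motor_start( void )
          {
              set_speed( 1000 );
          }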

    • :-) "Except for minor details, C++ is a superset of the C programming language" Bjarne Stroustrup; preface to the first edition of "The C++ Programming Language"

    • FWIW, I've always been a bit uncomfortable about LOC as a metric, which implies that...

          if( ( counter != 0 ) && ( --counter == 0 ) )
              do_this();

      will have fewer bugs than

          if( counter != 0 )
          {
              if( --counter == 0 )
              {
                  do_this();
              }
          }

      .. which is clearly and demonstrably not the case. And the second is also so much easier to test/debug.

    • In much embedded work you are by definition close to the metal. C is a pretty consistent language that works well close to the metal on just about every processor there ever was, and it's fairly consistent across most of those processors. C++ is an attempt to get C further away from the metal, but for embedded work that's not necessarily very helpful. It always seems to me that C++ is too far from the metal for deep embedded work and too far from a nice high-level OO language for more user-application work. On embedded stuff I almost always use C. On user applications, unless I'm constrained by the environment, I'll more likely use Python, Ruby or Tcl. For me at least, typically working deep inside the machine, OO has doubtful benefits, and data abstraction _can_ be a positive hindrance. I need to see what's _really_ going on, and for me personally, C++, Java, et al tend to make unnecessary obfuscations. One other aspect that may be significant. If one writes C code, it can be put in a wrapper and used with C++. If one writes C++ code, it can't be used in pure C environment. I've never consciously made a decision based on that, but I'm sure it's often in the back of my mind in that busiest of places, the "what if?" department. In reality I think there are few pure C environments now, anyway. C++ is available even for many small PICs.
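
      The usual shape of the wrapper being referred to, sketched with an invented driver name; the C code itself doesn't change, only the header:

          /* my_driver.h : a plain C interface, callable from C or C++ */
          #ifndef MY_DRIVER_H
          #define MY_DRIVER_H

          #ifdef __cplusplus
          extern "C" {
          #endif

          int my_driver_init( void );
          int my_driver_read( unsigned char *buffer, unsigned length );

          #ifdef __cplusplus
          }
          #endif

          #endif /* MY_DRIVER_H */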

    • :-) I believe "PIC" as in the ubiquitous microcontroller family was an acronym for "Peripheral Interface Controller". Their intended purpose was to offload and decouple time-critical I/O processing from the main batch-processing CPU. The original PICs date from the early/mid 70s.

    • For at least a good many years, I think it will not stop those, just add to them. I do agree entirely that there are some very stupid and inconsiderate drivers out there. But I'm not convinced that autonomous vehicles would change that much unless all vehicles in the domain are autonomous.

    • Quite a few tricky and interesting challenges... Pedestrians, in all their guises. Wild animals, dogs, etc. Large birds. Roadkill. Motorcycles. Cycles. Mobility scooters. Articulated trailers on bends. Lane switching by others. Lane straddlers. Pot holes and subsidence, and repairs of same. Speed bumps and pinch points? Snow. Ice. Flood. Fog. Obscured entrances/exits. Debris, rock falls, etc. Stuff dropped from bridges! (An increasingly malicious problem here.) Suicides? Road closures. Spilled diesel or oil. Roadway maintenance cones/lanes. Narrow roads with passing places. On which side of the road to drive? Visibility issues like dips in the road? (I know how my brain feels about vehicles 'rising up out of the ground'.) Anticipation: that ball may have a small child running after it. A couple of 'improbable' instances from my own experience: under 10mph in snow on a straight narrow road, a speed pad knocked first front, then rear wheels sideways and I only just stopped the car from spinning. A sheet of metal lying in the road (I was on a motorcycle). A detached wheel rolling diagonally across the carriageway towards me, from ahead and at speed. IMHO, autonomous road vehicles are probably not going to happen.

    • As far as I can see, the only way autonomous vehicles could possibly work is if _only_ autonomous vehicles were allowed to operate on the highways in question, possibly with limited exceptions for maintenance. No non-autonomous vehicles, no people or animals, no other unexpected risks, so they'd likely all have to operate on separate autonomous highways, too. Actually we already have a system somewhat like that. It's called a railway :-)

    • I tend to be amused when I read threads like this. The terms 'embedded' and 'real-time' have a _huge_ scope, from an 8-pin/8-bit signal conditioner via Tablet-PC to multi-PCB, multi-CPU, multi-core DSP systems. And from payroll (not everyone's perception of real-time), to sub-microsecond or sub-picosecond deadlines. The choice of OS varies hugely across that range, from simple event-driven loop, (which personally I don't see as an OS, though some disagree), through a plethora of scheduling algorithms with or without pre-emption, including various different trade-offs between response time, throughput and determinism. Comparing 'Linux' and 'an RTOS' is not so much comparing apples and oranges, as comparing a bunch of bananas with a bowl of mixed nuts. I've seen much worse English than Le's, from Technical Authors and Marketing People, all of whose first language was English.

    • As so often one person's embedded is not another's. Embedded ranges from 6-pin PIC processors with a few hundred instructions and a few bytes of RAM, through smart-phones and iPads to, for example, multi-board VME-bus machines with multiple multi-core processors per board. Just like real-time programming varies from getting the data away before the EMP hits, to payroll. If using C for embedded more than other languages makes me more Real and less Virtual, I guess that's a plus :-) Gordon.

    • I accept that a good hard look may be worthwhile. But when I looked, I followed the link "Oberon at a Glance" and read the very first line: "No access to variables that are neither global nor strictly local" Which combination of syntactical clumsiness, double negative and no full stop made me immediately walk away.

    • Another vote here for focussing first on the algorithms and readability over any kind of compiler hand-holding. No amount of code fine-tuning is going to make a bubble sort as quick as a b-tree, or a b-tree as compact as a bubble sort. I've often been impressed at how good a job compilers can do under the surface. So I don't plan to go there unless something quite special is needed. But sometimes the right algorithm does want shifts or 2^n or whatever. The important thing then is to make sure that's clear in the code:

          //--------------------------------------------
          // CAUTION: This code uses 2^n for efficiency.
          //--------------------------------------------
          #define SAMPLES_N  4                /* by which to raise 2 */
          #define SAMPLES    (1 << SAMPLES_N)
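
      And a sketch of why the define earns its caution banner; the buffer and function are invented for illustration, and the shift is only valid because SAMPLES is a power of two:

          #include <stdint.h>

          #define SAMPLES_N  4                /* by which to raise 2 */
          #define SAMPLES    (1 << SAMPLES_N)

          static uint16_t sample[SAMPLES];

          uint16_t sample_average(void)
          {
              uint32_t sum = 0;
              unsigned i   = 0;

              while( i < SAMPLES )
              {
                  sum += sample[i];
                  i++;
              }

              return (uint16_t)( sum >> SAMPLES_N );   /* divide by SAMPLES, cheaply */
          }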

    • Hehe. I worked with the in-between 8008 around the time the 8080 was released. A lab book from then showed the 8008 at UKP160 and the 8080 at UKP250 (hm, maybe $3000 today?) and a statement from Intel that "The 8080 will never fall to the same price as the 8008".

    • Any large-scale GPOS must almost by definition contain too much code to be fault and vulnerability free. Add to that that the main GPOSs today have their foundation in the PC market, where new hardware appears about every waking hour, for much of which new software support is needed, and it's clear to me that these OSs are at best fundamentally challenged in even their _chance_ of meeting high security needs. And that applies to Linux, the BSDs, Windows and almost certainly some of the less popular/prevalent OSs. The situation is better where these OSs are cut-down embedded implementations, but even there, the pace of change is concerning, possibly alarming. IMHO, putting "GPOS" and "High Security" in the same sentence is an oxymoron.

    • Well, whilst it isn't quite the same thing as meant by the statement/report, forged components are certainly around! We here have seen significant numbers of them. OK, these have all been simple rip-offs. Lead-frames in epoxy with no silicon, lead-frames in epoxy with the wrong silicon. All that we've seen have simply been fraud, and usually with their origin in China. It doesn't take a big leap of imagination to see that forgeries could be more sophisticated and malicious in ways other than just stealing money by a simple supply-chain fraud.

    • Well, I think I may well have been suckered by the sun not rising, though I also think that if it was my code the display colour would be wrong, rather than the program crashing. Look through most stuff I write and you'll find error traps that say, typically, "something's wrong .. try to recover". Of course what one does to recover depends quite a lot on the seriousness of the situation. It may not help much if I reset a rubbish airspeed back to its initialisation value of 0kts! I have a couple of caveats, though, about how one handles fault/error conditions.
      * Where I can I try to keep recovery as simple as I can, because the more checks, balances and error-traps there are, the more code there is and the more likely are errors within that code.
      * Don't just assume that the error recovery will catch and fix a mistake. I think that's where we came in.
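
      A sketch of the kind of trap meant here; the names, limits and recovery action are purely illustrative:

          #include <stdio.h>

          #define AIRSPEED_MIN_KTS   0
          #define AIRSPEED_MAX_KTS 500

          static int last_good_airspeed = 120;     /* illustrative starting value */

          /* Reject rubbish rather than propagate it; what 'recover' means
             depends entirely on how serious the situation is.              */
          static int airspeed_checked( int kts )
          {
              if( ( kts >= AIRSPEED_MIN_KTS ) && ( kts <= AIRSPEED_MAX_KTS ) )
              {
                  last_good_airspeed = kts;
                  return kts;
              }

              printf( "something's wrong .. try to recover\n" );
              return last_good_airspeed;           /* better than resetting to 0kts */
          }

          int main(void)
          {
              printf( "%d\n", airspeed_checked( 130 ) );    /* sane value passes      */
              printf( "%d\n", airspeed_checked( 9999 ) );   /* rubbish gets rejected  */
              return 0;
          }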

    • Hm, goto. I rarely use them, but when I do, it's usually to exit an outer loop from an inner loop or switch. In that circumstance, it could be aliased to break_to if the word goto offends too much. I usually comment boldly: // TRICKY EXIT or maybe // AAHOOOOOGAH, AAHOOOGAH :-)
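
      The kind of use meant here, sketched with invented names:

          #include <stdio.h>

          #define ROWS 4
          #define COLS 8

          int find_first_fault( int grid[ROWS][COLS] )
          {
              int row = 0;
              int col = 0;

              while( row < ROWS )
              {
                  col = 0;
                  while( col < COLS )
                  {
                      if( grid[row][col] != 0 )
                          goto found;              // TRICKY EXIT
                      col++;
                  }
                  row++;
              }
              return -1;                           /* nothing found */

          found:
              return ( row * COLS ) + col;
          }

          int main(void)
          {
              int grid[ROWS][COLS] = { 0 };

              grid[2][5] = 1;
              printf( "%d\n", find_first_fault( grid ) );   /* prints 21 */
              return 0;
          }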

    • People are at their most productive when they enjoy what they do. So finding out what people best like doing and least like doing allows one to offer tasks to get the best out of them. Putting people under high pressure to deliver reduces enjoyment for most people, reduces their morale, increases the number of risks on which they take a gamble `if we're to have a chance of meeting that deadline' and ultimately hits either productivity, quality or both (IMHO). Even in engineering, I believe there is an important place for subjectivism.

    • I think the complexity of projects rises roughly exponentially with their size, as do the cost, the man-hours, and the risk. Specification creep is often also a product of time. Once a project starts to run late everything snowballs. I can give a classic example of that from here in the UK. Our Nimrod early-warning aircraft. From the initial specifications, engineering estimated the amount of CPU power they would need. Doubled it just in case, doubled it again for contingency, added a zero for the MoD's (cf DoD) likely changes and then another zero `for luck'. Sure enough, by the time the project was cancelled many years later, that 400x CPU budget had been _way_ overrun. It isn't just NASA that struggles with large projects, though the problem in these cases is probably aggravated by lack of a proper commercial focus.

    • Indeed. I often wonder how on earth some patents get granted. I've been using digital supplies controlled exactly like that for high(ish) power analogue amplification for years, partly because in my context the amplification has to be effectively class A to function and I need to keep the dissipation as low as I can or fry the circuits. Cascode Vce buffering in the voltage gain stage of an audio amplifier is unusual, but by no means unheard of. Doug Self discusses the technique in his audio power amplifiers book. What they're patenting of course, is the particular combination of these things in that context. In my case the voltage gain is just an op-amp as I only need distortion below 1%.

    • "The comparison of an RTOS with something like Red Hat Enterprise Edition (EAL4+) is surely broken." Indeed. It seems to me that we have rather mixed up the terms "embedded" and "turnkey". That's rather inevitable as embedded systems grow larger, but it does, for me at least, confuse the issue.

    • Personally I think what gave C the boost was that it was _close_enough_ to assembler without having to learn all the quirks, intricacies and syntactical variations of dozens of different CPUs and assemblers. I've always considered C as a `high level assembler', not a high level language. When I want a high level language, not for embedded, I'll more likely opt for Ruby, Python or Tcl.

    • Whilst I agree philosophically, with enums I often have the last entry as, e.g., NUM_ERRORS /* must be last item. */ This presumes, of course, that the enum begins from zero and is contiguous (a must!). That way my `overrun' trap values are defined and will adjust automatically if new items are inserted. This is especially useful if the enum values are then used to define arrays, where the array is then automatically sized and the out-of-bounds condition is clear.
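
      A small sketch of the idiom; the error names here are invented:

          #include <stdio.h>

          enum error_code
          {
              ERR_NONE,        /* enum must start at zero and be contiguous */
              ERR_TIMEOUT,
              ERR_CHECKSUM,
              NUM_ERRORS       /* must be last item */
          };

          /* The array is sized automatically, so adding an enum entry (and
             a string) keeps everything in step, and NUM_ERRORS is the
             obvious bounds check.                                          */
          static const char *error_text[NUM_ERRORS] =
          {
              "none",
              "timeout",
              "checksum"
          };

          const char *describe( enum error_code e )
          {
              return ( e < NUM_ERRORS ) ? error_text[e] : "out of range";
          }

          int main(void)
          {
              printf( "%s\n", describe( ERR_TIMEOUT ) );
              return 0;
          }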

    • Hmm, well, Vim will do those things after a fashion. It'll fold text in a structured way, but doesn't of itself show graphics, etc. It can, though, be made to open a browser with a URI ... http://vim.wikia.com/wiki/Open_a_web-browser_with_the_URL_in_the_current_line Personally I'm a great fan of Vim, but editors are very personal things.

    • In many contexts the most important thing to know is the data and how that fits together, yet it's startling to me that so often the data is totally undocumented. Program/message flow and algorithm stuff usually wants some explanation. I quite like doing this with ad-hoc UML-like diagrams. I'll happily do those with paper and pencil and then scan them into a document (not normally into the code, though URIs are useful). If the code is written simply and clearly and with decent amounts of white-space, much of each chunk should be self-explanatory. If it isn't, it's likely either a complex concept or not coded the best way. The one thing we usually don't need to know is what the language/compiler is doing. C.A.R. Hoare is right on the money. It's great when someone looks at a sophisticated project and says how simple it is. It's darned frustrating when the view is "it's so simple, anyone could have done it", so your skills and expertise can become devalued :-(

    • Me too. You'd think EETimes would resist trolling. Next week's issue should be on "the best editor" :-) There is a whole bunch of dichotomies with C and C++. Do I want it for a large or a small project? User-level or embedded and/or realtime (OK, that's a trichotomy). How critical is determinism? Why is C++ "a better C" if C++ is essentially the same as C? Abstract or not abstract? And in the design or in the code? Now I'm a whole-hearted supporter of OO principles, abstraction, modularisation, scoping, defensive programming and all the other stuff that helps make code `friendly', so it disturbs me when I spend hours just trying to find where the entry point of a program is, or when the code starts calling function after function and the debugger can't tell me why, where, or whether the code is supposed to be going where it is. How am I supposed quickly and intuitively to spot that some high-level code had an overloaded something-or-other? Abstraction can be nice for design, but _can_ also be a nightmare for debug and maintenance. I'd be interested in objective figures for maintenance costs for C versus C++ (or indeed the other `better' C derivatives). Last time I looked, a few years ago in fairness, the `easy to use and maintain' C++ was costing three times as much to maintain as was C. Hopefully that's improved!

    • : ic->clear(button);
      : Can you write a function similar to that in C?
      I can write _exactly_ that in C. Whyever should I not? It's just a pointer to a handler for the device identified by 'ic' and you'll find precisely that arrangement in countless device drivers.
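
      A minimal sketch of that arrangement in C; the names are invented for illustration:

          #include <stdio.h>

          struct input_controller
          {
              void (*clear)( int button );     /* pointer to the device's handler */
          };

          static void my_clear( int button )
          {
              printf( "clearing button %d\n", button );
          }

          static struct input_controller my_ic = { my_clear };

          int main(void)
          {
              struct input_controller *ic = &my_ic;
              int button = 3;

              ic->clear( button );             /* exactly the line being discussed */
              return 0;
          }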

    • There are ways to wrap FSMs into a more procedural style. SICS.se offers "proto-threads": http://www.sics.se/~adam/pt/ using the __LINE__ pseudo-variable from gcc and other similar compilers in wrapped switch statements. FSMs can offer a good small-footprint way of achieving a cooperative multi-thread-type operation with fewer of the region and locking issues that an RTOS can have.
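
      This is not the actual protothreads API, just a rough sketch of the __LINE__/switch trick it relies on, with my own macro names:

          #include <stdio.h>

          /* Save the current line, then create a case label there so the next
             call re-enters the switch at the same point.                      */
          #define THREAD_BEGIN(lc)           switch( lc ) { case 0:
          #define THREAD_WAIT_UNTIL(lc, c)   lc = __LINE__; case __LINE__: \
                                             if( !(c) ) return 0
          #define THREAD_END(lc)             } lc = 0; return 1

          static int ready;

          static int wait_for_ready( unsigned *lc )
          {
              THREAD_BEGIN( *lc );
              THREAD_WAIT_UNTIL( *lc, ready );
              printf( "ready!\n" );
              THREAD_END( *lc );
          }

          int main(void)
          {
              unsigned lc = 0;

              wait_for_ready( &lc );   /* returns 0: still waiting        */
              ready = 1;
              wait_for_ready( &lc );   /* resumes and prints "ready!"     */
              return 0;
          }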

    • Good article. My methods are pretty similar, though I don't test quite so much so low. I do program fairly defensively, though. sizeof() used extensively, for example. I too work `outside in', though I usually start with a pencil sketch (literally!) of what I think the system/dataflow/etc. is likely to look like. Then I'll move on to outline the boundaries .. the user interfaces and the hardware interfaces and so on. I'll choose the likely operating method .. event-driven or RTOS, then I'll start nailing up a top-level skeleton, which I'll use to start building and testing the low-level drivers. By the time I'm filling in the middle, usually my internal data structures are gelling nicely. Small is good. If speed's an issue, well, that's what `inline' is for :-) One place where I may differ from you is that since the early 80s, I've liked messaging as a means of intercommunication. I don't much mind if the messages are a parameter or a message string, but I do like to be able to take my modularity to hardware if I wish. As an example, in a new derivative of an existing product, we just ran out of I/O in a big way (for the scale of product, anyway). No problem .. I just pass the messages down to a new backend CPU that handles the new I/O and indeed takes over some of the old I/O. Job nearly done and everything outside the box looks pretty much the same as the previous version.

    • In this particular circumstance we also have to conclude just what _is_ guilt or innocence, so this is not a 'simple' court case. Clearly there is no such thing as a perfect cable, so our 'innocent' cable must be less than perfect, but how much less than perfect? Is 'guilty' a nearer to perfect cable that's just a silly price? Some cheap cables that are notably less than perfect are 'guilty' because they do give noticeable degradation. Microphony and poor connectors, for example. I'd willingly pay for good tightly constructed low-oxygen cables with conductive plastic interliners and good gold-plated connectors. I might pay for silver plated conductors, though I suspect the benefit is doubtful at best. Would I pay for cables at hundreds of pounds/dollars a pair? No way. Not guilty, but neither am I that gullible. Would I pay for cheap thin cables with pressed nickel-plated connectors for an audiophile system? No. And in that case I think a guilty verdict might well be reasonable.