DutchUncle

Firmware/Software Engineer


DutchUncle's contributions
Comments
    • Where have you been, doing an ostrich imitation? This has been going on for years, since well before the last time I got laid off (nothing wrong with 1/2 of the company, they just didn't need us here in this state anymore). My son certainly had no inclination to go to engineering school after seeing the feast and famine in my career.

    • Multiply and divide, yes; addition, no. And even multiply and divide are different if you were working (and thinking) in Pascal on another project, where the evaluation is right to left. Tell me, do you get charged extra for parentheses by your compiler? Think of them as a comment. :-)
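      A tiny illustration of the parentheses-as-a-comment point (plain C, made-up values):

          #include <stdio.h>

          int main(void)
          {
              int a = 2, b = 3, c = 4, d = 10, e = 5;
              int terse       = a + b * c - d / e;      /* relies on the precedence table  */
              int spelled_out = a + (b * c) - (d / e);  /* same value, nothing to remember */
              printf("%d %d\n", terse, spelled_out);    /* prints "12 12"                  */
              return 0;
          }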

    • Avoiding something subtle and silent that can take 100% of your debugging time for a day or two is worth it. I'd say more coding errors happen because people don't know how to touch-type, and are looking to save a few keystrokes, than anyone would care to admit.

    • I think this was done better in "The Art of Computer Programming", volume 1, originally published in 1968, currently in third edition.

    • We also pay IAR. The debugger works just fine on optimized code; it's human brains that don't follow the optimization, and e.g. get upset about not being able to distinguish breakpoints because the compiler shared the common code - just like assembler coders (like me) used to do. Test what you ship, and ship what you test. If the low-optimization code fits and does the job, ship it. If we can't fit space or time constraints and need to do higher optimization, build and test and ship it that way. I keep trying to convince management that we should do away with the debug/release build dichotomy completely.

    • I'm assuming that this, from the article's C++ example - "const int DRIVE_SHAFT_RPM_LIMITER 5 1000" - was supposed to be "const int DRIVE_SHAFT_RPM_LIMITER = 1000;". This comes from C, along with the expected optimizations. C does NOT require all definitions first; in fact, counter to some coding standards, it is *better* to declare a temporary variable within a structural block to clarify locality to the compiler and optimizer (a small sketch follows below). Inlining comes from C too. If you're going to make comparisons, play fair.
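      For what it's worth, a minimal sketch of block-local declaration (C99; hypothetical function, assumes count > 0):

          #include <stdint.h>

          /* hypothetical helper, for illustration only */
          uint16_t average(const uint16_t *samples, int count)
          {
              uint32_t sum = 0;                      /* lives for the whole function      */
              for (int i = 0; i < count; i++)
              {
                  uint32_t sample = samples[i];      /* temporary, local to the loop body */
                  sum += sample;
              }
              return (uint16_t)(sum / (uint32_t)count);
          }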

    • I was pointed to this article again by one of my colleagues who seems to think it says "nobody ever needs an RTOS", and anything more complicated than a top level polling loop is unnecessary. If we had a complete system-level understanding of our system before starting any work on it (by the way, just how complex *is* this system under discussion?), maybe we could do this kind of analysis. If we had fewer comm channels (remember the EEPROM and the display and everything else that has latency is a comm channel under the hood!), ditto. If we didn't have people changing feature specs the day before we're supposed to do releases, it would help too. And I agree with one finding - I would expect more small tasks with proper segmentation to schedule better than large complex multi-purpose tasks. But for flexibility (including iterative development), and adaptability, and more convenient handling of asynchronous events and protocol timings and interval timings, without investing as much time in analysis of a system spec that's going to be invalidated by change, I think I'll work with an RTOS, thanks. I don't expect the hardware designers to build their own processors out of 7400 gates, and I see no particular reason to hand-create my own scheduler every time I need one. Hey, I know, let's buy an industry-standard, well-tested, well-understood integrated package and get a head start on the project that people are actually *paying* us for.

    • "... smaller tasks without preemption provided the best overall performance. ... allow preemption for only extremely rare tasks. " So does your system have preemption, or not? Are you just implementing a polling loop, or do you really have an OS that's stripped of all nonessential components?

    • "Writing drivers for each and every one is a task too daunting ... " But that's not the task at hand. A hardware company can choose one system, one model, that they're going to support, because they have to write some level of driver in order to prove that the hardware works. The real problem is when you get a chip with a spec sheet and some "reference" code and after a few hours it becomes clear that nobody ever TRIED the (expletive-deleted) thing using software, certainly not that so-called reference. If I have to get out a scope to debug their code . . oh, wait, that doesn't even work anymore because everything is on-chip and I can't get to the intermediate signals. No, sorry, if the reference code doesn't work, it's just as blatant a failure as the pins not being standard size - I'll find a part from someone else.

    • C deterministic? Have you been reading the articles in your own emails? C was ludicrously limited from its first release. It's a language that doesn't even know what its variables are. We were learning block structure and data-driven design and structural concepts that would become OO design, and here came a brand-new low-level language - except its macros didn't even do as much as IBM 360 assembler and because it tried to avoid attachment to any particular processor it was uncertain just how it would perform *anywhere*. C only persists because people are lazy.

    • Static analyzers were not new in 2011 when Jack wrote this, and the concepts were not new when Lint was originally developed for C. Part of the reason we need them is that C leaves so much undefined. The IBM compiler for PL/1 gave error messages that obviated half of the checks in MISRA. The language C, which was designed to be barely above assembler, was never designed to become the lingua franca for all work at all levels.

    • I would not want to use this in running code, but I *would* like to see ccassert()s (compile-time checks) that confirm any arbitrary assumptions about record layout. Unfortunately, as defined, this macro only works with runtime generated (and linked) code, because it calculates a real address for a record located at 0. The compiler already has all of the information it uses to generate optimized accesses for the various record member offsets, so this should be knowable at compile time. Too bad C still can't do what IBM Assembler could do in the 1960s; and too bad that we're all using a language in which we need to do absolute addressing to make things work.
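      A minimal sketch of the kind of compile-time layout check being asked for, using C11's _Static_assert on a hypothetical record; no code is generated, and the build fails if padding or a later edit breaks the assumed layout:

          #include <stddef.h>
          #include <stdint.h>

          typedef struct
          {
              uint8_t  status;
              uint8_t  flags;
              uint16_t length;
              uint32_t payload_addr;
          } msg_header_t;                       /* hypothetical protocol header */

          _Static_assert(offsetof(msg_header_t, length) == 2,
                         "length field moved - assumed layout broken");
          _Static_assert(sizeof(msg_header_t) == 8,
                         "msg_header_t size changed");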

    • "A small company building a small product can't afford to be rigorous with its processes..." Turn that around. A small company with small resources can't afford to be wasteful or make expensive mistakes. Consider the physical world "artist/craftsman" making jewelry or woodwork. One of the reasons people seek out craft fairs and individually-made items is the presumption of individual care and attention, from choosing the raw materials through final polishing. The difference between a "craftsperson" who can make a living selling high-end items, and a "fine artist" who sells impractical-display-only-level items, may well be enough process to be more productive and bring the typical product level up to almost-fine-art without spending a lifetime crafting each piece. Certification is as much of a buzzword as "process". The goal is to produce product of high quality, consistently, at reasonable effort yielding a reasonable price-point. Certification does not *produce* that goal; certification should be an award saying that you can *achieve* that goal.

    • OTOH, the C pre-processor is pathetically limited when compared to *real* macros in most decent assemblers, going all the way back to IBM mainframe assembler in the 1960s. Because C only does text manipulation, without knowing the semantic context, *and* without being able to nest things (like putting #if/#ifdef inside a #define function-like macro), it cannot really check or test anything. IBM Assembler could do things almost to the level of C++ template metaprogramming. Considering that these advanced concepts existed that far back - and *well* before the advent of C - it is unfortunate that it has taken so long for them to return to common use.
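      To make the nesting limitation concrete: the conditional has to wrap the whole #define, because it cannot appear inside the macro's replacement text (all names here are hypothetical):

          /* hypothetical function and macro names */
          #ifdef SIMULATION
          #define PORT_INIT(p)   sim_port_init(p)
          #else
          #define PORT_INIT(p)   hw_port_init(p)
          #endif

          /* The single macro one would like to write - with the #ifdef inside
             the replacement text - is simply rejected by the preprocessor. */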

    • The last four times someone said the software was broken, it was really a hardware problem. Wrong clock; wrong resistor; bad contacts; dead processor. My degrees say CS; I've spent 34 years trying to make metal work. If it's not a team effort, with recognition on BOTH sides that BOTH sides could be wrong, it's not gonna fly.

    • I've spent all of my career doing embedded systems with tight timing requirements. I know about hardware realities. I AGREE COMPLETELY with your points about C being indeterminate re: bit order, data element size, etc. These undefined (sorry, "implementation-dependent") issues are why everyone in the mainframe world I was in back when C was introduced thought it would go exactly nowhere. Sure, the guys at Bell Labs who introduced it on PDP/8s and PDP/11s knew their environment, but the whole idea of "high-level" language was to work in MULTIPLE environments. I thought I made the point that some hardware (the ARM example) *is* defined in a structured way that *lends* itself to proper abstraction. If anything, you're making my own point even more strongly - I understand you to be saying that C is *unable* to reliably define those structures to reference, so even if the designers *wanted* to communicate that structure they can't (at least not in code or definitions). Perhaps it begs the question: Why do we specify things in terms of a language that is itself indeterminate?

    • It's not about blaming the tools; it's about having tools with safety guards. A chainsaw doesn't know the difference between a log and a leg, and you *can't* make that part safer without interfering with its function; you CAN, however, put a guard on the activator so it's less likely to start unexpectedly. Won't stop you from cutting the wrong thing, or cutting a tree so that it falls on you; but at least you can prevent stupid accidents like "it started when it hit the floor".

    • The problem is design, and the biggest part of that problem I've seen is that so much embedded code is written by hardware-trained engineers who learned to write code "by ear". Consider the clever ARM processor constructs, in which repeated instances of similar things (e.g. ports) are mapped as structures. Then look at the header files provided by the hardware vendors in which all of the registers (including the repeated ones) are #define-ed to absolute numbers, rather than being defined in a structural fashion. The hardware vendors don't know any better, and neither do most of their users. Computer science training includes emphasis on abstraction. If I need two of something, I define ONE structure and declare two of them. You would expect that EEs who use ICs would think the same way - but rather than focusing on the abstraction that the two ICs look the same, they focus on the physical fact that they would need two physical ICs, so they declare two separate structures with two separate sets of names that lose the parallelism. This also forces two separate sets of code, one to handle each structure, rather than one set of code that could handle either one as a parameter. You can write good code in *assembler*. IBM did it for years, and led the mainframe generation of software people to do the same, by having well-defined structures and consistent conventions and clear code examples. Good high-level language just makes it easier. An artist can make a sculpture with a screwdriver. Tools are just tools. Ideas win.
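      A sketch of the structural style being described - one register layout declared once, mapped at two base addresses, served by one driver function (all names and addresses are made up):

          #include <stdint.h>

          typedef struct
          {
              volatile uint32_t DATA;
              volatile uint32_t STATUS;
              volatile uint32_t CONTROL;
          } uart_regs_t;

          #define UART0   ((uart_regs_t *)0x40001000u)   /* hypothetical base addresses */
          #define UART1   ((uart_regs_t *)0x40002000u)

          static void uart_enable(uart_regs_t *uart)
          {
              uart->CONTROL |= 1u;                       /* hypothetical enable bit */
          }

          /* usage: uart_enable(UART0); uart_enable(UART1); */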

    • It's most bothersome in context. If this were the motto of a college-dorm or frat-house startup, nobody would blink. If it were the motto of an open-source project, it might start being a problem for business (I remember the problems convincing managers that our CP/M Z80 system would work better with a board from Wild Hare Computing). But Microsoft? The archetypal corporate behemoth? You can just hear the MBAs at the planning meeting: hey, let's convince people that they're a "skunk works" project by imposing cool-sounding slogans from a skateboard advertising firm, so maybe they'll innovate like they're in a dot-com. We'll plan some spontaneous demonstrations, too . . .

    • But it's the same operator with the same syntax, having a very different meaning. Tough to distinguish "good" from "bad" uses (just don't cross the streams). I stand by my core position: C is a weak tool because of its vague definition.

    • Except sometimes that isn't very clear either. Given "type *pointer;", the statement "pointer = pointer + 1;" increments by the size of whatever type the pointer is pointing to - and if you actually write "pointer = pointer + sizeof(*pointer);" then it increments by MORE than you want. The increment operator is clearer here (see the sketch below). Per the analogy: if the unskilled driver crashed the car because the steering was bad, then yes, it is the car's fault.
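      A small demonstration of the scaling (distances assume the usual 4-byte uint32_t):

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              uint32_t  buffer[8];
              uint32_t *p = buffer;

              uint32_t *a = p + 1;            /* next element: +4 bytes            */
              uint32_t *b = p + sizeof(*p);   /* +4 elements = +16 bytes - too far */

              printf("%td %td\n", (char *)a - (char *)p,    /* prints "4 16" */
                                  (char *)b - (char *)p);
              return 0;
          }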

    • I'm forced to disagree - in that you didn't go FAR enough. C is *still* a monstrosity. The computing world (not only academe) was moving from Fortran towards Algol-style, Ada, Pascal, MORE type consistency and MORE clarity, and then C came along as a deliberate step backwards towards assembler. I would rather have a Pascal++ today. But per "*a++ = *b++;", I'll point out that this is ONE OP CODE on the Motorola 68000 (admittedly with extended addressing modes) precisely because it's something that people want to do all the time. Remembering the syntax of the string copy subroutine is just too difficult :-( (/sarcasm)
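      For reference, the idiom under discussion, roughly as the early C books present it - the test and both post-increments do all the work, and the terminating '\0' is copied too:

          void str_copy(char *a, const char *b)
          {
              while ((*a++ = *b++) != '\0')
              {
                  ;   /* empty body on purpose */
              }
          }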

    • People are expecting the wrong thing. The code is right either way, and the compiler is also right. Having written some Pascal compiler back-end optimization, I can read what is WRITTEN (not what is intended or expected) and find that: None of that code has any effect. Nobody uses it in this routine; the routine is "main" and there are no other routines, so this is the entire universe; the data is neither volatile nor external (not specifically placed at an address that might be a hardware register). If a variable is incremented in the forest and nobody can hear, does it make a sound? So as a good optimizer, why bother generating any code at all?
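      A sketch of the situation described: nothing here is volatile, external, or used afterwards, so an optimizer may legitimately delete the whole loop; the volatile qualifier is what forces every access to actually happen:

          int main(void)
          {
              volatile int counter = 0;   /* remove 'volatile' and the loop can vanish */
              for (int i = 0; i < 1000; i++)
              {
                  counter++;
              }
              return 0;
          }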

    • Sorry, but what you call the "clever" string copy comes right out of the original C books in 1978 or 1979. It was the demonstration example for precedence and PROPER use of those operators.

    • Second the motion. Add these two integers - no, I don't know how big they are, so I don't know their capacity or their overflow behavior, just add 'em.
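      The C99 <stdint.h> types are at least a partial answer, since they pin down width and (for the unsigned ones) wrap behavior - a small sketch:

          #include <stdint.h>
          #include <stdio.h>

          int main(void)
          {
              uint16_t a = 65535u;
              uint16_t b = 1u;
              uint16_t sum = (uint16_t)(a + b);   /* wraps to 0 - well defined */
              printf("%u\n", (unsigned)sum);
              /* With a plain 'int' or 'short', the width - and therefore where the */
              /* overflow happens - is whatever the implementation chose.           */
              return 0;
          }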

    • What's wrong with pointers? other than C defining them in terms of raw memory, rather than Algol or Pascal or anything that constrains a pointer to a particular *type*. The problem isn't pointers, the problem is pseudo-random conversion between pointers and numbers and other pointers.

    • I was in grad school when C was introduced. We were using Fortran or PL/1, or Lisp for AI programming, and uniformly agreed that C would never become popular until they standardized such basic concepts as the size of an integer. Here we are, over 30 years later, still pointing out that C is still not standardized. OTOH I have to disagree with the initial example. "a = ++b + ++c;" *is* specific, because unary operators associate before binary operators. The fact that "a[i++] = i;" is not specific, because you can't know whether the address computation on the LHS happens before or after the value computation on the RHS, is - to be blunt - silly. While the code does risk being clever-for-its-own-sake, at least it should *mean* something consistent.

    • At a previous job, a big place with a separate IT department, they ran a backup every night. Simple system, 31 tapes - one per day of the month. Every morning the operator took the tape out of the drive and put it back on the rack. Apparently NO ONE NOTICED the message saying "Insert next tape" because the backup was INCOMPLETE. And apparently no one had ever tried to recover from a multi-tape backup (because they didn't know we had one) and so they never found that you couldn't even recover most of that first tape. So when the CEO's laptop died and all of his emails were lost, the company had a problem. Is that buggy software? A "computer glitch"? Human beings made a policy decision to release something with inadequate testing. Or maybe they *thought* it was adequate but they were wrong. That's why there are product recalls.

    • Yes, it's not global in the system, but it *is* sort-of global in the module, so unless you have a policy of one subroutine per module you create possible access from the wrong context . . . oh, wait, if you have one subroutine per module and you factor code into a subroutine then you should be putting it into its own module, so it's global again (which is why that coding rule is ridiculous). Algol and Pascal allow declaration of a local subroutine within another subroutine, just like a local variable. This *still* doesn't enforce the full requirement in my stated example, but it's better (more constrained) than local to the module. (And it allows for statics etc. to be more tightly constrained as well.) I'm not blaming the tool; I'm pointing out, as many have before me, that C's flexibility and openness come from simplicity, and while you can program anything with it, sometimes it doesn't give you much help. That's why people added to it with C++. Sometimes a crescent wrench is handier and more portable than a full set of exact-sized wrenches; but its very adaptability means that it *can* come loose in situations where the exact-size wrenches would hold.
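      To make the "sort-of global in the module" point concrete, a sketch with hypothetical names: the helper is invisible outside this file, but every function inside the file can still call it, whether or not it was meant for them:

          #include <stdio.h>

          enum { CMD_START, CMD_RESUME, CMD_RESYNC, CMD_STOP };   /* hypothetical command set */

          /* Common work factored out of several switch cases. 'static' gives
             file scope - the narrowest scope C offers for a function. */
          static void apply_common_update(int channel)
          {
              /* context assumed already validated by the caller's switch */
              printf("update channel %d\n", channel);
          }

          void handle_command(int cmd, int channel)
          {
              switch (cmd)
              {
              case CMD_START:
              case CMD_RESUME:
              case CMD_RESYNC:
                  apply_common_update(channel);   /* shared path */
                  break;
              case CMD_STOP:
                  printf("stop channel %d\n", channel);
                  break;
              default:
                  break;
              }
          }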

    • Sorry, but I think in "factoring" that paragraph you lost the context - I was talking specifically about factoring repeated code into a function, and while you're right about the function being scope-able to one module (rendering the attribute "static" ambiguous re: memory or scope), I stand by my contention that it still exposes this functionality to being invoked from different contexts than originally intended. Of course it is possible to write clean code. Programmers usually say "You can write Fortran in any language" as an insult to what they're reading, but the early satellites were put up with Fortran, and the IBM operating systems that ran most of the computers in the world for years were written in assembler and were positively bullet-proof. I think we are really in violent agreement - witness your comment re: the difference in seriousness between listening to music and programming avionics. In the physical world, we could compare the purity standards of manufacturing medicine vs. the cleanliness standard for manufacturing food vs. eating food from a street vendor at a parade. We have different expectations for different circumstances. And we're willing to spend different amounts of money and time and effort to achieve them.

    • One company activated one configuration without proper testing. My wife's company runs their commission software upgrades side-by-side for THREE MONTHS before using the new package as the real-cash-real-check-printing version. On the other hand, many banks and brokerages handle many millions of transactions just fine every day, faster and cheaper than ever before. Even Knight Capital was running OK before they destabilized their own system. People screw up. Sometimes complicated but direct like Chase's Whale trades, sometimes indirect like choosing to release without proper testing.

    • Look up the American vs. Swiss definition of what can be labeled as "chocolate". But to follow the analogy: The chocolate at hand does not taste bad; it tastes just fine, and at an attractive price. But on occasion there is a piece of nut-shell in it. How often and how bad does this problem have to be before you stop buying that chocolate? Do users feel that there are enough whiz-bang features in their new toys that work most of the time that they are willing to put up with the irritations? Or, as I have long suspected, have they been carefully *trained* by the early history of PCs to accept this lower standard of consistency as normal for the field? One would think that a radio or CD player should "just work"; except people DO accept car stereos that change stations on startup, or continue advancing the CD while the sound is off.

    • "... every major subroutine in many significant projects would fail in some manner with improper or incorrect parameters passed into it." Probably true, but it may say more about the poor tools - especially C - than about the developers. If you have an oft-repeated sequence of operations within constrained circumstances - where supposedly you've already tested the context - you want to factor that sequence into a subroutine. Say, half of the cases in a switch have some common code. Problem is, C only has one level of definition - global. But you don't want the space and time overhead of re-doing the context tests (more formally, parameter verification) when the subroutine is clearly intended to be used only in an already-tested context. If only you could constrain the routine's use! Well, with only one level of definition you can't, and that's why some people *discourage* factoring and reuse - which in turn leads to maintenance and parallelism headaches later.

    • Both sides are right, and for different reasons. Ganssle's troll has the core of truth that modern devices with software are many orders of magnitude more capable than their predecessors; the opposing view bemoans an absolute number of "bugs" without factoring in the percentage of quality against that increased capability. We must also distinguish fitness for purpose. Medical and aerospace applications have ultrahigh standards; in finance, on the other hand, despite the possible costs as indicated by the Knight Capital situation, there is nobody in charge of certifying the application except the user - so we have to assume that someone at Knight Capital did a bad job of testing in parallel before using the code on real money. As engineers we naturally want to be proud of our work, which for most of us means releasing it at the top quality possible. Sometimes, though, "good enough" is good enough (if you don't have Belgian chocolate to hand, you'll eat American); and sometimes "available now" is better than "available later with more features" (American chocolate now is better than the promise of Belgian chocolate with hazelnuts next week).

    • We're running a 32 bit processor with polling loops. An RTOS is too scary. You want people to change languages too?

    • When C first became known outside of the insular Bell Labs UNIX world, the crowd of teaching assistants I was part of thought it was laughable to call it a high level language. How could you have a language where you don't even know for sure how big the numbers are, and that doesn't bother type checking things you specifically defined? You might as well write in assembler. Heck, even IBM mainframe assembler had better meta-information, and its macro language in the 1970s had the kind of power that C++ users now associate with template functions. As computer science students, we didn't realize that EEs would go the cheap and dirty route every time rather than have to *design* a conceptual framework first.

    • Of course you can work without an RTOS. You can design your PC boards with tape on mylar, too, and assemble them using all hand soldering.

    • CMSIS has a lot of overhead (part of this is because C doesn't do type-checking on typedefs). After some profiling, I replaced some of our CMSIS HAL calls with the equivalent single line of code that actually gets the work done. I would worry about their RTOS having the same problem.
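      The kind of substitution described, in fully made-up form (every name and address below is hypothetical, not from any real vendor library):

          #include <stdint.h>

          #define GPIO_PORTA_SET_REG   (*(volatile uint32_t *)0x40020018u)   /* hypothetical */
          #define LED_PIN_MASK         (1u << 5)

          /* What a layered HAL call may boil down to once its parameter
             checks and dispatch layers are profiled away: */
          static inline void led_on(void)
          {
              GPIO_PORTA_SET_REG = LED_PIN_MASK;   /* one store */
          }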

    • ))) What does RTOS size have to do with Linux cache misses ? This is why punctuation was invented. In speaking (and thinking), the OP would have said "... with Linux (pause) cache misses will be abundant", and any listener would have known which clause held together. The sentence is correct as written; however, a comma would clarify the issue. Think of it like coding with full parentheses rather than relying on the order of operations.

    • Anyone remember Pascal variants with a type tag? or IBM OS/MVS assembler data structures with control keys? None of these concepts was invented yesterday. The newer languages include the concepts, and make it easier to use them and enforce their consistency, but they're not exactly new.

    • Perhaps we are secretly in violent agreement. 1. C volatile won't be wrong for your cases, but I agree it will certainly be suboptimal. 2. You achieve your desired result through *typecasting* - using the *data* attribute that C provides, since there is no other way to achieve what you want in C. You bring a more sophisticated approach to thinking about the operations, yet you have to work with the tool available. 3. The fact that this could be much better described in a C++ template still doesn't help you achieve it in C. 4. Your approach of detailing the accesses explicitly is more vulnerable to error and/or omission. Making the operations automatic by declaration is IMO preferable. Since the all-or-nothing keyword is all I have in C, I'm going to use it. :-)

    • Please describe how you make your accesses explicitly volatile in C, or specify accesses that are volatile in one part of code and non-volatile elsewhere, or volatile on writes but not reads. I could have done this years ago with fetch-associations and store-associations in Modula or SAIL or a handful of other languages, but I don't see any way to do it in C. "volatile" is a type qualifier, which is an attribute of the *data* declaration, and for this reason I continue to feel that the C approach does indeed have a concept of volatile /data/. Certainly the EFFECT of this definition is on the generated code, not on the data ... except that some fields really *are* changeable. I work in embedded systems. The examples explaining "volatile" in three books on my shelf (and a quick search online) all give hardware examples like a UART port in which each access will read a different byte of data even though it appears to be reading from the same "variable". Of course, it's *not* a RAM variable, and that's exactly the problem - we need some way to tell the compiler that this DATA ELEMENT is special, and must be touched precisely when our code says so (and ONLY when our code says so, because otherwise it might have side effects, like the UART losing the current byte and accepting another one). The use of "volatile" to reference a RAM variable used by multiple tasks, which I believe is the context you are coming from, is telling the compiler to treat the RAM variable like a hardware port, and for the same reason: it can be changed by outside influences that are not specified in the local program context. See also http://publications.gbdirect.co.uk/c_book/chapter8/const_and_volatile.html The C Book, second edition by Mike Banahan, Declan Brady and Mark Doran, originally published by Addison Wesley in 1991.
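      The textbook hardware case, sketched with a made-up address: each read of the data register returns a new byte, so the compiler must perform every access exactly as written.

          #include <stdint.h>

          #define UART_DATA   (*(volatile uint8_t *)0x4000C000u)   /* hypothetical receive register */

          /* Read two consecutive received bytes. Without 'volatile', an
             optimizer could fold these into one load - or delete them. */
          static void read_two(uint8_t *first, uint8_t *second)
          {
              *first  = UART_DATA;
              *second = UART_DATA;
          }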

    • No. It is the *data* that is volatile. A quick search finds "volatile" defined as "evaporating rapidly", "changeable", "ephemeral". Declaring data volatile means that it may change without action in your local code context, so you had better reference it every time you want it and not expect that you can load it once and re-use the value; and, more importantly, that the compiler optimization better not "improve" your code's efficiency by doing the same thing.

    • Quick search finds "volatile" defined as "changeable", "transient", and "evaporating rapidly". The *data*field* is volatile, because it might change without action by the local code context. With optimization turned off (or in assembler), the compiler would translate your accesses exactly as you write them; the "volatile" keyword is telling the compiler that the *data* might change so it cannot economize on your accesses.

    • The declaration is shown as an isolated line. There is no way to know if it is global or local to a subroutine. If global, then I agree with you; if local, then it's up to the compiler.

    • Yes, there are cases where it makes sense; and no, I never said otherwise. What I pointed out was that the declaration as written in the article does *not* demonstrate the intended combination of attributes which is the focus of the article. Your declaration *correctly* sets the const attribute on two different things - the pointer and the target - and also makes the target volatile, which I believe is the point the author intended to make.

    • You have a few examples which confuse the assignment of attributes. "... (const) variable will exist in memory at run time, but will typically be located, by the linker, in a non-volatile memory area ..." Not necessarily; it may NOT exist in memory at run time. If the instruction set supports immediate operands, the compiler may choose to create the value each time it is used. That's what optimization is for. And in "uint8_t volatile * const p_led_reg", "volatile" applies to the uint8_t value while "const" applies to the pointer. They are completely separate concepts.
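      Reading the declaration right to left: p_led_reg is a const pointer to a volatile uint8_t. A sketch (the address is made up):

          #include <stdint.h>

          uint8_t volatile * const p_led_reg = (uint8_t volatile *)0x40020014u;   /* hypothetical address */

          /* p_led_reg  = ...;    would not compile: the pointer itself is const.  */
          /* *p_led_reg = 0x01;   is allowed, and the store is always performed,   */
          /*                      because the pointed-to data is volatile.         */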

    • Let's all reinvent paging! Or bank switching, or whichever similar technique you used back in the Z80 days. The problem is that the techniques have fallen out of use as address spaces widened, and the support in old linkers for things like bank-switching subroutine calls has to be done by hand now. There is always a bottleneck. Some part of the system is always the slowest, and the one you have to work around. As CPU performance and memory capacity and comm speed improve, the *ratios* are what matter; and they sometimes come back to revive the same problem faced long ago (with different absolute values, of course). The lesson that I've taken over the years is, Don't forget old techniques - they'll come back around.

    • I seem to remember this from hard drives back in the 1970s. Run-Length-Limited (RLL) coding expanded every 4 bits to a 5 bit code such that there could never be more than 2 zeros in a row (if I remember right), which allowed shrinking the timing by 50%, yielding a net of 62.5% of the original length (50% of 125%). With the 4-to-5 expansion there was one extra valid 5-bit code that became used as a sync/command code. This is why it's worth remembering history; there is always a bottleneck somewhere in the system, it just keeps moving around, and ideas that were useful in one area are often useful again.

    • I grew up in NYC. If you don't know the person who's passing by, ignoring them is the *polite* thing to do. Imagine trying to greet the n-thousand people passing by . . .

    • Re: not reading documentation: That's the whole idea. A well-designed UI should enable the user to do basic functions with only some common basic concepts (like menus). Yes, complex functions need explanation; yes, settings that seriously affect the operation of the device need explanation; but the basics should be basic.

    • Another thought - Many of the definitions for hardware devices come in header files from the hardware vendors. They want to keep things very simple and very transportable to the oldest C compilers, and besides they don't care about complicated software constructs anyway, so they tend to do the simple one-at-a-time declarations. I could change them, but my manager would wonder why I wasted my time. We all should be leaning on the vendors to provide a better starting point.

    • I dislike the x_enable() and x_disable() approach (using C) because the state often reflects a variable (particularly true for outputs, maybe not so much for the timer example). On the other hand, that means an if statement for every operation. In 40-year-old IBM assembler, I could have written a macro that tested whether the specified parameter was a known constant and generated either (a) only the single fixed line required or (b) the function call to test the value. C can't do that. I haven't gotten C++ to do it either. It is disturbing that the lingua franca of current practice is nowhere near the best possible and does not encourage the best practices.

    • I believe that Mark Twain's "rules governing literary art" are even more useful in programming (see Twain's essay "Fenimore Cooper's Literary Offenses"). Twain insists that "a tale shall accomplish something and arrive somewhere", and that characters should be consistent and should have a reason for being in the story; and he urges the writer to "Eschew surplusage", yet "Not omit necessary details". Replace tale with program, and characters with variables, and you have guidelines for *any* writing.

    • Yes, there is. It can be as simple as choosing meaningful names. I could declare bool Motor_flag, or bool Motor_is_running. With the first, people will forever guess and hope that TRUE means active; with the second, I don't need a comment, because the name explains itself. Of course, a few lines now and then with an *overview* of what's going on help keep things clear, like the title cards in an old silent movie.

    • Verizon Fios claims over 3.7 million customers. We have two cable boxes in two rooms, either of which could be asleep for more than 3/4 of the typical day. Just at one box per household, asleep for half of the day, there should be some worthwhile savings.

    • Verizon FIOS DVR - turn on the TV when the box is supposedly off, and you'll see that it is displaying a pretty logo screen saver telling you to turn on the FIOS box or turn off the TV. By all means I expect the unit to wake up whenever a recording is scheduled, and maybe it needs to check in every half hour for listing updates, but there is no need for it to be on full power ALL THE TIME. If I explicitly turned it off, and the indicator light on the front is off, I don't expect a screen saver - I expect it to be in a low-power sleep mode like my computer which turns off the disks and display. This is lazy design to a lazy spec that doesn't think about power because it's always plugged in (and the designers aren't paying the bill). All of these computing systems should be designed with a "laptop" battery-powered attitude. And it's probably possible with a software change: lower the clock multiplier, focus on minimal realtime clocking for wakeup calls, and stop doing wasteful things.

    • Sure, a dangerous tool in knowledgeable hands is safer. OTOH when I bought a chef's knife, I didn't buy just a sharpened piece of metal for its edge; I bought one with an edge *and* a nice ergonomic handle that would also survive repeated washing. Using C is like a sharpened piece of metal that's sharp all the way down . . . including where you intend to hold it.

    • No. I'm not running a TECO macro or a Grep script to rearrange text; I'm trying to generalize code. IBM Macro Assembler in the 1960s(!) had macro concepts almost as sophisticated as C++ template-function concepts; as the simplest example, you could reference a macro parameter's physical attributes (number of letters, text string, etc.) or its logical ones (what datatype is associated with that variable name?). You could also do recursive and conditional macro expansion (try putting a #if inside a #define and see what happens). The C macro language was a hack from the beginning, and the publications at the time (I was in grad school) made no secret of its afterthought-ed-ness.

    • I agree with both of you, because I think your points are looking from opposite perspectives - and, in Larry's case, from older tools. I coded assembler and channel programs on IBM 360 mainframes and agree 100% that designing the algorithm around what the hardware supports is clearly going to get better results. On the other hand, right now I'm using IAR's compiler and IDE, and it will optimize for whichever processor I select - and whichever optimization level I select, too. The point where these cross is that if I need to divide over 10 samples vs. dividing over 8 or 16, there is no way to optimize the divide into a shift (a sketch follows below). We come back to the importance of design (and requirements) *before* optimizing the code. Hoare and Knuth pointed out that "Premature optimization is the root of all evil". Jon Louis Bentley pointed out that data structure and algorithm design are almost always more important than coding optimization. Making your design clearer to people often results in it being clearer to the compiler and the processor as well. Overly-clever-looking coding may prevent the compiler from using even cleverer processor tricks that you didn't even know about. The folks at IAR (and I'm sure at other compiler companies) sometimes generate code that makes me go back to the reference manual - they've studied the processors and use the kind of tricks that separated the hackers from the Cobol programmers in the old days. You wouldn't build a processor out of 7401 NANDs; you might as well use all of the power tools you can get.
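      A concrete version of the 10-vs-16 point (plain C; what the compiler does with each depends on the target):

          #include <stdint.h>

          uint32_t average16(const uint16_t *s)
          {
              uint32_t sum = 0;
              for (int i = 0; i < 16; i++)
              {
                  sum += s[i];
              }
              return sum / 16u;    /* reduces to a right shift */
          }

          uint32_t average10(const uint16_t *s)
          {
              uint32_t sum = 0;
              for (int i = 0; i < 10; i++)
              {
                  sum += s[i];
              }
              return sum / 10u;    /* a real divide, or a multiply-by-reciprocal
                                      trick if the compiler knows one - but never
                                      a plain shift */
          }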

    • Rather than "everybody pick it apart, tweak it, etc." how about having one shared copy that MANY observers have picked apart, tweaked, reviewed, used, tested, examined, etc.? That's what Linux is supposed to be. With your argument, everybody should blow the glass for their own light bulbs, after digging their own silicon and fabbing their own chips.

    • I ask, Why do you need an interrupt routine guaranteed to execute in less than 15 cycles? Who cares about cycles, and how long is a cycle anyway? OK, maybe it makes sense to say "I need an interrupt routine that will trigger an output within (insert very tight timing) of the input IF other conditions apply"; but if the processor doesn't run fast enough to guarantee the service, then maybe the timing doesn't belong there - maybe you need an outside part which is just *gated* from the microprocessor. I've been through multiple generations of hardware and software, and each time most of the hesitations were either overcome (like better code optimization for high level languages) or misunderstandings (like thinking you need to do something the old way rather than use the new features/hardware/whatever to do it better the new way).

    • IBM OS360 default structures used BCD for two decimal digits in one byte. Since the processor had BCD arithmetic (up to 254 or 255 bytes long), this was handy. The more serious problem was that OS360 control structures used a 32-bit aligned fullword for addresses, with the top byte used for flag bits because it would be ignored in the 24-bit hardware addressing. After all, nobody was going to install more than 16 meg of magnetic core memory, because it sucked up power like a sponge and returned it as heat (yes, you really could cook an egg on top of a mainframe cabinet). This led to major issues moving to MVS/XA (eXtended Architecture), and also led to the selection of the 20-bit Intel architecture rather than the full 32-bit Motorola architecture for the IBM PC (after all, they couldn't be selling PCs that could have bigger address space than the mainframes).

    • Considering the oft-quoted (though possibly apocryphal) statistics on how much internet traffic and video technology is driven by porn (including the song from "Avenue Q"), perhaps fembots and other toys are the *main* profitable business that will drive and support all of the other practical uses. :-)

    • Buckminster Fuller pointed out in the 1950s that "there IS enough to go around". 1950s science fiction *and* sociology figured that society would have to change to redefine the value and amounts of work, since productivity was expected to grow. One story (sorry, name eludes me) was about the man in "our fair city" who had been chosen by lottery to work this month - an exhausting 2-hour morning of checking the settings and switches in the local MegaFactory. Instead we out-consume our productivity, and we waste productivity on useless things - not just non-practical, like music and art and entertainment, but totally non-useful objects that consume resources to make, are amusing for 30 seconds, and take resources (or spoil others) to clean up and be garbage. Plus the costs are rarely inclusively accounted; nobody accounted for disposal of production waste, or disposal of used-up product, which is why we wind up with environmental cleanup costs that become a "general expense" rather than being charged back to the company that made a profit by not having to pay the costs at production time. (And that's why I don't rely on good old capitalism - it is too short-sighted and too likely to abuse common resources selfishly. If only de Tocqueville's comment about American *enlightened* self-interest were truer.)

    • Sorry, what I've seen is that the different job is usually *not* better paying. The free market means people will buy the cheapest that barely meets the requirements for the minimum time. (Which astronaut said something about realizing that he was sitting on top of the world's biggest bomb built by the lowest bidder?) "Less expensive to use people" - If machinery is rare, expensive, unavailable, inaccessible, and/or human power is the only way to get to the site: sure. If humans are being paid barely enough to eat, with no health or emergency care, so that it costs less to use and discard them than it does to maintain the machinery: it's not technically slavery, but it's not much better. Of course, that describes most of human history in most of the world, and it also describes subsistence farming to keep oneself and one's family alive. Civilization starts when productivity of the subsistence farmers increases enough so that they can support specialists like tool-makers.

    • Not trying to live in the past; rather, complaining about the shortsighted American behavior that got us to this point. Everyone else worked to get better; American executives worked to divest of real production in favor of making money by bookkeeping.

    • The problem is reversed. Why should it be that buying American is expected to get you lower value? I remember when "asian made" meant that something was cheap, flimsy, and substandard. Someone made a lot of money exporting our entire industrial base overseas, and in the meantime damaged our economy as effectively as old-time colonial powers stripping countries of natural resources.

    • Well-said. The same is true of many supposedly labor-saving devices, like voicemail and "self-service" customer (non)service systems: all they do is transfer the responsibility (and work) from one place to the other, and in most such "deals" it's the party with more knowledge and power and authority that is trying to economize and divest itself of the work by leaving it to the other party. In this case, the compiler - which could insist on range check coding, insert checking code early and/or integrate it into the running code, etc., at the expense of being well-written once - abdicates its authority and leaves the work up to every single application to be done right. If we stepped back and looked at this as a manufacturing system in which we planned to put an inspection point, the optimal location would clearly be the narrow point that every single item has to pass.

    • Tradeoff. At some level, absolutely - the range should be checked. Perhaps that can be done in the very front input driver. But after that check, one shouldn't need to recheck and recheck all the way through. Plus, per the original topic, C doesn't give you much help. If you could use ranges in a switch statement, like Pascal, the validation would be easier to write as an intrinsic part of the code - anything unrecognized is the "default" case.
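      Standard C won't do it, but as an aside, GCC and Clang accept a non-standard case-range extension that comes close to the Pascal style - a sketch with made-up categories:

          int classify(int code)        /* hypothetical code ranges */
          {
              switch (code)
              {
              case 1 ... 9:             /* GNU extension: inclusive range */
                  return 1;
              case 10 ... 99:
                  return 2;
              default:
                  return -1;            /* anything unrecognized lands here */
              }
          }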

    • "Static analyzers are relatively new ideas ..." except Lint was already around when I was in graduate school in 1978. Not that Lint is the best, mind you, but the idea has been worked on as long as C itself. C is part of the problem. I've used Pascal for one major project, PL/1 for another; both had more structure and less ability to shoot yourself in the foot.

    • It's not a question of the *number* of include files; it's a question of whether each module is pulling in information that it really doesn't need, and thereby becoming coupled in ways it doesn't need to be. If someone is going to use one big include list of all include files, why not similarly just code one module with all of the subroutines in it? After all, one source file should be easier to deal with than dozens of little source files! (Heck, I started with decks of punched cards.) That's not considered good practice any more; we segment code and decompose functionality. The same should apply to data and definitions. It keeps the scope of comprehension more manageable in a person's head, which is much more important than in the compiler.

    • Until someone changes a variable's meaning, but DOESN'T change the name (especially the prefix) because that would involve changing text all over the place . . . No thanks. Turn on the compiler checking (and if possible promote warnings to errors), and use a static analyzer, and make the names meaningful instead. "Words have meaning, and names have power."

    • I use uC/OS, and Mr. Labrosse's product is coded beautifully. That said: Sounds like a quibble, but in the directory tree, Object and Listings are level with Source. Every system I've ever worked on put them *below* the Source. So even with respect to such a basic decision, there are clearly different "standards". (Don't get me wrong, I love standards, and there are so many to choose from . . .) I concur with others that header files should be broken up by function, particularly if there are distinct subsystems or tasks. Only the display modules need to know all of the display codes, only the comm modules need to know the comm codes, etc. To the person who says "not all caps" - sometimes that's part of the convention (constants in caps, variables mixed or small). As long as a practice is *consistent* it can confer meaning. File names with blanks were created with Windows purely to destroy compatibility with existing DOS and any-other-OS applications. Space doesn't work in variable names, so don't expect it to work in file names, and learn to type an underscore. :-)

    • I've seen the same problem. Situation: Subroutine fills buffer. Pass the address of an uninitialized buffer; get an error at the invocation because there's no way to *specify* that the address will be used to place output rather than read input.

    • Mark Twain discusses "rules governing literary art" in his essay "Fenimore Cooper's Literary Offences" (and it's funny too). For example, "... the personages in a tale, both dead and alive, shall exhibit a sufficient excuse for being there." "(The author shall:) Eschew surplusage; Not omit necessary details." Replace "personages" with "variables", and suddenly Twain could be giving a design seminar.

    • Rules are too simplistic. Any set of rules will have something objectionable, or even silly, to people trained differently. As a simple example, to my computer science background (as opposed to the usual EE background in embedded systems), recursion *is* a simple flow construct. The subject of the recursion should be no more complicated than a do-while loop; yet somehow one is frightening while the other is acceptable. Similarly, "use no more than one level of dereferencing" is overly simplistic. If you're manipulating data from one area to another, you have two pointers to two different area information blocks, each of which has a pointer to its current buffer; so just for the basics you have two levels of dereferencing (areaptr->dataptr->data). This of course assumes that you are NOT "limiting the use of pointers" because you somehow think that array subscripts are less arbitrary and less subject to computational failure despite involving more computation at runtime. The point that everyone aims at is writing clearly enough to be understood without any possible misunderstanding. Donald Knuth's "Literate Programming" (1984) focused on writing the documentation *more* than the code. But just as the English language can be used to write Shakespeare's sonnets or outdated technical papers, any coding language or approach can only be as good as the effort made by the coder to be clear and descriptive. These ten rules don't even include something like "Use names that mean something, and mean what they say, and are appropriate for the things they are naming."

    • As a software developer since the 1970s, I have always rejected the waterfall approach. I understand the solid history, and even validity, of the approach: when we did home renovations, we didn't cut off the roof until we had complete plans from an architect, and a project plan from a builder, and approvals and variances from the town planning board, and deliveries scheduled for building materials appropriate to the plans and project needs; yet even so there were changes during construction as realities were uncovered in the connecting structure. My hardware collaborators have the same problem with board layout; it has to be right the first time, or after a minimal number of prototypes. Software can be truly different. A partial product may be a product in itself, perhaps as the lower end of a product line. The feature set that is present must be right, but it may be incomplete for the total project. (And perhaps that's not completely different from home renovations; when the outside shell was complete, we were camping inside even though much work remained to be done.) You can't partially deliver a TV. You can't partially deliver a car. For embedded systems, the software project used to be a replacement for state-machine hardware logic, which could be clearly stated. Now that embedded software systems are well up into the feature range formerly considered mainframe business applications, the project scale similarly enters the range of uncertainty (and, partly because of its longer timeframe, creeping featurism) to which the embedded world is unaccustomed.

    • The key line here: "...thinking in terms of objects." Many of the things in OO languages are about ways of thinking, assisted by ways of notation. We used to do many of the same things in Fortran - just not as neatly.

    • C doesn't define the size of its basic memory unit. When C was first introduced, everybody I worked with saw this as a core weakness: how could you possibly know what the code would do if you didn't know what size the ints would be? And by extrapolation, how could you possibly trust a system made by people who wouldn't even define such a basic unit? The creators wanted code to work on 32-bit systems and 36-bit PDP-10s without change. This article shows that we're still paying for the confusion of that flexibility long after 36-bit systems have been abandoned.

    • Why would templates be restricted in embedded applications? You wouldn't restrict macros in assembler, and C++ templates are more like assembler macros than C macros are (because C macros are simplistic text substitution while templates have semantic context). I agree, it's the time-wise unpredictability of many of the advanced constructs that is dangerous. Dynamic allocation, for example, no matter *how* it's named and invoked. I would always rather declare a fixed pool of whatever records I need, with enough calculated to handle the situation, and manage the pool rather than allow dynamic memory management to decide when it needs to garbage-collect. I remember the days when it was assumed that performance code had to be in assembler, and it was a hard sell to engineers that a compiler could optimize on the back end. I'd like to think I helped convince a few people on a project switching from Z80s to 68000 using Pascal; when you have a few extra registers and addressing modes to work with, you can generate cleverer things than most people will code by hand. Today the ARM instruction set and a good compiler will produce *much* better code.
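      A sketch of the fixed-pool approach described: a statically sized pool of records threaded onto a free list, so allocation is constant-time and there is no heap and no garbage collection (the record type and pool size are hypothetical):

          #include <stddef.h>

          #define POOL_SIZE 16

          typedef struct msg                 /* hypothetical record type */
          {
              struct msg *next;              /* free-list link */
              int         payload;
          } msg_t;

          static msg_t  pool[POOL_SIZE];
          static msg_t *free_list = NULL;

          void pool_init(void)
          {
              for (int i = 0; i < POOL_SIZE; i++)
              {
                  pool[i].next = free_list;
                  free_list = &pool[i];
              }
          }

          msg_t *pool_alloc(void)            /* deterministic, constant time */
          {
              msg_t *m = free_list;
              if (m != NULL)
              {
                  free_list = m->next;
              }
              return m;                      /* NULL means the pool is exhausted */
          }

          void pool_free(msg_t *m)
          {
              m->next = free_list;
              free_list = m;
          }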

    • How does one get valid market share numbers for Linux - any/all distributions - when it can be downloaded for free and installed without notifying an authorizing server? Microsoft can tell how many boxes with disks have been shipped, or how many installations a PC vendor or IT department has paid for; same for Apple. There's no valid way to know how many Linux systems are out there.

    • Low savings rate: (1) Every financial institution in the country was encouraging higher leverage for a period of 10-15 years. Suddenly they're all changing their advertising to "what really matters" and "invest for the long term", mainly because all of their previous recommendations fell apart. (2) My wife and I save; and then we wonder why, when savings interest rate is nil, and the supposedly *conservative* investment funds collapsed as badly as any others, and prices go up. Then we start thinking that maybe the people who spent everything they earn at least got to enjoy the stuff they bought, instead of seeing their money vanish quietly on dropping 401K balances.

    • For a few years I tried NJ's "Alternate Route" program to become a high school teacher; it was a natural fit with my CS/EE background to aim for high school math. Even seventh-grade students questioned why they would ever use any of this stuff - after all, said one, he would go into his father's driveway paving business and never use it. Of course you'll use algebra, I said; how will you calculate the amount of blacktop for a project, and figure the pricing? No problem; he'd just look it up on the chart. (sigh) So I tried the example of how he, a football player, lifted weights to build muscle "in abstract" even though lifting metal blocks isn't the primary purpose of most sports, and the idea that algebra is similar training for the brain in organizing and solving problems. But this kid didn't have any problems that needed solving, because the charts covered them. (deeper sigh) And who makes the charts? How do you know they're right, and current? What if you have a situation not on the charts? The blacktop vendors, and they cover everything. (deepest sigh) It's no wonder the public is so easy to mislead.

    • C++ can only comprehend this in terms of dynamic allocation. I can't do that. C can only do fixed size. All the way back in the 1970s, Sail and Pascal had the concept of "dope vectors" with information either explicit or hidden (typically in the bytes immediately preceding the array) so that a subroutine could do range checking on a passed variable. (And IBM 360 Assembler macros offered capabilities that I didn't see again until template functions.) Isn't there some quote about learning from history?

    • Back when C was first introduced, *everyone* I knew - most of us working in PL/1 or assembler on IBM mainframes - thought it was completely insane to create a language that didn't even know what size its intrinsic variables were. The sheer number of things left undefined or implementation-dependent told us this was a researcher's curiosity, despite the obviously good ideas of Unix that were demonstrated with it. 35 years later, C somehow continues to be the lingua franca, and people keep finding the same problems. (I think Pascal had enough additional rules enforced to be better - PL/1 just wasn't quite enough.) Some of the examples are due to lack of definition. Does "volatile" mean "this may change while you're not looking, so make sure to read it before using"? Or does it mean "this variable is hardware / special / magical"? If the former, optimizing away the load/store is completely correct. There's no way to *specify* that you are storing to, say, a UART port, and OF COURSE it's stored but never read. At least it's philosophically clear to everyone nowadays that the attribute belongs on the VARIABLE, so that the compiler can make the right decision automatically, rather than putting the burden on the programmer remembering to turn optimization on and off around the operation!

    • "It's mine. It has to be right." It's all in how you mean it. I do NOT mean "Mine is right and I'll defend it as-is"; it means "Because it's mine, I want it to be not just OK but the best, and if it's wrong - or even if it is merely sufficient and can be improved - I'll make it better". There are usually multiple ways to do any particular thing, and usually a handful of them are of equivalent cost/benefit, so arguing about the choice of approach may be moot. Within that choice, though, making sure that it's done as well as possible should be the focus. I am opposed to egoless programming. Personal pride in one's work should mean complete readiness to display it and explain it. Athletes and entertainers perform in public, with replays. One NHL goalie pointed out how when he has a bad second at the office, red lights flash and sirens go off and 50,000 people yell at him. And it gets replayed on the scoreboard to humiliate him again. We should be prepared for the same challenge.