    • I'm with DaveSchabel. I'm old enough to have grown up in a mostly analog world - in fact I still remember the input specs of the µA709 and 741 (though I don't remember what I had for breakfast). I must design to worst-case specs, and will only use the typ numbers and curves to extrapolate as best I can. Equally worrying: I recently needed to tell a customer how much current he could safely draw out of the analog output of one of our COTS controllers. It uses a common-or-garden LM358 op-amp. I checked 3 brands' data sheets. *Nowhere* could I find a thermal resistance number! It would seem that engineers these days are not expected to concern themselves with such matters as Tj. Maybe they aren't taught how to calculate dissipation? Of course, my old printed data book library (once my pride and joy) went to the recycler years ago.
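      The arithmetic the missing thermal numbers feed into is simple enough. A minimal sketch in C, assuming a θja of 100 °C/W (an invented but typical DIP-8 figure - exactly the number the data sheets wouldn't give) and illustrative supply and output voltages:

```c
/* Estimate op-amp junction temperature from output loading.
 * All constants here are illustrative assumptions, not data
 * sheet values: Vcc = 12 V, Vout = 5 V, theta_ja = 100 C/W. */
double junction_temp_c(double t_ambient_c, double i_load_a)
{
    const double vcc = 12.0;       /* supply voltage, V */
    const double vout = 5.0;       /* output voltage, V */
    const double theta_ja = 100.0; /* junction-to-ambient thermal resistance, C/W */
    double p_diss = (vcc - vout) * i_load_a; /* dissipation in the output stage, W */
    return t_ambient_c + p_diss * theta_ja;  /* Tj = Ta + P * theta_ja */
}
```

      By that estimate, 40 mA at 25 °C ambient puts Tj around 53 °C before counting quiescent dissipation - which is why the missing θja number matters.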

    • Bernard, I am getting seriously inconvenienced by the "You have hit your 2 article limit. Please log in" message. Is this something relatively new? It never used to happen. I log in, then refresh the various tabs in Firefox (I click multiple links in the newsletter then switch to FF to read the articles in tabs), but I still get the same message. Please fix or lose a reader.

    • I like it! I shall take that as a formal proof. Now let some academic re-phrase it as an eye-glazing equation with lots of obscure non-ASCII symbols and call it Manning's Theorem, the popular version of which will be "all software faults are a result of wetware faults, not inadequate tools". Then we can all forget about it and go back to blaming the tools for our failures.

    • We made a few thousand "glue boxes" for an HVAC company recently. The logic was at the level of a 555 and one or two gate packs. We used a 37-cent 6-pin PIC instead. I could never see a 32-bitter doing the same. BTW, what happened to 4-bitters? :-)

    • The above code must be untested. Compare and Branch on Non-Zero (cbnz) and Compare and Branch on Zero (cbz) compare the value in a register with zero, and conditionally branch ***forward*** by a constant offset. The sample code expects them to branch backwards. This just cost us 30 minutes' work!

    • A free PID simulator and easily digested PID tutorial are available at http://splatco.com/skb/2639.htm. This includes an Excel-based method of characterizing your system, and is used by a number of educators. David Stonier-Gibson http://splatco.com
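      For readers who want the gist before visiting the tutorial, a positional PID step can be sketched in a dozen lines of C (the gains and sample period below are placeholders, not values from the linked material):

```c
/* Minimal positional PID controller: call once per sample period.
 * Names and structure are illustrative, not from the tutorial. */
struct pid_ctl {
    double kp, ki, kd;  /* tuning gains */
    double dt;          /* sample period, s */
    double integral;    /* accumulated error */
    double prev_error;  /* error from the previous step */
};

double pid_step(struct pid_ctl *p, double setpoint, double measured)
{
    double error = setpoint - measured;
    p->integral += error * p->dt;                        /* I term accumulates */
    double derivative = (error - p->prev_error) / p->dt; /* D term on error */
    p->prev_error = error;
    return p->kp * error + p->ki * p->integral + p->kd * derivative;
}
```

      A production version would also need integral anti-windup and output clamping; characterizing the plant to choose the gains is where a simulator earns its keep.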

    • This could clearly be a total game changer. The problem I see is that the paradigm shift is so huge that the current generation of practitioners (that's us, guys) will simply not be able to re-program our brains to usefully accommodate the new way of doing things. It brings to mind all the fuss around multi-core. By comparison that is a minor shift, and yet everyone is frantically trying to find a way to cast multicore into an emulation of the familiar paradigm. Translation: To invent a C compiler for multicore processors that hides the profound difference in the underlying computing technology. I can just see, 20 years from now, huge protracted debates about, and failed commercial attempts to make, a C compiler that targets a memristor architecture that is totally different to the simple 100-core micros we are (by then) all using. No, IMHO what will be needed is a whole new generation of practitioners who have been trained, without having their brains polluted by "Harvard" thinking, to work with the new architecture. In particular the C language must go, as it should have years ago. David Stonier-Gibson http://splatco.com

    • To the editor or webmaster: PLEASE include some instructions on formatting posts, or get a true WYSIWYG editor. It is so annoying having some post appear with no paragraph breaks, and some with quotation marks converted to weird characters.
      If we knew the rules we could get it right.

    • When I first read this article I thought "this is spot on". The nightmare vision of a sky full of dopey 3D drivers is beyond contemplation.

      But then I realised that such technological possibilities should be viewed in the light of the positive changes they may offer.

      Our society is obsessed with the availability and ownership of personal point-to-point transportation, i.e. motor cars. We embed vast quantities of resources into vehicles that spend most of their time not being used, then we consume further resources to move them inefficiently from place to place with ourselves - in energy terms - as incidental cargo. We build new, sprawling suburbs and in so doing neglect public transport infrastructure. When the roads get clogged we say “oops” and bang in another freeway.

      Is this any more sane than flying vehicles?

      So, imagine a world where the freeways are replaced by high speed railways. People get to and from the railway stations by summoning a fully automatic flying taxi. The stations don’t need huge car parks. People don’t need to personally own 2 tons of steel and plastic. One aerial taxi can service perhaps 50 people (rather than close to 1 car per adult). The fully automatic control of the taxis should not be beyond the wit of Embedded Man. The only possible down side is that the fuel consumption of a flying vehicle will (probably) always exceed that of a ground vehicle.

      I am not saying this is the specific transportation model we should aim for. What I am asking for is that we keep an open mind about the relationship between research, product concepts and practical applications.

    • We are a small company. We are developing a whole new product range centered around LPC family ARM7 chips. Should we be researching alternative ARM families or should we assume the LPC family would get sold off to another manufacturer? Right now LPC looks like the very best highly integrated ARM7 product available, with an upcoming LPC17xx range that looks even better.

    • Please bring back the old format. BTW, the Bell Labs group did not make a junction FET (JFET). They made a bipolar junction transistor (BJT). A totally different animal.

    • "Meanwhile, Qualcomm's Jha turned almost defensive at a suggestion that ubiquitous cell phones, together with their power-thirsty base stations, may be perhaps one of the sources contributing to the current energy crisis."
      Come on people! What's the point in sitting in your V8 SUV on the freeway in a traffic jam stressing over how much energy your cellphone is using as you call home to tell the spouse you will be late for the dinner she is baking in the 4kW oven in a house with 8kW of central air conditioning? Surely engineers of all people should know better than to give even a moment of thought to such silly notions as energy harvesting cellphones!
      I would suggest that Jha was being not so much defensive as incredibly diplomatic.

    • These are well established principles. They do however deserve to be re-iterated from time to time, if only for the benefit of upcoming generations.

      However, while the author is extolling the virtues of separating function code from hardware and engineering units, he then goes on to say in his ABS example on page 2:

      "The output from the core software is a current value that is desired to flow through the solenoid/load."

      Core code generating current levels? That is not divorcing the core from specific engineering units. Surely all brake solenoids don't work at identical current levels?

      This slip will do nothing to help the upcoming generations, and should have been spotted by the editors.
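      The separation being advocated would have the core emit a dimensionless demand and leave engineering units to the hardware layer. A sketch in C, with an invented control law and an invented solenoid rating purely for illustration:

```c
/* Core logic: outputs a normalized braking demand in [0.0, 1.0].
 * It knows nothing about solenoids or amps. The gain of 4.0 is an
 * invented illustrative control law, not the article's algorithm. */
double core_brake_demand(double wheel_slip)
{
    double demand = wheel_slip * 4.0;
    if (demand > 1.0) demand = 1.0;  /* clamp to the normalized range */
    if (demand < 0.0) demand = 0.0;
    return demand;
}

/* Hardware layer: converts the demand to engineering units. Only
 * this layer knows the (made-up) full-scale current of this valve. */
#define MAX_SOLENOID_A 1.5
double hal_solenoid_current_a(double demand)
{
    return demand * MAX_SOLENOID_A;
}
```

      Swap in a different solenoid and only `MAX_SOLENOID_A` changes; the core is untouched.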

    • I don't consider myself qualified to make informed comments about the high end of multicore processing. My field is the low end of embedded controls in "machines". However, what I can say is this: 90%+ of programmers in my domain stick doggedly to C. IMHO, with nearly 40 years' experience in this, C is exactly the wrong language for these applications!

      "My" control problems are about sequence and timing. C is a high level assembler with a leaning towards data processing. The domains are very different. Yet C seems to be close to a religion to most practitioners.

      This realization has taught me some important lessons. The main one is that the tool must fit the domain.

      So, with multi-core processors, until the majority of practitioners can be persuaded to abandon their cozy little C-lined boxes and broaden their horizons, it will look grim.

      The other thing is that maybe the people who are desperately trying to find a solution to programming multicores need to consider that there may not be a one size fits all solution. There may have to be different solutions for different problem domains.

      Finally, Mapou, you say "The best solution will win out in the end. Necessity demands it." With respect, that is often not the case. Remember Betamax? I could in fact suggest the same about C and wonder why perhaps Forth never became dominant. In fact, could something like Forth, which essentially allows itself to be morphed into a specific language for each application, be the answer for multicore?

      David Gibson,

      SPLat Controls, Australia

      (I have no commercial interest in promoting Forth).

    • I must strongly disagree with Miro on one point, namely that his posting sounded presumptuous. Not at all. Other than that I fully agree. Encapsulation is something I have practiced in varying degrees for quite some time, purely as a self-imposed discipline. Almost every time I am tempted to bend the rules I come to regret it.

      We are currently in the process of developing a whole new embedded controller family, along with its attendant IDE and suite of programming languages. One of the interesting debates we are having in-house is to what degree do we enforce good discipline on our users? This has to be considered in the light of the fact that our users are frequently novices, small to medium sized OEMs where the guy who will write the program has little prior programming experience (scary?).

      The current state of that conversation is that we will impose as few rules as possible. However, any library functions we provide will have an object type interface and be configured through design-time property sheets. That way our users will hopefully get educated without being bullied. I am using the term Object Based Programming (OBP) to signify the concept of encapsulated objects, although we have no concept of inheritance or polymorphism.

      I would love to be able to attend Miro's class at ESC. Unfortunately it's a long way from Australia. David Gibson, SPLat Controls, Australia.
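      In plain C, an "object type interface" of the kind described boils down to a struct whose fields are touched only through its methods. A minimal sketch (the counter and all its names are invented for illustration):

```c
/* Object Based Programming in C: encapsulated state, method-only
 * access by convention, no inheritance or polymorphism. */
struct counter_obj {
    int count;  /* private by convention: touch only via methods */
    int limit;
};

/* design-time "property sheet" values arrive as init parameters */
void counter_init(struct counter_obj *c, int limit)
{
    c->count = 0;
    c->limit = limit;
}

/* method: advance the counter; returns 1 each time the limit is
 * reached (and the count wraps back to zero), else 0 */
int counter_tick(struct counter_obj *c)
{
    if (++c->count >= c->limit) {
        c->count = 0;
        return 1;
    }
    return 0;
}
```

      Users who go only through `counter_init`/`counter_tick` get the discipline without being bullied; nothing in C enforces it, which is exactly the in-house debate described above.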

    • Miro refers to "introductory programming course". My first and only formal course was Fortran 102 (the semester after Slide Rule 101!) back in 1968. Basically everything I do in embedded programming I have worked out for myself, and only in recent years has Google allowed me to explore what others do and try to learn the accepted terminology.

      Since the early 80's I have persistently used a program structure for embedded controls where the program consists of a whole bunch of independent tasks running in a simple task queue (or threaded) system. Each task has its own thread. The threads are managed by a very basic "task queue manager". Each thread is responsible for "suspending" itself (giving up the CPU) when it has nothing to do. In assembler it does this by doing a JSR to the queue manager's Suspend subroutine, which pops the return address, saves it in a circular list of queued tasks, gets the address of the next task in sequence, pushes it on the stack and executes a Return. One or two simple extensions to this idea will save a couple of CPU registers, most usefully to provide a timing mechanism. Tasks can also be dynamically launched and killed. Even back in the days of 4MHz Z80s we had task switching times around 20-30uS while heavy duty RTOSs were boasting of 1mS!

      Virtually every task in this scheme is a Finite State Machine (FSM). However, it doesn't need a state number to remember where it is. Its program counter is remembered and restored by the task scheduler, so it automatically picks up where it left off. This mechanism (for which I make no claims of exclusivity) lends itself beautifully to writing FSMs that poll for events or conditions. These can be inputs, UART receive characters, timers or signals (flags etc.) set by other tasks. The main point is that it is a polled environment. I call it Pull driven, as distinct from Push driven. AFAICT this does not comply with Miro's event driven model.

      It does not, however, suffer any shortcoming of stalling while waiting for an event. Any individual task may stall waiting for an event, but that is the very nature of an FSM - it stays in a given state until "something of interest" happens. That "something of interest" need not be one thing. The typical pseudo-code for a state in an FSM will look something like this:

      1. Execute entry actions and note the system time (I don't use exit actions)
      2. Suspend (yield the CPU) ... (next time around) ...
      3. Test condition A and branch to another state if met
      4. Test condition B and branch to yet another state if met
      ...
      n. Goto 2 if not timed out (suspend and try again later)
      n+1. Goto timed-out exception handling

      This code tests all conditions or events that are relevant to the current state but never wastes processor time on events that are irrelevant to the current state. The code itself is as lean and mean as it can possibly be - test and jump, test and jump, then loop and suspend. The interfaces between these tasks will consist of 1-bit flags or multi-bit commands and/or variables. Incidentally, a layer of such signal elements between tasks is all that is needed to map between push and pull models.

      If you apply the principles of encapsulation (data hiding) to all this you get objects that comprise one or more tasks, each running in its own thread, along with the associated "method" subroutines that are the guardians of the interface flags and variables. You now have what in the excellent glossary on Miro's website are called Active Objects or Actors. The concept of an encapsulated Active Object or Actor, with an interface comprising a bunch of strictly defined methods and associated parameters, lends itself very nicely to code re-use. However, my 38+ years in embedded systems tells me that the ideal of code re-use in embedded controls is very elusive. Some things certainly, like PID controls and file handling, can be re-used, but the cow hoof spray patterns I have been working on recently are hardly re-usable. A stubborn insistence on re-usable code is likely to result in a proliferation of virtually useless lowest-common-denominator objects like flip flops, counters and timers. My belief is that the Actor model has benefits more in the area of factoring a system so several people can work on it at once. That is in addition to the huge benefits gained from a model that encourages a high level of modularity in system design and structuring and allows independent testing of each Actor.

      The other point I'd like to make is about cooperative versus pre-emptive task switching. For my 2 cents I can see zero benefit in pre-emptive. Why have a task sit for 1000uS spinning its wheels waiting for an input that isn't going to happen this week, when it can all be over with in 10uS? The massive complexities that occur in pre-emptive systems just don't happen in cooperative systems. Except for ISRs, one "run" of the pseudo-code above is atomic. You know that between runs of a task every other active task will run exactly once. In our current implementation we even have time "stand still" during one run of the task queue, so all tasks will see the exact same system time. We could even (though we don't) have all inputs remain unchanged, similar to what ladder PLCs (yuch!) do. This can avoid a lot of race hazards. In a pre-emptive system, it seems to me, the hazards are much greater and the RTOS itself burns so much CPU time that elaborate task prioritization schemes are needed just to dole out what's left!

      David Gibson, SPLat Controls (Still grumpy!)
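      The saved-return-address trick above needs assembler, but the observable behaviour - every active task runs exactly once per pass, each task an FSM that yields when nothing is of interest - can be approximated in portable C with a function pointer per task and explicit FSM state. A sketch (task names and queue size invented for illustration):

```c
/* Cooperative task queue: a C approximation of the assembler
 * scheme described above. Each task runs one pass and returns
 * (yields); its FSM state stands in for the saved program counter. */
#define MAX_TASKS 4

typedef void (*task_fn)(void);

static task_fn task_queue[MAX_TASKS];
static int task_count = 0;

void task_add(task_fn t)
{
    if (task_count < MAX_TASKS)
        task_queue[task_count++] = t;
}

/* one pass of the queue: every active task runs exactly once,
 * giving the between-runs atomicity guarantee described above */
void scheduler_pass(void)
{
    for (int i = 0; i < task_count; ++i)
        task_queue[i]();
}

/* example FSM task: alternates between two states, counting
 * complete cycles; it "stays in a state" between passes */
static int blink_state = 0;
int blink_toggles = 0;
void blink_task(void)
{
    switch (blink_state) {
    case 0: blink_state = 1;                  /* entered "on" state */
            break;
    case 1: blink_state = 0; ++blink_toggles; /* cycle complete */
            break;
    }
}
```

      The real assembler scheme is leaner still - no switch statement, because the scheduler resumes each task at the instruction after its Suspend call.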

    • How refreshing to see some recognition that perhaps "small" systems are common. There can only be so many GPSs, cell phones and MP4 players designed with ARM9s and 4MB memories. My business is in the embedded controller market, making board level products for OEMs. In my world of embedded controls 10KLOC is huge. All those coffee machines, breadmakers, air conditioners, gray water treatment systems, cow teat sprays, etc. etc. are all embedded controls working with quite modest sized programs.

      I distinguish between embedded controls (my world) and embedded systems (aka embedded computing), which is the world of embedded Linux, with Java or C++ running under a fancy RTOS. In 35+ years of doing controls I have never felt any need for a bought-in RTOS or OS. Everything I read suggests that the world at large (especially the technical press) remains blind to the existence of embedded controls and automatically equates "embedded" with computing.

      Controls and computing are related but otherwise very different things. Computing is about processing data. Controls is about reacting to and acting upon the external environment. The focus is quite different. Computing people could no more design a control system for a clay pigeon launcher or vegetable oil fuel system than I could design an accounting package. In controls our tools of trade are Finite State Machines, timing, and logical analysis, usually combined with simple, but oh so fast, multitasking. We make closed systems, where the code is burned into read-only memory during manufacture and remains unchanged. We don't need memory protection, garbage collection or (at least not often) file systems. Our imperatives are quite different to those of computing.

      And yet so much of the embedded "discussion" centers around computing issues. I can't remember when I last saw a serious article about the concerns of embedded controls. Things like "Multitasking kernel in 25 lines of code" (I wrote mine, for the Z80, in the early 80s, later replicated on 6809, 6502, 6805 and 68HC08), "Demystifying State Machines", "Designing effective user setup menus" or "C is not for Controls". Small is beautiful, and small is usually smart. David Gibson, SPLat Controls, Australia (Or just a grumpy old man?)

    • "One of the newer trends in embedded software involves designing simpler solutions for applications that don't need all the features." Gee golly, what a novel idea. Only thing is, I was doing just that 35 years ago when 2K was a huge footprint, let alone 2M.

    • I've been designing embedded controls since before the term was invented - early 70's. My company makes programmable controller boards for OEMs. Believe it or not there is still a whole world out there where people need to control valves, motors and proximity switches in pallet wrappers, water filtration systems, marine toilets and chain lubrication systems. It's not all embedded Linux with 4MB of memory, GPRS and SQL. Far from it. These are applications where every cent and every mA count. There is still life in 8-bit chips programmed in assembler or basic C, squeezing the last little drop of performance out of 32K of memory.

      I am amazed by the number of companies making so-called embedded systems with hundreds of MIPS of processing power and not one single real-world output or input, like a 10A PWM motor drive. Of the hundreds of companies making embedded systems, I know of maybe 6 who like us make control boards for the real world (PLCs excepted, they are a different category). I believe the reason for this is that the term "embedded" has already been taken over by computing. So-called embedded "engineers" are actually IT people. They are safe with C and C++. The real world scares them. They don't know how to get their hands dirty banging bits and driving motors and solenoids.

      So far from proposing to merge "embedded" and "computing", I would emphasize the differences. The computing guys can play with their databases and embedded web servers, while we happily make things go wiz, whir and plop in the real world. In particular, I differentiate between embedded computing and embedded control. That said, of course there is overlap. We need connectivity on modern control systems, and computers sometimes interact with the real world. No taxonomy will ever succeed in keeping the fields separated.

      We practitioners of embedded control must learn from advances in computing and embrace concepts and methods like OOP, while at the same time adapting them to our specific needs. Hence things like Object Based Programming and Port Based Objects. These are concepts that allow us to better manage complexity and produce more reliable systems (and for my company, lower the skills bar so non-specialists can sensibly make use of our products in a modern context, just like ladder logic did in the 60's). But let us not thereby think we are becoming computer people. We are not.