mr_bandit
systems nerd

mr_bandit's contributions: Comments
    • I had a gig at HP in the 90's. Their instruments used pSOS. Fantastic RTOS. Clean. I was the one who got the mpu going when /RESET was de-asserted. Easy to set up tasks, fifo queues, etc.

    • I have been using the same type of logbook since 1981, when I got my first "real" job. The company supplied them, and I was hooked. The logbook is the AMPAD (www.ampad.com) #22-157: 152 numbered pages, heavy paper, heavy covers, wire spiral bound. Prices vary from $15 to $425 depending on source and quantity; I usually get 12+ at a time. The spiral binding is important to me (vs stitched binding) because I have one hand. The logbook lays flat; with a stitched binding I would need a second hand to push the page flat. The only objection I have is that it does not have a TOC page. Instead, I have to create a word/notepad/etc page I print out and attach to the blank page at the front (there is also a blank one in the back). And, yes - when I forget to write down something critical, it comes back to bite me. Happened recently - I forgot to write down a bit of majic, and I needed that majic at a critical time. The pages are bigger than 8.5 x 11, so you need to set the copier at 90%, but not a big deal. I have gotten several companies and clients hooked on these lab books. They are obviously superior to other ones. I try to use one per project; the incremental cost is low enough. I have had projects that took up several logbooks.

    • I am watching the trend of 3d printing hands for little kids. I am concerned the people making them feel pity for the kids. I can also tell from the language the parents use that some feel both guilt over having a "crippled" child (because they feel it is their fault) and pity for their "crippled" child. The phrase "It will make him/her able-bodied" is a key. The point related to this post is they are making weak fake "grabbers" instead of robust tools for specific purposes. @Duane: read "The Moon Is a Harsh Mistress" by Heinlein. The first-person voice is "Manny", a repair guy with a variety of "hands", both multi-purpose and specialized.

    • There was a bank that refused to deposit a check because the guy did not have any thumbs, so could not give a valid thumb print. He won the lawsuit. Sorry to hear the guy at work died - sounds like the infection damaged enough of his body that it just gave out (a guess). On the other hand (if I am able to use that term :^) it sounds like he had the right attitude. That is one of the things I like about Def Leppard's Thunder Ghod. He was beating out rhythms when the rest of the band visited him in the hospital right after the car accident. He was fortunate to have the resources to build the drum kit that he now uses. And, speaking of your co-worker, I am sure the doctors told him he did not have fingers :^) I am slightly obnoxious about having one hand, so people I work with realize my momma told me most people have two hands. I don't play golf because I find it a silly game. YMMV. But I did play football in High School. Ended up on the defensive line. Tried the backfield, but this "hand-off" thing really confused me.

    • On a - ahem - serious note, I am a *big* fan of putting 0.1 inch headers on traces, unused mpu pins, etc, just so I have something to attach a scope probe to, so I don't need that extra hand to hold the probe while I type the CLI command. Note to EE's: do *not* make the first board the final form factor! And ... why are gloves sold only in pairs??? Too many people out there with two hands. (Mumbling to self - where *is* that chop saw? I'll fix that problem.... grumble....)

    • I gave up my left arm to be a two-fisted drinker. I'm ambidextrous - I use one hand just as well as the one hand. I get to do everything single handedly. My off-handed puns are in my second-hand shop.

    • Max: I have a right hand and about half a forearm on the left. I have been an embedded guy for decades. I build stuff - robots, houses, cars. So - no need to panic. The Thunder Ghod is one of my favorite examples. Dude has the right attitude - one has a new hobby. Worst case - for you, my consulting fees are half off - a five finger discount - steal as it were.

    • I have written *many* device drivers. Two comments: 1. I once found a major problem with the 68000: it had a race condition in the DIVU instruction that was heat related. Once I found a set of values with two results (one divided successfully; the other gave a bogus answer), hitting the chip with a heat gun made it fail, and freeze spray made it succeed. So - be aware of heat and cold changing timing. 2. I had a failure on an ADC. It returned two bytes, and sometimes the first byte was 0. You could continuously read the results, and I noticed the second (etc) results were always correct. I suspect a race condition - I took data and gave it to the manufacturer. My solution was to read N times, ignore the first set, and take the average of the N-1 samples. I recommend reading "The Soul of a New Machine" by Kidder. It has a chapter on finding a bug (it turned out to be a race condition) that took months to track down. I had a colleague who worked at Data General, one row of cubes from the project. He knew all of the players. Said the book was spot on.
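
      A sketch of that read-N-and-average workaround, in C. adc_read_raw() is a hypothetical hook for the actual two-byte read, and N is a tuning choice - this is illustrative, not the original code:

          #include <stdint.h>

          #define ADC_SAMPLES 8   /* N: first sample discarded, N-1 averaged */

          uint16_t adc_read_raw(void);   /* assumed: one raw 2-byte conversion */

          uint16_t adc_read_filtered(void)
          {
              uint32_t sum = 0;
              int      i;

              (void)adc_read_raw();          /* toss the first (suspect) result */
              for (i = 0; i < ADC_SAMPLES - 1; i++)
                  sum += adc_read_raw();     /* later reads were always correct */

              return (uint16_t)(sum / (ADC_SAMPLES - 1));
          }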

    • I keep engineering notebooks on all projects. I try to write down everything - and kick myself when I fail to write down something critical. It is a learning experience. I use the AMPAD-152. My only complaint is they have a blank page at the front that really should be a TOC to fill in. They lay flat with wire bindings. Handy, because you can use one hand to write things down, instead of a stitch binding which needs one just to hold the page flat. (having only one hand, I find that useful :^) I would make use of engineering notebooks mandatory, with once a week switching of notebooks for a review.

    • I use asserts mainly to make sure I have set up, and am walking, my data structures correctly:

          typedef struct {
              uint16_t type;    // TYPE_FOO = 0xBEEF
              // etc
          } FOO_t;

          void bar( FOO_t *foo )
          {
              assert( foo );                    // non-NULL
              assert( foo->type == TYPE_FOO );
          }

      The consequence is I find data-struct problems at the spider monkey stage, not the King Kong stage. This includes accidentally walking over a buffer into a struct following it in RAM. Also:

          char foo[], bar[];
          strcpy( foo, bar );   // also use for strncpy()
          assert( sizeof(foo) > strlen(foo) );

      I keep assert() in production code, and log it to EEPROM. I have done stats on my code with assert() in and out: typically only a 2..3% difference. If my time-critical code is within limits with the asserts in, why remove them? I often add a new case to existing code, and the switch default-case assert in the low-level code kicks in. Saves me time from a brainfart. Also - I had a chat with a former client. They had changed the system (normal types of changes any system will undergo), and one of my asserts kicked in in low-level code. They were able to handle the case easily. Without the assert, it would have taken *weeks* to find.
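
      A self-contained sketch of the whole pattern (the FOO_t names and the 0xBEEF tag follow the fragments above; the op_t enum and the handlers are invented for illustration):

          #include <assert.h>
          #include <stdint.h>

          #define TYPE_FOO 0xBEEF        /* big, non-ASCII-looking tag value */

          typedef struct {
              uint16_t type;             /* always the first field */
              int      payload;
          } FOO_t;

          typedef enum { OP_READ, OP_WRITE /* OP_NEW added later... */ } op_t;

          void bar( FOO_t *foo, op_t op )
          {
              assert( foo );                      /* non-NULL */
              assert( foo->type == TYPE_FOO );    /* really a FOO_t, not the
                                                     leavings of an overrun */
              switch( op ) {
              case OP_READ:   /* ... */  break;
              case OP_WRITE:  /* ... */  break;
              default:
                  assert( !"unhandled op_t" );    /* kicks when someone adds a
                                                     case and forgets this code */
                  break;
              }
          }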

    • I ran across this: earthmake.com - a branch of earthlcd.com - has an Arduino UNO with a color touch screen attached. $90. (Disclosure: I know the guy - Randy - from embedded shows; other than that, no connection.) The API is full-blown - buttons, boxes, text, etc. The touchscreen response is kind of slow. Lots of examples. My main complaint is he used the HW serial port to talk to the LCD controller. But - for a quick project, not bad. No layout tools - you need to do it old-school. But for a robotics project or small system interface, not bad. Jack: I assumed you pulled the monitor apart *after* you were done with your little adventure...

    • Wiznet makes some good hardware solutions. I used the W5100 on a project (SparkFun has one on a board, with RJ45 and 0.1" berg pins). Pick an interface and run their example code thru it, stripping out the rest of the stuff. The code works, though there are some funky things about controlling the chip. UDP and TCP/IP - 4 independent channels, each may be a client or server. Separate RAM for in and out. Does all of the handshaking for you. Just a happy customer...

    • It irritates me when the EE makes the development board the final form factor size, leaving off the headers I can clip a probe to. And - with surface mount, it takes real skill to solder a blue wire onto a fine-pitch lead (or J-lead/BGA) to attach the scope to. At least you have three hands - try it with only two! And type in a CLI command at the same time to trigger the event.

    • My dad bought a TV for the occasion. I was 12. I will never forget. I saw most of the Mercury and Gemini launches, too.

    • I see you have found the "New Revised Standard" version of the Book of Armaments. (My favorite, after the Book of Hesitations.) On a somewhat more (ahem) serious note, one should write all device drivers as device drivers, ie some sort of open/close/read/write/ioctl or init/read/write/control model. There is rarely a need on bare-iron to close a device. The init() should take a struct with some sort of general parameters, ie a port number, baud rate, etc, using #define values. (I prefer passing in a pointer to a struct instead of explicit parameters - a matter of style.) I always have mine return a number (fd) even though I may ignore it, because if I port the code to a different platform I may need it. This also makes it easy to port the code to a different platform in the same family, ie from an ATMEGA to an ATXMEGA.
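
      A minimal sketch of that init/read/write/control model for a UART. All names here (UART_PARAMS_t, uart_init(), the #define values) are illustrative, not a real API:

          #include <stdint.h>

          #define UART_PORT_0      0
          #define UART_BAUD_115200 115200UL

          typedef struct {
              uint8_t  port;      /* which physical UART */
              uint32_t baud;
              uint8_t  bits;
              uint8_t  parity;    /* 'N', 'E', 'O' */
          } UART_PARAMS_t;

          int uart_init( const UART_PARAMS_t *p );             /* returns fd, or -1 */
          int uart_read( int fd, uint8_t *buf, int len );
          int uart_write( int fd, const uint8_t *buf, int len );
          int uart_control( int fd, int cmd, void *arg );      /* the ioctl analogue */

          void example( void )
          {
              UART_PARAMS_t p = { UART_PORT_0, UART_BAUD_115200, 8, 'N' };
              int fd = uart_init( &p );   /* keep the fd even if unused today */
              (void)fd;
          }

      Porting to a sibling part then means re-implementing the four entry points, not touching the callers.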

    • @mellowdog: You are correct - the architectural level is at the "black box" stage. However, good article (I know I am late to the discussion). @krwada: we are in violent agreement. Clients often do not know what they want, and it takes "do you mean this or that" to help them. (Of course, I have the same problem with tax forms, so I use a beancounter.) I find the best question to ask about any requirement WHAT is WHY. If you cannot give me a WHY, then there should *not* be a WHAT. (Sometimes it's just "because that's what 60601... says", but that is sufficient.)

    • Excellent article. However, it seems the manufacturer is rather sloppy in the quality control department. It would seem this might be advertised as a medical device. I would like to see their V&V specs.

    • I enjoy bringing up new HW, especially when I have had a hand in specifying what the HW was going to do. I always assume there will be problems, but I also recognize my code might have a problem. One of the first things I get going is a simple CLI, which allows me to create simple commands to exercise the hardware. Comes in real handy in debugging device drivers. The CLI is useful in production tests.

    • I do a similar technique, but use an xxx.tbl file and include it as needed. I include it first in an xxx.h file to create a common set of enum values, then several times in an xxx.c file to create tables the code can search. This also allows me to define several types of macros in the xxx.tbl file (ie CLI_CMD() and CLI_CONST() for a CLI). I just used it to define the binding of control bits to port bits. A miniature of the pattern is sketched below.
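
      A miniature of the pattern (often called X-macros); the file names and entries are illustrative:

          /* cli.tbl -- one entry per line; the includer defines CLI_CMD() */
          CLI_CMD( help, cmd_help, "list commands" )
          CLI_CMD( peek, cmd_peek, "read a register" )
          CLI_CMD( poke, cmd_poke, "write a register" )

          /* cli.h -- first expansion: a common set of enum values */
          #define CLI_CMD( name, fn, desc )  CLI_ID_##name,
          typedef enum {
          #include "cli.tbl"
              CLI_ID_COUNT
          } CLI_ID_t;
          #undef CLI_CMD

          /* cli.c -- second expansion: a table the code can search */
          typedef struct {
              const char *name;
              void      (*fn)( int argc, char **argv );
              const char *desc;
          } CLI_ENTRY_t;

          #define CLI_CMD( name, fn, desc )  { #name, fn, desc },
          static const CLI_ENTRY_t cli_table[] = {
          #include "cli.tbl"
          };
          #undef CLI_CMD

      One line in cli.tbl keeps the enum and the search table in lockstep; adding a command cannot leave one of them stale.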

    • @Nails: I suspect that Jack has forgotten more about embedded systems than you and I (combined) will ever know. I will fully admit I do not know about the three products you mention (which are in the datasheet), because I do not like TI micros (personal taste - I found Motorola products much better suited to my needs when TI/Motorola DSPs were the common choices in the 90's). I still do not have a reason to use TI products. Also, I must admit to thinking poorly of TI as a company, having been at a client that was bought by TI and seen the transition. (TI attempted to snow the original employees (many of them Stanford PhDs) into giving up their legal rights, then treated most of the original employees very badly, to the point that most left after their stock vested.) However, this does look like RTOS lock-in for TI. I will stay happy with my three top choices: ThreadX, uC/OS, and FreeRTOS. They do what I need, they have full source, and they are proven. I have other things to worry about. But - one would think TI would bother to: 1. follow a consistent C coding standard (pick *one*, any one) 2. use consistent atomic type definitions (UINT?? - why not use stdint.h). Calling someone like Jack "a severe idiot" is downright rude. One can be mistaken - we all are from time to time. I am embarrassed for *you*. While you may be correct in your information, you come across as a troll. I learned a long time ago one can be polite while presenting conflicting opinions and information.

    • This basic technique was described at USENIX in 1984. Basically, you create a normal driver whose purpose is to give you direct access to the hardware. If you need an ISR, it lives in kernel space, but you use ioctl (wow - ioctl is in the dictionary!) to access the results of the ISR, ie the incoming buffer. Linux 4.6 is much kinder about writing device drivers than earlier kernels, and than most Unix variants. The basic Linux device driver load/unload seems to be from Solaris (in my personal experience - I've done a variety of *nix drivers, but not every variant). My biggest complaint after 30 years of writing device drivers for many platforms is that the devices themselves are designed by EE's, who are clueless about what the device driver writer really needs. The EE's make devices that are difficult to control. A simple example is status registers that clear their bits after a read. Often, you want a status bit to be "sticky", ie set until explicitly cleared. The 16550 UART with several channels springs to mind. (Oye!)
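
      A minimal sketch of that driver shape against a reasonably recent Linux kernel: the ISR (not shown) fills a buffer in kernel space, and an ioctl hands the result to user space. The names, major number, and ioctl command value are made up; error handling is trimmed:

          #include <linux/module.h>
          #include <linux/fs.h>
          #include <linux/uaccess.h>

          #define RAWDEV_GET_BUF 0x1000   /* illustrative ioctl command */
          #define RAWDEV_MAJOR   240      /* "experimental" major number */

          static u8 isr_buf[256];         /* filled by the (not shown) ISR */

          static long rawdev_ioctl(struct file *f, unsigned int cmd,
                                   unsigned long arg)
          {
              if (cmd != RAWDEV_GET_BUF)
                  return -ENOTTY;
              if (copy_to_user((void __user *)arg, isr_buf, sizeof(isr_buf)))
                  return -EFAULT;
              return sizeof(isr_buf);
          }

          static const struct file_operations rawdev_fops = {
              .owner          = THIS_MODULE,
              .unlocked_ioctl = rawdev_ioctl,
          };

          static int __init rawdev_init(void)
          {
              return register_chrdev(RAWDEV_MAJOR, "rawdev", &rawdev_fops);
          }

          static void __exit rawdev_exit(void)
          {
              unregister_chrdev(RAWDEV_MAJOR, "rawdev");
          }

          module_init(rawdev_init);
          module_exit(rawdev_exit);
          MODULE_LICENSE("GPL");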

    • I would like to thank pip_010 Johnson for some of the finest frontier gibberish I have heard in a long time. (Ribbit!)

    • Gerry Weinberg, in Secrets of Consulting, has the Orange Juice Test. Basically, a sales manager wants 1000 16-oz glasses of OJ squeezed in 30 minutes. The place that says "it will cost this much" gets the gig. The test is simple: if somebody wants something extraordinary, it will cost extraordinarily. One of the advantages of doing a full analysis is that when the customer comes in at the last minute with something new or some other change, you can say "these are the affected requirements - which one will you relax?" This happened to me on the last project. It was easy to make the change (a majic number), but it forced the client to understand the change so they could make a decision that matched their desires.

    • @DutchUncle: "Problem is, C only has one level of definition - global. But you don't want the space and time overhead of re-doing the context tests (more formally, parameter verification) when the subroutine is clearly intended to be used only in an already-tested context. If only you could constrain the routine's use! Well, with only one level of definition you can't, and that's why some people *discourage* factoring and reuse - which in turn leads to maintenance and parallelism headaches later." Well ... this is not true. You can restrict the scope of a function or variable to a file. You can limit the scope of a variable to a function. You can pass in variables (specifically, a copy), which means the copy is local - you need to do something to make a function argument visible outside of the function scope (ie pass in a pointer to an int, instead of the int). You can put type checking in structs, and assert you have the type of struct you think you do. You can check for buffer overflow. You can control *all* of memory and *all* time constraints. You can do a number of things - is C perfect? No. Do you need to go to some effort to make it safe? Yes. But it *is* possible to write clean code for verifiable systems of fairly high complexity. Don't blame the language for things it is not responsible for. (A sketch of the scoping tools follows below.) Having said that, there is a *lot* of terrible code out there. I do mission-critical embedded systems. I just finished a project to replace *really* bad code, in C. The hardware was really bad, too, and part of the bad code was there to make up for the bad hardware. I listen to music on youtube via firefox and end up rebooting it (or it crashing) at least 1..3 times a day. But I am not doing anything serious with it. However, I want my avionics to be rock solid, both as a developer and as a passenger. BTW - to the folks at JPL and other places (at least 100 in my home state of New Mexico) - WELL DONE!! Ad Astra!
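
      The scoping tools in miniature: "static" limits a function or variable to one file, which is exactly the "constrain the routine's use" being asked for (names invented for illustration):

          #include <assert.h>

          /* File scope only: invisible to every other translation unit. */
          static int call_count;

          /* Private fast path: only reachable from this file, so it runs
             only in the already-tested context below. */
          static int sum_unchecked( const int *v, int n )
          {
              int s = 0;
              while (n--)
                  s += *v++;
              return s;
          }

          /* The public, verified entry point. */
          int sum_checked( const int *v, int n )
          {
              assert( v != 0 && n >= 0 );
              call_count++;
              return sum_unchecked( v, n );
          }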

    • An embedded system, by its very nature, has inputs, outputs, and the specific task of converting the ins to outs. Simplest case: read an input, do some simple math, change an output. But this requires knowledge of how the input sensor works. A thermistor into an ADC seems simple, but the resolution, sample rate (Nyquist), drift, offset, non-linearity, and range of the thermistor circuit are all factors. Do you calculate the thermistor resistance-to-temperature conversion on the fly, or use a lookup table (sketched below)? How much time do you have? Memory for the lookup table? Integer math or floating point? Then the output: is it a DAC, PWM, simple threshold (ie thermostat)? Is this a small run (100 units, cost insensitive) or a toy (millions of units, where a few pennies are critical)? These are just a few of the questions that need to be asked both at the start of a project and all of the way thru. Unfortunately, this kind of design process seems to rarely be taught. I now have one metric for whether a project was well thought out: if it uses a PID controller, look very carefully at why. Most of the time it's because the hardware is very difficult to control (ie designed by the EE), or the programmer never bothered to properly analyze the circuit. There are times for a PID controller, and I have worked on such systems. However, my last two projects (I was brought in because of serious problems) had PID control SW for the wrong reasons.
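
      For the lookup-table option, one common shape is a small sorted table with integer linear interpolation - memory traded for time, no floating point. The table values below are placeholders, not real thermistor data:

          #include <stdint.h>

          typedef struct { uint16_t counts; int16_t temp_x10; } TT_ENTRY_t;

          static const TT_ENTRY_t tt[] = {    /* sorted by ADC counts */
              {  100, 1200 },   /* 120.0 C */
              {  400,  600 },
              {  700,  250 },
              { 1000,    0 },   /*   0.0 C */
          };
          #define TT_N (sizeof(tt) / sizeof(tt[0]))

          int16_t adc_to_temp_x10( uint16_t counts )
          {
              unsigned i;

              if (counts <= tt[0].counts)      return tt[0].temp_x10;
              if (counts >= tt[TT_N-1].counts) return tt[TT_N-1].temp_x10;

              for (i = 1; counts > tt[i].counts; i++)
                  ;   /* find the bracketing pair */

              /* integer linear interpolation between tt[i-1] and tt[i] */
              return tt[i-1].temp_x10 +
                  (int16_t)((int32_t)(tt[i].temp_x10 - tt[i-1].temp_x10) *
                            (counts - tt[i-1].counts) /
                            (tt[i].counts - tt[i-1].counts));
          }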

    • @dave brown: Let me first give you a pointer to one of Linus Torvalds' rants against "volatile": http://lwn.net/Articles/233482/ I read his rant, and he is correct - use of volatile on a struct that is common to multiple threads/ISRs is not correct. It is also obvious that the I/O there is not memory mapped. I/O-mapped access is different, and does not encounter the same situation as memory mapped. At a minimum, Linux is using the I/O-mapped functions, which should guarantee proper behavior. volatile is appropriate on a memory-mapped hardware register. Now if we can only get FW guys to design the interfaces to devices, instead of EE's. The 16550 status register is a zen-like example. If I ever lay hands on the so-and-so ....
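
      The legitimate use in one line - a memory-mapped register read the compiler must not optimize away (the addresses are made up):

          #include <stdint.h>

          #define UART_STATUS (*(volatile uint8_t *)0x40001000u)
          #define UART_DATA   (*(volatile uint8_t *)0x40001004u)
          #define RX_READY    0x01u

          uint8_t uart_getc( void )
          {
              while ((UART_STATUS & RX_READY) == 0)
                  ;   /* without volatile, the compiler may hoist the load
                         and spin here forever */
              return UART_DATA;
          }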

    • I saw a talk by the curator of the Bletchley Park Museum on this project. Really amazing. The talk was around 2002 at the Computer History Museum in Santa Clara. I cannot remember the name of the curator - I am not sure if it was Mr. Sale. In any case, recreating Colossus was a real effort by many dedicated people (not to dis Mr. Sale), and a real contribution to the world.

    • Yep - and I assume your brain, too :^) A CLI on a serial port is a wonderful thing. It has the added benefit of providing a means of scripting manufacturing tests.

    • "Say what you will, the U.S. private sector is SIGNIFICANTLY more effective and efficient at just about everything." While I would generally agree with Mr Knapp, I have three counter examples: 1. The WWII US Military vs the ongoing two wars. Partly this is a function of scale. Yes, companies did military contracts for profit in WWII, but it was primarily for *goods, not services*. The military did its own services, provided its own security, etc. A further discussion is probably out of the scope of this thread. 2. The plethora of TLA's created by FDR - they put a lot of folks to work after years of no work, because private companies could not create the jobs. This was a LAST course of action. It also prevented riots and a breakdown of general society. We may yet see this as the only viable option. 3. Google "medicare overhead vs private insurance" and read the articles on the first one or two pages. There were two basic points that were important in the articles: a. Scale matters. The govt gives the needed scale that a private company cannot. b. Health insurance is based on the same model as auto and other insurance. However, health insurance is *fundamentally* different - you have a choice to own a car, but not to own a body. Yes, you make choices that affect your health, but many things can happen that you do not have a choice about. It is very difficult to compare the two systems for the above two reasons, but in most of the more independent articles, medicare has better numbers than private industry. As far as why jobs have been sent overseas: short-term gains, without regard to the long-term consequences, even for specific companies. If CEOs were compensated for how the company does over a 5-year period, instead of on a quarterly basis, we would see more jobs here. I have no problem with your approach, but it is one a politician will never use. You *must* be an engineer.

    • If I were an economist, I would look at one thing closely: the rental of moving vans, both commercial and residential, in-town and between towns. This gives a pretty good idea of folks and businesses moving. There is enough real data to do something with. Combine it with some good granular data on home sales, apartment rentals, and office construction and rental, and you could put together quality data that measures the real flow of jobs and money in the US. You would need to account for the part that is military transfers, but that is quantifiable. But ... I agree with Jack - how can you be precise with such blunt tools? I can measure something to 0.000001 units, as long as I get to choose where 0 is after the fact.

    • (continued) ... This is true in the FPGA world, too. The level of abstraction above VHDL and Verilog is not there yet, unless you include the C compilers generating VHDL and Verilog. (A former client told me he gets a 10x decrease in development time using Handel-C - because he has the transistors to burn.) The new generation of FPGA tools is going to take a while to mature. Then you have the issue of mashing these new tools into a seamless mass. That will also take some serious work. I am not sure it is even worthwhile to do the integration into a single tool. Linux has a footprint issue - it is *way* overkill for small and medium sized applications. Yes, it can be whacked down, but right now it starts out-of-the-box with lots of stuff you don't need, not to mention it is not real-time for almost all distros, and the real-time ones come with paid license models. A Linux OS is useful, IMO, only for applications with lots of resources, short development times, and non-real-time needs. The real-time stuff still needs to be handled separately (dedicated hardware and micros). I mean, when was the last phone made whose only purpose was to make calls? They all have games, calculators, calendars, etc.

    • I see two basic problems: there are different levels of scale (@pmoyle is Right On), and the tools are not here yet. It will take a while to create tools that are effective over a range of applications. One is the OS architecture, which RTOS's fill right now (or the standard control loop of state machines). A universal OS, capable of handling real-time system constraints, is still a ways off, RTOS or Linux. What we are seeing are tools that handle specific areas - Matlab to C (math algorithms), UML to C (state machines), etc. The real-time community tends to be rather conservative in its practices, because failure == death. Change comes slowly for good reason, which is why C is very common, and a few RTOS's are at the top (ThreadX, Wind River, QNX, Integrity, uC/OS, to name most of the big hitters). Licensing and footprint are two of the reasons control loops are still used instead of an RTOS. Other tools will take a while, because of the time it takes to get them right, and each will come with its own limitations. (I suspect most Matlab-to-C code has a large footprint by the time you suck in all of the libraries.)

    • Funny thing - just read this:

      http://www.rdmag.com/News/2011/07/Life-Sciences-Imaging-Cornell-Develops-A-Lens-Free-Pinhead-Size-Camera/?et_cid=1782925&et_rid=54726991

      "Cornell develops a lens-free, pinhead-size camera" (Wednesday, July 6, 2011): It's like a Brownie camera for the digital age: The microscopic device fits on the head of a pin, contains no lenses or moving parts, costs pennies to make - and this Cornell-developed camera could revolutionize an array of science from surgery to robotics. The camera was invented in the lab of Alyosha Molnar, Cornell assistant professor of electrical and computer engineering, and developed by a group led by Patrick Gill, a postdoctoral associate. Their working prototype, detailed online in Optics Letters, is 100th of a millimeter thick, and one-half millimeter on each side. The camera resolves images about 20 pixels across - not portrait studio quality, but enough to shed light on previously hard-to-see things.

      Might be interesting to glue a bunch of these together with a wireless link and scatter them around, especially where the citizens want to watch the cops, like at demonstrations. Make them small enough to attach to someone's clothes (via a brush-pass, etc) and see what they say "in secret". Shades of David Brin's "Earth" and "Transparent Society".


    • Full disclosure: I had Al Stavely as a CS instructor at NM Tech, and I was a reviewer of the book. I like his basic concept of the word processor with the embedded code. However, I have not yet actually tried it - mainly because I have been too far down in the trenches to actually get the tool together. I do use Word to create the various docs - requirements, functional, design - then vi to write the code. When I can find the time, I will try a project. One of my concerns is the life of the tools vs the life of the code. vi and text files will be around for a long time. Word versions have shorter lifespans, but the OpenOffice stuff might help solve that problem - we will see. I found the material on assertions to vary from practical to academic - it can get obscure quickly. I think of assertions as assert() from a practical viewpoint - they should be executable to be effective. Having said the above - his basic point about the act and art of writing code is dead on. We need to concentrate on communicating with the poor human (perhaps ourselves) who has to read and understand this stuff. The old "the code should be self-documenting" is balderdash for anything over a few hundred lines. I prefer, if possible, to turn the design doc into the comments, even if I need to put the "why and what" into an include file. If it gets out of date, it still gives me the thinking behind the code - what I wanted the code to do in the first place, which gets lost in the usual case. I think the book is useful, especially if it helps set a coding and commenting style for a team. Perhaps a practical compromise would be smaller Word docs that are cross-linked to code (ie a comment that says "see foo.doc for the details on the foo subunit"). I have done this with logbook pages - scanned them in (ie a derivation, or notes on controlling a complex device) to a PDF, then kept the PDF with the code files. Al's book did change my thinking - one of the main purposes of this type of book in the first place.

    • There is a much simpler method - still using the same timer, with the same discussion on frequency. Just maintain a counter. If unsigned:

          if( counter >= thresh ) { set output high }
          if( counter >= 100 )    { counter = 0; set output low }

      If a signed value, set the counter to (-thresh) and test for >= 0, then for (100 - thresh). The unsigned version is the easiest. This method avoids the modulo - a very expensive operation. Also, obviously, set the initial conditions properly. Might also want to put the whole logic inside an if( doing_pwm ) to turn the PWM on and off. A fleshed-out version is sketched below.
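
      The same counter, fleshed out as a timer ISR. set_output() and the ISR hook are assumed platform hooks; with a period of 100 ticks, thresh reads directly as a percentage (output low for the first thresh ticks, high for the rest):

          #include <stdint.h>

          #define PERIOD 100

          extern void set_output( int level );    /* assumed: drives the pin */

          static volatile uint8_t thresh = 25;    /* duty point, 0..PERIOD */
          static volatile uint8_t doing_pwm = 1;
          static uint8_t counter;

          void timer_isr( void )    /* called PERIOD times per PWM cycle */
          {
              if (!doing_pwm)
                  return;

              counter++;
              if (counter >= thresh)
                  set_output(1);        /* high for the rest of the period */
              if (counter >= PERIOD) {
                  counter = 0;
                  set_output(0);        /* low for the first thresh ticks */
              }
          }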

    • Today happens to be my 20th anniversary. My wife, as a teenager, decided that if she ever married it would be to a nerd. She grew up in rural Washington state, where her choices were miners, loggers, and dairy farmers. She is my opposite in most ways. Folks who knew her from childhood, and other close friends, were quite shocked when they met me - not what they expected. But - both she and I knew what we were looking for. She is a former Marine and quite strong-willed (very important for any woman marrying into my family). It *did* take her about 4..5 years to *really* be aware of what it means to marry an engineer. (This matches the experience of the wife of another good engineer friend - they are at roughly 40 years of marriage.) I will say or do something and my wife will still turn to me and say "You are a freak of nature". I give young engaged couples (when they will listen) three pieces of advice: (1) go on a long trip or other stressful sequence; (2) elope, then throw a party (too much money is spent on "her big day"); (3) do your bills together - then there is no doubt about the money. Two engineering students/friends met one day on campus. One had a new bike. The other admired it. The one with the bike told of a beautiful woman who rode up to him on the bike, got off, took off her clothes, and said "You can have anything you want". The other nodded, and pointed out the clothes probably would not have fit. Lots of truth there....

    • "The Diamond Age" by Stephenson is going to be here in just a few months. The plastic CPU will end up in cheap, disposable consumer products like cell phones and toys. Put one in a cable to make it smart.

    • Build the prototype as big as you need - all components on the top if possible; only bypass caps on the bottom is next best.

      Write a simple CLI on the serial port: simple commands with 2..4 parameters (allow decimal and hex; that is usually all that is needed). If the command is preceded with "rep " (ie "rep foo 0x02"), repeat the command until you hit ESC. Have some way of changing the delay, in msec, between loops. Use the CLI to test the hardware - the repeat is really useful for that. Build the CLI into the final app if you can, or keep a version with the CLI dedicated to just testing the hardware (also useful for production line testing with I/O boards in a PC). A sketch follows below.

      Have the poor sod who has to write a device driver for that custom device tell the HW guy what he needs to write the device driver in a simple, robust manner. Avoid write-only registers and status registers that clear when read. (This is a *much* longer rant...)

      Have test points on every line if possible, and on every control line. Test points can be 0.1" 2xN holes you can solder a pin into.

      Write the complex stuff on a PC and use typedefs such as UINT8, UINT16, etc that can be defined for the target system.

      Have debug vars with bitfields to turn on/off printf()s in the code. Have separate bits for function entry/exit values. Have separate debug vars for each module. Have the ability to compile those debug printf()s into the code or leave them in the source as blank lines.

      Run devices for 24 hours. Push them to limits of speed and range. Have something to measure or feed them. Serial ports: echo on a PC and make sure the data sent is checked on receive (ie an ascending sequence). Use an external watchdog.

      Be kind to yourself on the design and layout of the prototype. Don't do the layout in the final form factor until everything works. Spin the prototype once more to make sure you got all of the changes before the final form factor. Mount the original prototype as a trophy.
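
      A sketch of that CLI core, with the "rep" prefix. getchar_nonblock(), delay_ms(), and the command set are assumed platform hooks; strtol() with base 0 gives decimal-or-0x-hex for free:

          #include <stdio.h>
          #include <stdlib.h>
          #include <string.h>

          #define MAX_ARGS 4
          #define ESC      0x1b

          extern int  getchar_nonblock( void );   /* -1 if no char pending */
          extern void delay_ms( unsigned ms );

          static unsigned rep_delay_ms = 100;     /* settable between loops */

          static void cmd_foo( long a[], int n ) { printf( "foo %ld\n", n ? a[0] : 0 ); }

          static void run( const char *name, long a[], int n )
          {
              if (strcmp( name, "foo" ) == 0) cmd_foo( a, n );
              else                            printf( "?\n" );
          }

          void cli_line( char *line )   /* one entered line, ie "rep foo 0x02" */
          {
              long  args[MAX_ARGS];
              int   n = 0, rep = 0;
              char *name, *tok = strtok( line, " " );

              if (tok && strcmp( tok, "rep" ) == 0) { rep = 1; tok = strtok( NULL, " " ); }
              if (!tok)
                  return;
              name = tok;
              while (n < MAX_ARGS && (tok = strtok( NULL, " " )) != NULL)
                  args[n++] = strtol( tok, NULL, 0 );   /* base 0: decimal or hex */

              do {
                  run( name, args, n );
                  if (rep) delay_ms( rep_delay_ms );
              } while (rep && getchar_nonblock() != ESC);
          }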

    • Cost of the micro is a function of the number of units sold vs the NRE (broadly speaking). If you are going to make millions of units, it makes sense to spend the money to save a nickel on each unit - the NRE is made up quickly. But if the units sell for 5x or 10x of the BOM, then engineering time is more valuable than the nickel. I am told doorknobs have 4-bit bangers in them (never had a chance to take one apart :^) - I suspect it's a 4004 or similar. You don't need more than that, and they need to sip power. 8-bit micros will be around for a long time, for the reasons most folks have posted - ease of use, cost of entry, don't need the power. I have a back-burner project - an LCD controller - and there is absolutely no need for a 32-bit 400MHz ARM. I have used ARMs and Coldfires, and really like them, but only when I had a real need for that sort of horsepower. It's like putting a muscle car engine in a VW bug - way overkill that can wipe you out. On the cost of entry - with most 8-bit chips you can get DIP packages and use a cheap/free PCB layout package and one of the proto board houses (I like www.apcircuits.com), so you can lay out the board at a reasonable scale, solder sockets onto it, and get something working cheap. You need a higher class of tools and more expensive board houses (a factor of 5x to 10x) if you need to go to 5-mil or even 3-mil traces.
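
      To put illustrative numbers on the trade-off (figures invented for the example): saving $0.05 per unit against $50,000 of added NRE breaks even at $50,000 / $0.05 = 1,000,000 units. Below that volume the engineering time costs more than the nickel saves; above it, the nickel wins.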

    • I used to use PICs, but now am firmly an Atmel AVR (8-bit) freak. Wide range, great instruction set, free tools (AVR Studio and gcc), killer devices that are easy to write drivers for, good power consumption, built-in osc to 32MHz, plenty of pins (or just a few on the ATtiny), built-in USB support if you need it, etc. The documentation is good - the appnotes are real good, and the code examples are good if you strip out the weird comments (yes, they are there for the automatic documentation package - they just get in my way). One of the things that drove me away from PICs is the really bad PIC24 CCS compiler - it is very buggy. Microchip does not do a good job qualifying third-party products. I lost 60 hours of my life in the middle of a deathmarch to the CCS bugs. I even had to write my own UINT32 divide routine; the CCS version would fail sometimes. Not that I mind ARMs - I have used an ARM7 in the past and like the ISR shadow registers. Also, when I use an ARM I have two (I am the one-armed bandit...). Still - that is way too much horsepower for what I normally use. (I also like Coldfires for the higher-end stuff. Check out www.netburner.com - great stuff if you need that level.)

    • Memory leaks - better still, never use malloc() and statically define the buffers (or malloc only once, for a fixed buffer pool - a sketch follows below). KEEP STATS on the buffer pools. The end-notes are correct. Deadlock - Dining Philosophers. Race conditions - don't allow them, wait until the race is guaranteed over, or anything else you can think of; only the first two actually work.
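
      A minimal fixed buffer pool of that kind, with the stats kept (sizes illustrative; guard pool_get()/pool_put() with a lock or interrupt disable if an ISR shares the pool):

          #include <stddef.h>

          #define POOL_BUFS 16
          #define BUF_BYTES 128

          typedef struct buf { struct buf *next; unsigned char data[BUF_BYTES]; } BUF_t;

          static BUF_t  pool[POOL_BUFS];          /* the one static allocation */
          static BUF_t *free_list;
          static unsigned in_use, high_water, fail_count;    /* KEEP STATS */

          void pool_init( void )
          {
              int i;
              for (i = 0; i < POOL_BUFS; i++) {
                  pool[i].next = free_list;
                  free_list = &pool[i];
              }
          }

          BUF_t *pool_get( void )
          {
              BUF_t *b = free_list;
              if (!b) { fail_count++; return NULL; }
              free_list = b->next;
              if (++in_use > high_water)
                  high_water = in_use;            /* worst case ever seen */
              return b;
          }

          void pool_put( BUF_t *b )
          {
              b->next = free_list;
              free_list = b;
              in_use--;
          }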

    • About half of my career has been writing device drivers. It would drive me to distraction to be handed the prototype hardware in the final form factor. The one thing that made the situation workable was always having access to a good tech who could solder blue wires to a 2xN berg header I could put scope probes on. Make the board as big as needed, with all components on one side and lots of test points. Every trace gets a test point. Once that PCB has been cut, the PCB layout person can do a rough layout of the final form factor. Do all of the development on the big board. When you are happy, spin the big board *one more time* to make sure every change is documented and correct. Take the original big board and frame it - hang it on the wall. Verify the new big board, put it in an anti-static bag, and carefully stash it away - all future development is on that board. Finish the layout on the final form factor PCB, cut, stuff, sell. BTW - handy test points are ones you can plug a logic analyzer pod directly into. Put notes on the schematic, ie what the pins on a connector are for. Make all directions (TX, RX) device-centric. Write a design document for the hardware && firmware. The guy writing the device driver should be the architect of the device. I worked with a sharp EE who was stuck on a device design. I asked him what issue he was fighting. He was trying to put a status bit in the byte's MS bit position so I could "test for negative". I told him I was *not* going to do so, and to put it anywhere, because I was going to mask off the bit. This greatly simplified things and the design was done.

    • www.ampad.com #22-157. Buy them in quantities of 12. Metal spiral bound, will lay flat - good stuff. The only thing missing is a table of contents. Make up your own, copy it, and tape it in the front. I have logbooks going back 20 years. I miss the ones I could not copy or escape with. I do not like stitched bindings because I cannot lay them flat. I have taken such logbooks (when I could not get my preferred one) to Kinkos, cut off the binding, and had them spiral bound. Hint: a circled number means a page in the same logbook (ie on page 56, a circled 74 means page 74) for cross-referencing. A number in a triangle (or other non-circle) is the number of a different logbook (triangled 3, circled 34, is logbook 3, page 34). Use page 1 for contact info for others on the project, and for frequently referenced or important small references (tool serial numbers, etc).

    • One could also purchase an identical safe to get the emergency key (or steal the key), thereby making the whole issue of the electronics irrelevant. Or blackmail the guy who made the key. Or blackmail the engineer into putting in a backdoor sequence (pi to 9 digits?). Or, as Feynman discovered, try the factory combo, because folks do not change the combo. Or figure out exactly where to drill to get to the motor contacts or wires and directly engage the motor. Or determine how to replace the keypad with one that looks the same, but records the key sequence. Or use an overlay keypad that looks correct but logs the key sequence. This assumes, of course, that more brute-force methods - such as making it go boom, or physically stealing it and dropping it from a great height, or taking it back to the secret lair under the volcano to work on it in peace - are off the table. Nice effort, but too many places to get around the lock. The mattress is more secure.

    • Jack: My deepest sympathies to you and your family. I happen to agree with your basic attitude. I have not had the experiences of TFC-SD. If one *is* addicted to nicotine, the patch is a really good way to go. My instructor in college for Drugs and Human Behavior, Dr Frank Etscorn, held the basic patent on the patch (full disclosure), and his research was almost strictly centered around nicotine. One of his findings: nicotine (via a patch/gum, *not* smoking) will not cause physiological harm, other than death via overdose. Simply put, if it does not kill you, it does not harm you. Note that the LD50, the overdose limit, is quite small. Two patches at the same time will kill a fair percentage of users. See http://en.wikipedia.org/wiki/Nicotine_poisoning Smoking as a delivery mechanism is what causes the cancer. There is nasty stuff in that smoke. For the survey: never smoked, never pressured.

    • This article makes a serious assumption. I just toured a facility with machines to deconstruct ICs, both destructively and non-destructively. One machine they have is equivalent to a CAT scanner for chips. They can read the links in flash chips to reconstruct the data - they get the contracts to read damaged flight recorders. This makes it trivial to read the crypto key. The machine is only a few million dollars - well within the reach of any group that is a serious player. They would also need to know how to get around ITAR and export controls, but that only takes money - and a serious player could do so, sad to say. This assumes government or international players, but good old industrial spying is viable, too. One stolen flash drive with IP could pay for the machine the first time. Security needs to be built into the system from the start, and humans will still be the weak link. The article's tech will only slow down the Bad Guys, not prevent them from being Bad.

    • There are only two reasons to learn a computer language:
      1) it changes the way you think
      2) someone pays you to write a program
      - bandit's rule of languages (inspired by Pat Orr)

      There cannot be good engineering without conflict. - Liddell's rule of engineering

    • I had a chat with these folks. I was impressed, but they are focused on x86 and some ARM. Most of the tests (as I recollect) are intended for PC-clone systems. I finished a system last year that was severely broken (in many ways). The first thing I did was write a CLI that allowed us to test one thing at a time on the board. This way, we were able to characterize and fix each device in isolation. This type of testing also transfers to production testing, where the intent is to make sure the hardware works. The firmware for the app gets tested at a completely different time (ie development/customer acceptance). The production test jig issues the CLI commands and examines the results. But - if you are developing the type of system they support, this seems a good way to go. I am all for co-design, especially of devices. HW folks are not generally savvy to the tricks the FW folks use. I worked with one HW guy on a project, and he was stuck at one point. When I asked why, it turned out he was trying to put a status bit at the MS position so I could test for negative. I told him I did not care where it was - I was going to use a bit mask. This made it *much* simpler for him, and thus for me.

    • He didn't mention a Green Hills or other company product once. What he provided, at the simplest level, is a checklist for "I'm lost, now what?". It is easy to get locked in a groove, and an external force is the quickest method to knock you out of that groove. I try to set up a system to do just that for me - because I know I can get trapped in the groove. Notice he talks about knowing your tools. Either you make them or buy them, but not knowing them (how to use them, what they can do) is one of the problems I have seen in a lot of places. I fight this in myself. One thing he did not mention: keep a logbook. It is very useful for knowing what has happened - the brain gets fuzzy && forgets details. Keep good enough records of your experiments (you *do* experiments to debug, I assume), and you can see the trends. Designing in trace tools and logging tools will save you in both the lab && the field. Some are embedded in your system, some external to it. Most you will need to make/modify yourself, because you have unique things to look at. So, while the article was light on details, it is a good high-level checklist. And - I have worked on embedded systems where the tools varied from a $30K emulator to a board with an RS232 port and an LED. Ultimately, your best tool is your brain. All other tools spring from that tool.

    • 1) What to do when things assert: there are two basic choices - log && return an error, or log && reboot. I worked on a state patrol message switcher that mixed these. There was a scripting side that would report the error (ie ill-formed data) and abort the script. If the data/etc looked good, it went thru a firewall. Anything that asserted past that point was a hard reset. We met the spec of less than a minute of downtime in 30 contiguous days in a 90 day period - one reset was a few minutes.

      2) I always leave the asserts in production code, unless there is a compelling reason not to. I did a project for a client. 3 years later, I was chatting with someone on the project. They had changed code (a standard upgrade of the product) and an obscure part of my code asserted - they were able to find & fix the problem easily. Without the assert - no way. They could have tracked the bug down - my code tends to be very clean and understandable - but it would have taken a while, because the spider monkey would have turned into King Kong by the time the bug exhibited itself. BTW - that project was mission-critical telecom equipment. That is, in a nutshell, the benefit of asserts: catch the bug at the spider-monkey stage.

      The biggest single thing asserts give you is a run-time check of your data structs. The pointers are non-null, and assert( ptr->type == expected_type ); has saved me more time than I can describe. The other thing they give you is the ability to check the invariants - I expect foo() to return a value 5..10 in all cases. I expect I will *never* overflow the internal buffer: assert( strlen(buf) < sizeof(buf) ); So - I suggest using a combination like we used on the message switcher - check data/format/etc of inputs, then pass them thru the firewall && do hard asserts. Also - every struct has a first field of "UINT16 type;". If the struct is called FOO, the value in .type == TYPE_FOO, and make it a big number, like 0xDEAD or 0xBEEF - something that is unlikely to be a normal value (ie not ASCII or 0x03).

      What would be interesting is to categorize what the asserts in the studied code actually check. A glance at the study does not seem to show this detail. Also, the references do not include "Writing Solid Code" from M$ press. (It hurts to recommend M$ products, and I disagree with some things in the book, but they are style issues.) One point: if you take the asserts out for production code, your error-handling changes - how do you test the error-handling code in those cases?

    • "I read a bit of wisdom in, of all places, a sword-and-sorcery story years ago: 'A knife always works.' Electricity, software, and enchantments may fail, but that little piece of metal will still be there." Conan the Barbarian says: "Steel will get you thru times of no ghods better than ghods will get you thru times of no Steel." When I am doing prototype work, nothing beats the X-Acto knife. I was a young cub, on a summer gig. We were working on a printer driver. My manager, an old IBMer, hauled out his X-Acto knife and started cutting traces. It was a wake-up call for a wee lad.

    • Oops - the formatting on the code fragment turned out badly. And - upon posting and then reading: the pointer itself was not == 0xff; ptr[0] == 0xff. However, ptr[0] should == 0 for a null string.

    • I *always* define and use UINT8, INT8, UINT16, etc. The only time I use 'int i' is when I *know* I am looping thru some short buffer and will always have enough room in the 'int'. However, I usually use a 'UINT32 i'. I almost never use a signed value - there is little need in most of what I do. If the math requires it, of course I use signed, but this is pretty rare in the systems I work on. The only time I use 'char' is when I am using a character array/string - and I usually make it UINT8 instead, then have to cast to (char *) for the C string library functions (sigh). This is one of the few times I cast. (I also do not usually need to work with non-ASCII strings.) I needed a cast for a macro that could take an argument that was either a UINT16 or a UINT32. It was the only way I could get the compiler to shut up. I was doing a byte swap:

          #define swap_field(ff) {                                  \
              if( sizeof(ff) == sizeof(UINT16) ) {                  \
                  UINT16 t16 = SEX16(ff);  ff = t16;                \
              } else if( sizeof(ff) == sizeof(UINT32) ) {           \
                  UINT32 t32 = SEX32((UINT32)(ff));  ff = t32;      \
              } else {                                              \
                  assert( #ff " not UINT16 or UINT32" == NULL );    \
              }                                                     \
          }

      Then I just needed to say swap_field( ptr->field ); to deal with byte-sex in a struct, and I did not need to care about the size of the field. The swap was done once, in place, and I would define SEX16() and SEX32() as needed. Even though the field was UINT32, and SEX32() takes a UINT32, the compiler complained. However, in general, I am careful when I cast && look to see if there *really* is a reason to. I agree with the article. I also have to wonder why a pointer is being checked for == 0xff instead of NULL... Also - why is the 0xff not #defined as some symbol like PTR_DELIM or whatever...

    • I have a very simple description of an embedded system: something the average person does not think has a computer/micro in it. This is a variation on the question I always ask first: what percentage of micros end up in laptop and tower computers, the kind your [grand]mother would say is a computer? I then suggest 10, 20, 50, 80% ... and most folks guess 80+%, and I tell them that depending on the source and the year, the answer is 2% to 0.5%. (The interesting part is just getting them to make a guess, even though there is no obvious penalty.) They are amazed. I have to point out that the average cell phone has more compute power than existed in the *world* in 1970. I get into this discussion when asked what I do. I tell them "I do mission-critical embedded systems." As one person recently commented, "I lost you at 'I do'". My simple answer is: "I do the computers you do not see, but your life depends on, like telephones, airplanes, cell phones, door knobs." For the average person, that is enough. For the techie, I think the answer is anything that uses a micro but is not, or is not intended to be used as, a general purpose computer. This handles the case of the PC as a controller. A PC as a print server is embedded under this definition. A blade matches this definition, too. Of course, there are aspects that become important on the lower end - where things live (RAM vs FLASH), real-time interrupts, etc - that matter to the techie, but those are why we get paid the big bucks. If you claim to do embedded systems, you should know about those things already. An RTOS is not the defining factor: to call WinCE an RTOS is laughable. Personally, I work on *real* embedded systems, not toys that WinCE could handle. (That was an unpaid political announcement.) For a school that teaches embedded systems, take a look at www.nmt.edu and their EE program. (Full disclosure: I went there many moons ago.) I recommend that kids minor in, or at least take, the CS systems classes (FA's, data structures, OS theory, etc) to understand the programming side. ... bandit

    • I highly recommend a couple of books by Jerry (Gerald) Weinberg: "Secrets of Consulting" and "More Secrets of Consulting". Full disclosure: I am a friend of Jerry's, and the "Sherby" in his intro, credited with the three rules that were the inspiration, is my father (Shebie is the correct spelling). Having said that, these books are very useful in giving both the novice and the experienced consultant ideas and reminders. Fundamentally, as Jack implies, consulting is a PEOPLE profession. The tech part is only 20 percent of the biz. You have to have a high threshold for chaos and uncertainty to be a successful consultant. You are the glory boy while the gig lasts, then you are an un-person so fast it can make your head spin. But it can be very rewarding in both money and satisfaction. I tried being an employee (a wage-slave) for a year and hated every minute of it. Find good professional organizations. The IEEE has consultant groups in just about every city in the US. That is a good place to start. ... bandit