Bob Snyder

sw-engineer


Bob Snyder's contributions

Comments
    • "WARNING! Pressing the trigger may cause a projectile to be discharged at high velocity from this firearm!"

    • I also think that it is easy to fool people with percentages. What does a typical news consumer think when they hear that something has increased by 200 percent? Do they think it has doubled? Do they think it has increased two hundredfold? If you offered to give a worker a temporary 10% pay decrease followed by a permanent 11% pay increase, would they take the deal?

    • I found a TI reference design that uses the TPL5111 to control a TPS22860 load switch. http://www.ti.com/tool/tida-00484 The TPL5111 is part of the same family as the TPL5110. Here's the kicker: The TPS22860 has a total leakage current of only 12 nA (typ) and 150 nA (max)! It can handle continuous loads of up to 200 mA.

    • Designing an effective UI requires talent, but where safety is concerned, it clearly involves science as well. I have seen some really awful UI designs in automotive climate controls. Some of these take far too much mental effort and also require near-perfect visual acuity. While driving toward the setting sun, some of these things are virtually unusable. I wonder if the automotive crash testing folks have ever considered developing a suite of control panel usability tests.

    • Food for thought... Cockpit of P51 Mustang: http://mycargear.com/wp-content/uploads/P-51-Mustang-Cockpit-3.jpg Cockpit of Douglas DC3: https://upload.wikimedia.org/wikipedia/commons/d/d3/N34---Douglas-DC3-Cockpit.jpg Cockpit of Boeing 767: https://upload.wikimedia.org/wikipedia/commons/2/2e/Continental_Airlines_Boeing_767-424ER_flight_deck.jpg Cockpit of Airbus A380: http://www.airbus.com/fileadmin/media_gallery/aircraft_pages_photo_galleries/a380-gallery/A380_Cockpit_2.jpg

    • A large percentage of our gray matter is devoted to processing visual and spatial information. Dedicated knobs and dials are visual/spatial elements. Items in a nested menu are somewhat spatial, but rely much more on verbal/conceptual processing abilities. I think the best user interfaces combine these elements so that more neurons are involved, thereby increasing our ability to recall. Personally, I prefer mechanical knobs located in fixed positions and labeled in my native language. That way I get visual, spatial, and verbal connections to the concept.

    • To replace the headlight bulb in a 2010 Subaru Outback, you have to remove the front tire, partially unfasten the plastic fender liner, stick your entire arm inside the fender, and perform the operation by feel. It's best to perform this process when there are no small children within earshot.

    • I was picturing a control system that would do exactly what the skipper does: periodically wake up, check the compass heading and GPS coordinates, adjust the sail angle relative to the gearbox until the ship was headed in the desired direction, and then go back to sleep.
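
      Here's a minimal C sketch of the duty-cycled loop I have in mind (the sensor, actuator, and sleep functions are placeholders, not any real API):

          #include <math.h>

          /* Placeholder hardware hooks, supplied elsewhere. */
          double read_compass_heading(void);        /* degrees, 0..360 */
          void   adjust_sail_trim(double degrees);  /* small relative correction */
          void   sleep_seconds(unsigned s);         /* low-power sleep */

          void autopilot_task(double desired_heading)
          {
              for (;;) {
                  double error = desired_heading - read_compass_heading();

                  /* Wrap into -180..+180 so we always correct the short way. */
                  if (error > 180.0)  error -= 360.0;
                  if (error < -180.0) error += 360.0;

                  /* A real version would also check the GPS position here. */
                  if (fabs(error) > 5.0)
                      adjust_sail_trim(0.2 * error);   /* crude proportional correction */

                  sleep_seconds(60);   /* wake about once a minute, like the skipper */
              }
          }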

    • I only sail a little, but I like it a lot. I wonder... Is this a pattern? Are other embedded folks also into sailing? Perhaps we need to plan a Barefoot ESC Conference aboard a Windjammer somewhere in the South Pacific! Who says nerds can't have fun?

    • "To set the course one steers the boat in that direction and orients the sail directly into the wind." How much energy would it take for an actuator to occasionally adjust the sail on the wind vane, whenever the wind direction changes, so as to keep the vessel on a constant compass heading? Presumably there is a locking mechanism that maintains the relative angle between the sail and the gearbox. So the actuator would only use power briefly, and only when the wind direction changes. It wouldn't eliminate transistors, but at least the mechanical actuator wouldn't use much energy. Would that be feasible?

    • Microcontrollers are going into all kinds of devices these days. Perhaps someday soon we will have "smart bridges" with sensors and maybe even actuators controlled by firmware. Perhaps the boundary between civil engineering and embedded software engineering will become less distinct. How might that affect the licensing issue?

    • With any development platform, the solutions to some problems are preprogrammed, so users are basically activating code that others have developed. I once supervised a robotics club where I taught high school students how to program LEGO Mindstorms robots with the goal of winning a contest. The language used was NQC (Not Quite C), a scaled-down version of C designed to run with very limited resources. The students had to deal with raw, noisy sensor values. In one example, they had to program a robot to navigate a maze having 8" high solid walls. One student decided to use a pair of ultrasonic distance sensors. He had to think about what would happen if the pings emitted from one sensor were reflected by the walls and picked up by the other sensor. The physical world is a messy place. When people write software that has to deal with the physical world, they quickly gain an appreciation of how the complexities can stack up. When the media run a story about a robotics contest, they frequently make it seem as if all of the robots are amazing and every kid is a budding Einstein. In reality, most of the robots struggle and/or fail miserably to deal with the complexities of even a simple contest. I was always more concerned about students feeling inadequate than I was about them gaining a false sense of mastery. If someone thinks programming is easy, challenge them to program an autonomous robot that can navigate a simple maze.
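
      The crosstalk problem that student hit can be tamed by never letting the two sensors ping at the same time. A rough C sketch of the idea (the sensor and delay functions are placeholders, not NQC or any real firmware API):

          long ping_sensor(int channel);   /* placeholder: fire one sensor, return distance in mm */
          void wait_ms(int ms);            /* placeholder delay */

          long left_mm, right_mm;

          /* Ping the sensors alternately so a reflection of one sensor's pulse
             can't be mistaken for the other sensor's echo. */
          void read_both_sensors(void)
          {
              left_mm  = ping_sensor(0);
              wait_ms(50);                 /* let the first ping's echoes die out */
              right_mm = ping_sensor(1);
              wait_ms(50);
          }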

    • Here are a few mementos that I inherited from my dad. I keep these around because they help me to appreciate just how far and how quickly we've progressed. The 1952 Radio Amateur's Handbook. A book entitled "MOST-OFTEN-NEEDED 1954 TELEVISION SERVICING INFORMATION". If you ever need to repair a 1400 series Hallicrafters TV, let me know. This book's got it covered. RCA Receiving Tube Manual, copyright 1961 - This may have been the TTL data book of its day. The layout is very similar. 1963 Lafayette Radio Electronics catalog #630. Page 137 lists a 67.5 volt "B" battery. Page 257 lists a "subminiature" 4-transistor hearing aid measuring only 2 1/2" long x 1 3/4" wide x 5/8" thick for $22.50. An assembled and (still) working Knight Kit KG-70 stereo tuner and a matching KG-320 stereo amplifier, complete with assembly manuals. A pair of RCA 2N581 transistors in their original, unopened packaging. Digi-Key Catalog #816, Nov-Dec 1981. Featured product inside front page was: "4116N-ND 250 nsec 16,384 x 1 Dynamic Ram" selling for $2.25/ea. Shipping cost for "foreign" orders was $5.00.

    • A related issue is that a knob serves two purposes: it allows you to SEE the current setting as well as CHANGE it, both in one place. With a menu system, you often change a setting in one place and see it displayed in another. Last week I wasted several hours tracking down a problem because I forgot that one of my scope channels was AC coupled instead of DC coupled. That would have been obvious with an older, knobby scope. But with my newer scope, it is easy to miss.

    • In The Time Machine, H.G. Wells portrays a future time when all knowledge of science and history have been forgotten. A dusty library containing volumes of information stored on disks lies in disuse. None of the Eloi people seem to have any thirst for knowledge, presumably because all of their daily needs are being provided (by the Morlocks for whom the Eloi are basically livestock). Was H.G. Wells a visionary? I sure hope not, but I worry. Humans have a lot of the same instincts as other mammals. Animals are programmed to expend only as much energy as necessary. When MCU designers first got the idea of putting a system into sleep mode to save energy when there is no work to be done, they were simply repeating a discovery that evolution had figured out long ago. Nobody ever describes sleeping lions or sleeping cattle as being lazy. It seems quite sensible that nature would have provided them with an instinct for conserving energy. And I see no reason to think that the human species is any different. Laziness is entirely natural. Of course there are probably some very curious lions and cattle in the world who constantly explore their surroundings in a quest for knowledge. But those are the exceptions. In a world where work is no longer necessary for survival, what will motivate people to get off the couch? It's nice to think that people will be self-motivated, and undoubtedly some will be. But all animals are programmed to be energy-efficient. Learning is hard work, and laziness is in our DNA. Parenting and socialization have always played essential roles in motivating people to do the hard work needed to realize their potential. But those institutions seem to be weakening at a time when we need them to be strengthening. Technology may someday provide all of our material needs, but do we really want our children and grandchildren to live like the Eloi? We have some big challenges ahead.

    • Duane, Great article. Much appreciated. It would be really great if Octopart offered a tool that would read a BOM file and generate a report giving historical availability data for each unique part number. The data for a single part might look like this: Manufacturer: Linear Technology Part No: LT3065EDD#PBF Average Stocking Levels, All Vendors, Previous 12 Months: (fictitious data) Dec 2014 - 1732 Jan 2015 - 1629 Feb 2015 - 1208 Mar 2015 - 672 Apr 2015 - 0 May 2015 - 0 Jun 2015 - 209 Jul 2015 - 1856 Aug 2015 - 3298 Sep 2015 - 10873 Oct 2015 - 8997 Nov 2015 - 6590 The past is not a perfect predictor of the future, but it would be a lot better than flying blind.

    • Suppose that you have a complex 100,000 line program running on a 32-bit MCU and you need to guarantee that several time-critical interrupt sources are always serviced in a timely fashion. One approach would be to offload the time-critical tasks to one or more 8-bit processors that have nothing else to do.

    • I agree. And consider this: In 1955, when President Eisenhower had a massive heart attack, he certainly would have received the best medical care available. But what did that include? Nitroglycerin and bed rest, and probably not much more. Today the poorest person in America can show up at an emergency room and get FAR better care than Ike did. The world is getting better in many ways, and most of the credit should go to engineers and scientists.

    • If I were in charge of the EPA, I might be asking why our test protocol isn't more robust.

    • The following is not intended to defend what VW did, but I think it may provide a bit more context for the discussion. Over the weekend I did some searching to learn more about NOx emissions. From what I can gather, Volkswagen's transgressions will have little effect on overall NOX levels in the atmosphere. According to the EPA, 74% of all nitrous oxide emissions are due to soil management, and only 5% are due to transportation. According to a 2012 article from UC Berkeley, "Since the year 1750, nitrous oxide levels have risen 20 percent – from below 270 parts per billion (ppb) to more than 320 ppb. A steep ramp-up in atmospheric nitrous oxide coincided with the green revolution that increased dramatically in the 1960s, when inexpensive, synthetic fertilizer and other developments boosted food production worldwide, feeding a burgeoning global population." Sources: http://www3.epa.gov/climatechange/ghgemissions/gases/n2o.html http://news.berkeley.edu/2012/04/02/fertilizer-use-responsible-for-increase-in-nitrous-oxide-in-atmosphere

    • The features described in the article are just a hardware implementation of the standard data normalization steps that many of us have coded up in C and assembler countless times over the years. I think it makes sense to implement these things in hardware. I suspect that these features will be popular and other MCU vendors will offer similar features in the future. And when that happens, vendor lock-in won't be an issue.

    • I needed a high-resolution real-time clock that would run for years on a coin cell. A 32768 Hz crystal provides 30 uSec resolution, but none of the commercially-available RTC chips expose the raw tick count. I didn't want to settle for 0.1 sec resolution, so I decided to design my own RTC using an 8-bit chip. After experimenting with various crystals and load capacitors, I was able to get the operating current down to about 800nA at 25C. Of course not every product needs to worry about nanoamperes. But for those that do, 8-bit processors still look pretty attractive.

    • The big, unsolved mystery is consciousness (self-awareness). We're not even sure about what it is, much less how to create it. http://www.ted.com/talks/david_chalmers_how_do_you_explain_consciousness?language=en

    • "Where I work, we're using 32 bit processors but all the time critical processing is done in the FPGAs." Using a 32-bit processor for non-real-time tasks in conjunction with an FPGA for the time-critical tasks would clearly have some big advantages. The FPGA would not only provide determinism, but also massive parallelism. Some of the smaller PIC chips have what Microchip calls a "puddle of gates", which is like an FPGA with only one logic cell. I suspect that a single MCU design is probably the most cost-effective approach for most real-time systems. The 2014 Embedded Survey results indicate that 61% of embedded projects require real-time capability. 50% of all respondents said that their current project has a single processor, and 27% said two processors. Only 16% said that their multiprocessor systems included an FPGA. When asked why a customizable chip was NOT used, 62% said that they didn't need that functionality and 28% said that FPGAs were too expensive for their application. It appears that a lot of MCU-based systems are performing real-time tasks, and a lot of people are moving from 8/16-bit MCUs to 32-bit MCUs which are often cache-based. Nobody wants to disable cache in order to meet real-time requirements. Therefore it seems to me that it would make sense for 32-bit MCU manufacturers to incorporate just enough FPGA fabric to allow the time-critical logic to be handled without the CPU, which is basically your approach, but with the FPGA fabric located inside the MCU. Perhaps one day "Number of Logic Cells" will be listed right alongside "Flash" and "SRAM" in the device selection tables. Then we can select anything from "puddle of gates" to a "sea of gates", depending upon the application, while still using only a single device.

    • A large number of the survey questions ask about "my current project". That could be problematic, because some types of projects may require more time than others. For example, a project that incorporates three processors is likely to take much longer than a project that uses only one processor. Suppose that every developer surveyed completed three single-processor systems that each took two months, and also programmed a single three-processor system that took six months, and everyone started their projects on randomly-chosen dates. If everyone took the survey on the same day, about half would say that their current project is multiprocessor, and half would say single processor. So a 3:1 ratio would appear as a 1:1 ratio. 8-bit and 16-bit projects typically have fewer features and are less likely to have a graphical user interface, so these projects are probably completed faster, on average, than projects using 32-bit processors. In general, any configuration that tends to lengthen the project duration would tend to have an increased number of responses when the question asks about "my current project", simply because any project using that configuration has a greater chance of being the current project at the time the survey is completed.

    • You wrote that "The 8-bit processor is not dead, but it is slowly dying. It has declined from 13% to 9% of projects since 2010, and 16-bit processors have also declined in use. The slack is being taken up by 32-bit processors..." In previous years, the survey asked whether the MAIN processor was 8, 16, or 32 bits. Increased use of 32-bit MCUs as the MAIN processor does not necessarily mean that 8- and 16-bit MCUs are being used less. I'm not as familiar with other vendors, but Microchip's quarterly financial reports have consistently been reporting strong growth in sales of both 8- and 16-bit MCUs.

    • I've gained the impression, in recent years, that the IEEE is strongly opposed to H1-B visas. Today I did a web search on "IEEE H1-B" and the results appeared to confirm that impression. I find their reasoning plausible, but not persuasive. It might be interesting to re-read some of their recent position papers in light of the newly-available data. For example, a female engineer representing the IEEE testified to Congress that increasing H1-B visas would reduce the percentage of female engineers in the US workforce because H1-B workers are predominantly male. http://qz.com/64334/a-female-engineer-told-congress-today-that-h1-b-visas-are-sexist-and-should-be-replaced-with-green-cards/ But is it true that H1-B workers are predominantly male? The engineer said "...I don’t like making decisions without hard data. IEEE-USA has been trying for months to get the actual data from DHS. It’s a simple question: how many women get H-1B visas?". Apparently the IEEE's past positions were based upon anecdotal evidence more than hard data. It will be interesting to read what they say in the weeks and months ahead, now that the data is available. IEEE-Spectrum published an article about the new data analysis tool on Mar 28: http://spectrum.ieee.org/view-from-the-valley/at-work/tech-careers/a-tool-for-analyzing-h1b-visa-applications-reveals-tech-salary-secrets Some of the comments below that article mention errors in the dataset. Hopefully the errors are rare enough that the dataset as a whole remains credible.

    • Well, someone might have reconfigured the system when you weren't looking! :-) My intent was mainly to point out that it was the system that failed, and the controller is just one part of the system.

    • "How can traffic lights fail all green?" The controller is just one part of the system. The wiring is another. Let's not rule out the possibility that the red and green lamp wiring was reversed by the installer. Installer: "After I installed the signals, I noticed that the amber lights were coming on AFTER red instead of BEFORE red, so naturally I assumed that the controller was defective and submitted a bug report. Now the system has failed to all green, so that makes TWO bugs in the controller!".

    • "Software can keep on changing - even after the system has shipped. That forces people to think differently." Each time a product is designed, someone must decide whether the firmware should be field-updatable. This capability comes with many costs. It may result in less effort spent getting the firmware right the first time. Distributing firmware updates may be costly, especially in terms of technical support, and it may provide another avenue for a competitor to reverse-engineer the firmware. Providing the capability to update the firmware may also provide an avenue for a hacker to do something malicious that might compromise safety. All of these costs and risks depend on the product. For complex, low-volume products, field-updatability is essential, and simple, mass-market products, it is clearly overkill. But for many products, the designer has a choice. In these cases, I am tempted to say that it is better to omit field-updatability and put the time and effort saved into getting the firmware right the first time. Of course every case is different. I just think this decision merits careful consideration of the costs and benefits of providing and supporting field updatability instead of automatically assuming that it's always best to provide it. Sometimes less is more.

    • "We know, for a fact, that the average IT software team doesn’t remove 15% of the bugs they inject prior to shipping." Jack, I suspect that you were using the word "inject" sarcastically, but a literal interpretation might also make sense. If we want to measure the attenuation of a low-pass filter, we can inject known high-frequency noise into a signal and then measure how much of that noise was removed. Similarly, a separate group of developers could inject a known set of bugs into a code base and then pass the code base through the formal inspection and static analysis processes in order to measure their effectiveness at removing this type of "noise". One never knows how many naturally-occurring bugs exist in a code base, so it is difficult to quantify the percentage that any process removes. By artificially injecting a known set of additional bugs, and then measuring the percentage removed, the effectiveness of the process (and people performing it) can be more accurately quantified.

    • "What BLE profile would suit a ECG signal best and is there any example code for ADC in to that profile?" This question would probably not be shocking if it were posted on an internal message board viewed only by employees of the same engineering group. In the old days, engineers had limited options for interacting with other engineers outside their group. The internet has made it possible to obtain guidance from a much larger group. That's generally a good thing, as long as the engineer takes responsibility for deeply understanding the reference designs. But is it really okay for an engineer to trust internally-supplied reference designs more than externally-supplied designs? Shouldn't the engineer exercise the same degree of caution when re-using code that was developed by the person in the next cubicle? We all use components that we trust to reliably behave in accordance with specifications published in datasheets. Part of the problem with software components is that there is generally nothing equivalent to a datasheet. A datasheet represents a commitment on the part of the supplier. The lack of datasheets for software components would seem to represent a lack of commitment and accountability.

    • Oh, and did you notice that manufacturing jobs were rising along with manufacturing output up until about 1980? It wasn't until after 1980 that the jobs started falling off. That's about the same time when microcontrollers came into widespread use. When I hear politicians talking about preparing young people for the high tech jobs of the 21st century, I am reminded of the story of John Henry, the railroad worker who tried to prove that a man could drive spikes as fast as a machine. If humans have to out-think machines in order to keep their jobs, eventually the humans will be burning out their neural circuitry just trying to keep up. I don't know how to solve this problem, but I wish politicians would stop blaming each other and start thinking about how people will find meaningful employment in a world where machines can increasingly outperform humans at mental tasks as well as physical tasks.

    • I completely agree about deteriorating infrastructure. But I am not sure what to think about US manufacturing capabilities. The data I have found just from occasional internet searches seems to indicate that manufacturing output has been rising steadily for many decades. But as productivity has increased, the demand for manufacturing workers has steadily fallen. The graph on this web page illustrates these trends: http://www.theequitykicker.com/2013/12/31/robots-artificial-intelligence-replacing-jobs/

    • If you were hiring someone to manage a company that you owned, would it be a problem if they had 30 or 40 years of experience? I served a year on my local school board, filling in for someone who had quit. It was a humbling experience. There were many times that I cast votes on issues that I didn't understand very well. Some of the other school board members were annoyed because I was constantly asking questions, which made the meetings run longer. I think it would take several years to become a really effective school board member. I suspect it would take quite a bit longer to become a really effective congress person.

    • "The analyzer was designed to compete with Keysight’s N2820A current probe, a $4000 probe"

    • If you search the web for images of an Airbus A380 cockpit, you may be surprised to see that instead of having a yoke mounted in front of each pilot, there is a joystick located next to the outer leg of each pilot. That means the pilot is using her left hand, while the copilot is using his right hand. A good friend of mine is a commercial pilot. When I asked him about this, he said that it is no problem to switch hands. But I remain skeptical. I once spent an entire day trying to operate my mouse with my left hand, and gave up in frustration.

    • Thanks for providing those helpful insights. I am interested in learning more about the manufacturing process. My goal is to understand how MCU families and subfamilies are related at the manufacturing level. When a new subfamily is released to the market, it appears that there is typically a corresponding data sheet, errata sheet, and a flash programming specification. I have often wondered how those things relate to mask sets, variants, reusable modules, silicon revisions, and other design elements of which I have only an elementary grasp. I would be very grateful if you could recommend a textbook that would help me to better understand how the design, manufacturing, and testing processes eventually result in the various documents provided to customers.

    • I would very much like to see an article or a video in which a representative of a chip manufacturer explains how MCUs are characterized (e.g. what is the definition of Max) and how they are tested during manufacturing. For example, how many parameters are tested during manufacturing? I am most familiar with Microchip MCUs. On their datasheets, some parameters have footnotes saying "This parameter is characterized, but is not tested in manufacturing.". Is it safe to assume that all other parameters ARE tested in manufacturing? If so, what percentage of parts are tested, and how does this relate to the meaning of a "Max" value? Please consider inviting a chip manufacturer to contribute an article or video addressing this issue. Thanks.

    • Automotive AEC-Q100 parts are apparently treated differently. A document on NXP's website states the following: "Six sigma design philosophy is applied to all Q100 devices. This ensures that an end user application designed to the datasheet limits can tolerate a shift as high as one and a half sigma in NXP’s manufacturing processes. As the process control limits are much tighter than one and a half sigma, this virtually guarantees trouble free end user applications. During electrical test process, average test limits or statistical test limits are applied to screen outliers within automotive lots. Figure 1 shows the distribution of devices passing a test and the calculated statistical test limits in red. Although the outliers are within the upper and lower specification limits they are not delivered as Q100 products." Source: http://www.nxp.com/documents/brochure/75017356.pdf

    • "It's actually a really hard thing to determine mean and std deviation, and if they change, people will sue you." It is my understanding that the goal in IC manufacturing is to use Statistical Process Control (SPC) to continuously monitor key parameters of manufactured parts so that any deviations from the expected values can be quickly detected and corrected. This feedback loop results in a well-controlled, stable manufacturing process that consistently produces parts whose parameters fall within the desired limits. Continuously testing hundreds or thousands of MCU parameters would be a daunting task for a human, but it is my understanding that automated testing technology exists which makes it practical to continuously monitor a large number of key parameters. If the design parameters have been well characterized, and the manufacturing process is well controlled, then the mean and standard deviation should not be drifting. If they are drifting, and the manufacturer is not aware of it, then there is something wrong with the manufacturing process. As customers, I think we have a right to expect quality components resulting from well designed and well controlled manufacturing processes. Manufacturers who "publish as little as possible" do not inspire confidence.

    • Freescale's clear definition of the Max value is helpful, but wouldn't it be great if everyone just published the Mean and Standard Deviation? That would allow us to derive our own Max values based upon requirements (e.g. Mean +2 sigma, Mean +3 sigma, Mean +4 sigma, etc.) The engineers who perform the testing surely know the sample Mean and StDev. But the traditional way of representing that information in a datasheet seems to result in a loss of useful information.

    • As long as we're talking about hacking, what about Mom's garage door opener or her residential water supply? I think it's really cool that my son can monitor and control his new garage door opener while he is at work by communicating with the device using a smartphone. But how can he be assured that criminals won't find a way to open the door using a "smart app" developed by a hacker? I would love to have the ability to monitor my water meter remotely. A friend of mine recently got a huge water bill for an unoccupied house that he owns. That's how he discovered that a water pipe had burst and had been flooding the kitchen for weeks. If only he had known sooner, he might have avoided the need to replace the entire floor and much of the drywall. All of this connectivity increases the "surface area" that is available to hackers. If these types of attacks ever become commonplace, there may be a backlash in the form of a demand for devices that are NOT connected. And that would be a shame, because the advantages of connectivity are real. I recently discovered a KickStarter project called "Water Hero" that sends you a message if it detects an unusual flow volume in your residential water system. It includes a shutoff motor that allows you to remotely shut off the water to the entire house using a smart phone app. And therein lies the vulnerability. It's no longer just Mom's computer data that is at risk. Now we have to think about the possibility of someone gaining unauthorized access to the same interfaces that allow us to conveniently monitor and control many of the devices in Mom's home for her safety and comfort. I think this issue is going to make the jobs of embedded developers even more challenging.

    • Generating C code is less than ideal for real-time systems, because there is no way to predict the execution time of a given statement. For real-time systems, it would be far better to generate assembly code where the worst-case execution time is known. If we are modeling the ENTIRE system, then there would typically be no need to modify the generated code. This is already happening with Mechanical CAD/CAM where the instructions for milling out a physical part are generated from a 3D solid model. The big problem with embedded systems is that we haven't yet figured out how to build a complete model of an entire system. I feel confident that we will eventually get there, and when we do, there will no longer be a need to generate C code. The best systems will generate machine code directly from a System Model.

    • This article reminded me of a funny experience. I once worked in a research lab where there was a grad student whose last name was Milliron. Any non-scientist would realize that this name was a contraction of Mill and Iron and pronounce it in that way. But to the scientists who worked at the lab, this poor student was just 0.001 of a Research Octane Number, because they always pronounced her name as "milli" + "RON".

    • Correction. It lists the 1702. The 1701 was a typo on my part.

    • "The 1702 EPROM, the most common non-volatile memory in those days, could store a whopping 256 bytes. Alas, I can neither remember nor find the price of that part." Digi-Key catalog number 816 (Nov-Dec 1981) lists the 1701 on page 7. It is described as a "2048 Bit MOS EPROM (256x8), 24 Dip" with a price of $5.48. For 33 years I have wondered why I am keeping that yellowed, old catalog. Today I just discovered the reason. (LOL)

    • If you have the privilege of being the parent or grandparent of a pre-teen boy or girl, set aside time on a regular basis to provide opportunities for them to explore their interests. Put a piece of wood in that basement vise and let them cut it with a coping saw. Give them some batteries, lamps, and DC motors and then let them experiment without any rules or agenda. If they enjoy that, then give them an old printer or scanner and let them take it apart and harvest some parts or subassemblies. Then help them to understand what those parts do. Did they harvest the long, thin fluorescent lamp and driver board from the scanner? Show them how to connect it to some batteries and watch them grin when it lights up. If you can light a spark in a young mind, it may grow into a roaring flame. But do it while they're young, because once the hormones kick in, it's probably too late. “If you want to build a ship, don’t drum up the men to gather wood, divide the work, and give orders. Instead, teach them to yearn for the vast and endless sea.” Antoine de Saint-Exupéry

    • "Teaching most specific skills is pointless. By the time the professor has learned them, they're probably obsolete." Agreed. I like to think of it as a three-level hierarchy: 1. Science (e.g. Physics, Calculus, Sorting Algorithms) 2. Technology (e.g. MOSFETs, DRAM, The C language specification) 3. Products (e.g. Atmel ATmega 328P MCU, MPLAB C Compiler v. 2.1) The half-life of scientific knowledge is very great. The half-life of product knowledge is very short. But when employers post job openings, the Requirements are often little more than a list of the Products being used by the employer. If I were hiring engineers, I would be looking for people who could demonstrate knowledge of Science and Technology. Without it, a person could waste a lot of time using a Product ineffectively.

    • "Therefore you need a proper backup system. One that has a single point of failure is not exactly sound." I beg to differ. The original document constitutes a single point of failure. A backup provides redundancy. Multiple backups provide greater redundancy. But until the first backup is made, there is no escaping the fact that there is a single point of failure, namely the laptop and its hard drive. If Jack's laptop had been stolen or dropped down a flight of stairs, the outcome would have been the same. Whenever any file is backed up for the first time, some piece of technology is going to have access to that file. That piece of technology can be faulty or it can do something unexpected. Regardless of how many backup systems are used, one of them has to go first. If the first one obliterates the original file, then it's toast. I suppose you could install a RAID controller and dual hard drives in a laptop. But if the laptop is stolen, it won't help. Possession of the laptop is a single point of failure.

    • The electronics vendor SparkFun has recently created an online place for people to stream data from their IoT projects. It's located at data.sparkfun.com. If you click the Explore button, you can see the kinds of data that people are collecting. This is apparently intended for personal, experimental use. It would probably be a great way for high school students to monitor from home the parameters of an experiment that is running in the classroom. What it lacks, at the moment, is any way to put the data in context. Someone in Boulder CO created a data stream named "Jim's Office Conditions" which includes not only humidity and light level, but also methane level. I can only think of one way that Jim's methane level would increase, and I can't understand why Jim would want to share that information!

    • Here's what I used to tell the high school students who participated in our local robotics club: You can be a scientist, an engineer, or a technician. The job of a scientist is to discover new things about the universe. Scientists work on projects that often take years or decades, and have no guarantee of success. Jobs for scientists are relatively scarce. The job of an engineer is to learn what scientists have discovered and use that knowledge to invent technologies that solve problems for people. Engineers work on problems that typically take months or years. Engineering jobs are far more numerous than science jobs. The job of a technician is to learn what technologies have been created by engineers and to recommend, install, configure, and maintain these solutions at the point of use. Technicians work on projects that typically take days or weeks. There are far more jobs for technicians than for engineers. The categories are not fixed and there is a lot of overlap. An automotive engineer might be responsible for troubleshooting problems in order to keep an assembly line running. They are basically a technician, but their job requires a deep understanding of theoretical concepts. My advice to a young person is to figure out where you want to be on the continuum. Do you want to work on lots of short-term projects that take days or weeks, do you want to engage in research that may take years or decades, with no guarantee of success, or do you want to be somewhere in between? Most dentists and doctors are technicians. Most do not conduct research or engineer new solutions to problems. But their jobs require a lot of knowledge and skill, so it takes many years of education and internship. Dentists, physicians, and other technicians who have mastered their crafts have my respect and admiration. I have seen some pretty amazing concrete work over the years. Just imagine what it takes to build the forms for a freestanding concrete spiral staircase.

    • At a specific time and within a specific engineering group, there is a de-facto working definition for what constitutes a bug and what constitutes a line of code. The definition of what constitutes a bug or how logic is factored into lines of code can change over time, but usually changes gradually. If a group wants to measure the short-term effect of a process improvement, I think that bugs per line of code could provide meaningful insight as long as the definitions of "bug" and "LOC" remain relatively stable during the evaluation period. It's analogous to measuring an AC signal using a sensor that has a drifting DC baseline. Over short time periods, the measurements are valid. But one has to be careful when comparing measurements from different sensors and/or widely separated time periods because the baselines can be very different.

    • "To summarize, the problem I see is that new generations are not being taught to be curious and know before use, they are being trained to use and do it quick." A few years ago, I became acquainted with an E.E. graduate student at a local university. He was employed by the university as a graduate assistant. He told me that when E.E. undergrads came to him for assistance with homework, many of them expected him to work the problems to completion, and they were miffed if he only gave them hints and explained broad concepts.

    • "The problem I see is that many embedded people or so-called developers/engineers don't care to be real Gandalfs" A few years ago, I became acquainted with an E.E. graduate student at a local university. He was employed by the university as a graduate assistant. He told me that when E.E. undergrads came to him for assistance with homework, many of them expected him to work the problems to completion, and they were miffed if he only gave them hints and explained broad concepts.

    • "I mean, who has the time to get into the nitty gritty of everything?" Exactly! Remember when Jack doubted the MCU manufacturers' claims of 10-year coin cell life, so he constructed a test apparatus and designed a test protocol to validate those claims? This undoubtedly took a LOT of time. Now imagine trying to validate NASA's claims of sub-millimeter accuracy in determine global mean sea level. Who has that kind of time? Did you know that the sea surface is depressed under areas of higher atmospheric pressure, or that the amplitude of the lunar tides peaks every 18.61 years when the moon and sun are most closely aligned? All of these things have to be accounted for when processing the raw satellite altimeter data. The NASA engineers and scientists are dealing with an incredibly complex system with a lot of moving parts. Knowing that even the best embedded systems contain bugs, I can't help but wonder about the overall accuracy of the system used to determine sea level. Can they really compute global mean sea level with sub-millimeter accuracy? I can't think of any way to test NASA's claims, so I guess it comes down to a question of trust. Sometimes (e.g. Challenger disaster) the engineers are pointing to problems but the administrators, who are constantly under budget pressure, are painting a rosy picture. In general, there seems to be an inverse relationship between complexity and public trust. People intuitively sense that complex systems are more prone to errors and/or failures. Perhaps, as an embedded engineer, I have a heightened sense of skepticism because I know from experience just how difficult it is to validate complex systems. Who knows the limitations and weaknesses of Gandalf better than another Gandalf?

    • I think the same thing is happening in Science. A few years ago, I wanted to understand how sea level is measured. Since then I have probably read hundreds of scholarly articles and scientific web pages, and I am still discovering new dimensions to the problem. Many of the factors that affect sea level measurement were initially unknown to me, despite having taken a Geoscience elective in college. I had never heard of the solid earth tide, caused by lunar gravity, which can cause the land in the middle of continents to rise and subside by as much as 30 cm every 12 hours. I had never considered the fact that the earth's gravity field is not uniform, but exerts varying amounts of acceleration upon a satellite as it orbits the planet. Nor had I considered the fact that a satellite is affected by varying amounts of lunar and solar gravity throughout its orbit, as well as the solar wind, which is variable and turbulent. The scientists and engineers who build, calibrate, and operate the systems that measure sea level have to construct accurate and sophisticated models of the external factors that affect the various types of instrumentation such as tide gauges and, in recent years, satellite-based altimeters. The newest and most accurate satellites, Jason-1 and Jason-2, orbit at a mean altitude of 1336 km (1.336 billion mm) and measure sea surface height using RADAR. Detecting a one millimeter change in mean sea level would require an accuracy better than one part per billion. It seems to me that clock jitter alone could introduce that much uncertainty into the RADAR time-of-flight measurement, so I am more than a little bit skeptical. However I have to admit that I am just scratching the surface, and I can't afford the time to dig much deeper. If you press the Mean Sea Level Change button on the NASA remote control, the display will read +2.28mm/yr. If you open up that remote and try to understand how it works, prepare to meet Gandalf.

    • I bought a Cobra torch about ten years ago. My son used it to do a body-off restoration of a badly rusted '72 Super Beetle. It's great for welding sheet metal and making repairs to steel gardening tools. When my steel lawn mower deck developed a crack, the Cobra fixed it in a jiffy. I love the pinpoint precision. Conventional oxy-acetylene torches have the feel of a fat magic marker. The Cobra feels like a fine-point pen. I have read that these torches are popular with experimental aircraft builders for welding thin-wall tubing. http://www.cobratorches.com/ (I have no affiliation with the company.)

    • If the MCU has an unused analog comparator and an adjustable voltage reference, they could be used to generate an interrupt whenever the supply voltage drops below a certain threshold. The ISR could set a flag which the main program could check after writing to Flash or EEPROM. When the battery is depleted to the point where low voltage events begin to occur, the system could notify the end user with some type of low voltage indicator. 1. Clear the low voltage flag. 2. Write to Flash or EEPROM. 3. Is the low voltage flag set? If so, notify the end user.
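
      Here's a minimal C sketch of that three-step sequence (the comparator, EEPROM, and indicator calls are placeholders I'm assuming for illustration, not any vendor's actual API):

          #include <stdbool.h>

          void clear_comparator_interrupt(void);                 /* placeholder, device-specific */
          void write_to_eeprom(const void *data, unsigned len);  /* placeholder */
          void show_low_battery_indicator(void);                 /* placeholder */

          static volatile bool low_voltage_seen = false;

          /* Comparator ISR: fires when the supply sags below the programmed reference. */
          void comparator_isr(void)
          {
              low_voltage_seen = true;
              clear_comparator_interrupt();
          }

          bool write_record(const void *data, unsigned len)
          {
              low_voltage_seen = false;        /* 1. clear the flag              */
              write_to_eeprom(data, len);      /* 2. do the Flash/EEPROM write   */
              if (low_voltage_seen) {          /* 3. did the supply sag?         */
                  show_low_battery_indicator();
                  return false;                /* the data may not have latched  */
              }
              return true;
          }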

    • It is interesting that you used a food product label as an example of what you'd like to see. Not so many years ago, food manufacturers provided very little information on package labels. I am not a big fan of government regulations, but the food labeling requirement provides consumers with useful information in a way that maintains a more-or-less level playing field for businesses. I'm not sure whether something similar could work for component manufacturers. It would probably have to be crafted in a manner that avoids getting into the particular specs of particular component types, because there are just too many of them. My gut feeling is that regulation is not the way to go here. A better solution, in my opinion, would be for market forces to create an impetus for manufacturers to publish more complete specs for their components. Companies like DigiKey and Mouser may unwittingly be helping to facilitate this by allowing customers to easily restrict a component search based upon multiple criteria. As these component search engines grow to incorporate more parameters, any component lacking parameters is more likely to be excluded from a customer's search results. If manufacturers figure this out, it might provide the necessary incentive to start providing those specs. If there ever comes a time when suppliers like DigiKey and Mouser start telling manufacturers that their components are being excluded from searches due to missing criteria, then maybe things will start to change.

    • Me, I want a donut! The Min and Max values are a reflection of the manufacturing tolerances. If a company is using SPC (statistical process control) and continually measuring these parameters for a sufficient number of parts, then they should be able to supply hard numbers for the Mean and Standard Deviation of any measured parameter. If they're not publishing those numbers, I can't help wondering whether they've got something to hide. I am not sure whether a Min or Max value is an absolute "guarantee". From a manufacturing standpoint, the Min and Max values might represent the endpoints of a 99% confidence interval, meaning that 1% of the parts might be below the min or above the max. A really great datasheet would not only list the Test Conditions, but also describe how the Min and Max values were calculated. For a medical device, where 100% of the parts are thoroughly inspected, the Min and Max could be absolutely guaranteed. But for ordinary electronic components, I suspect that a small percentage of parts can be expected to lie outside the Min/Max range.

    • In addition to the UL safety guidelines, it is also important to be aware of restrictions on the shipment of products containing lithium batteries. Knowledge of these restrictions might cause a designer to prefer one battery type (e.g. coin cells) over another. This UPS page is a good starting point: http://www.ups.com/content/us/en/resources/ship/packaging/guidelines/batteries.html

    • There's a good tutorial about this on YouTube: http://www.youtube.com/watch?v=IrB-FPcv1Dc They are using a P-FET to block reverse current on the high side. If you search for MOSFETs on DigiKey's website, there is a column called "FET Feature" which has numerous drive level options such as "Logic Level Gate, 0.9V Drive", "Logic Level Gate, 1.2V Drive", etc. This makes it easier to find MOSFETs with low gate threshold voltages. I found a couple of 1.8V drive N-FETs with gate-body leakage currents of 100nA, which would seem suitable.

    • If the MCU is powered by the coin cell at the time the ADC is sensing the cell's voltage, it might be necessary to scale the measured voltage using a voltage divider so that it's in the range the ADC can read. The voltage divider can also serve as the temporary load. If a digital output pin is available, an N-channel MOSFET could be used as a low-side switch to disconnect the lower leg of the voltage divider from ground. A Fairchild BSS123 might do the trick. It has a gate threshold of 1.7V typ (2.0V max), and a zero gate voltage drain current of only 10 nA. I will be experimenting with this in the near future.
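
      The measurement sequence I have in mind, sketched in C (the pin number, divider values, 2.048 V reference, and I/O calls are all assumptions for illustration):

          #include <stdint.h>

          void     gpio_set(int pin, int level);   /* placeholder */
          uint16_t adc_read(int channel);          /* placeholder, 10-bit result */
          void     delay_ms(unsigned ms);          /* placeholder */

          #define DIVIDER_ENABLE_PIN  3       /* drives the BSS123 gate (low-side switch) */
          #define VBATT_ADC_CHANNEL   0
          #define VREF_MV             2048    /* assumed internal ADC reference */

          /* Assumes a 1M/1M divider that halves the cell voltage so it stays below Vref. */
          uint16_t measure_coin_cell_mv(void)
          {
              gpio_set(DIVIDER_ENABLE_PIN, 1);    /* connect the divider; it doubles as the load */
              delay_ms(2);                        /* let the node settle */
              uint16_t raw = adc_read(VBATT_ADC_CHANNEL);
              gpio_set(DIVIDER_ENABLE_PIN, 0);    /* disconnect so it doesn't drain the cell */

              uint32_t mid_mv = (uint32_t)raw * VREF_MV / 1023;  /* divider midpoint voltage */
              return (uint16_t)(2 * mid_mv);                     /* undo the 2:1 divider */
          }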

    • I just found this article, written in 2011, which presents results of CR2032 battery tests using a pulsed load pattern: m.eet.com/media/1121454/c0924post.pdf

    • The graphs provided by battery manufacturers typically show a battery's expected lifespan when powering a CONSTANT resistance or a load that draws a CONSTANT current. Suppose that one circuit draws 30 mA continuously for 60 seconds, while a second circuit draws 30 mA for one second each minute over a one-hour period. Both circuits draw the same total charge from the battery and consume the same amount of energy. But do both circuits have the same effect on battery chemistry and battery life? I would like to gain a better understanding of the chemical processes happening in the anode, cathode, and electrolyte under high current conditions. Does the internal resistance increase because of an internal temperature rise? If so, then brief periods of high current (i.e. running fast) might have a much smaller effect on internal resistance than a sustained period of high current. Jack, how about a test protocol that pulses the load on and off? Each battery could be subjected to a different pulse frequency, duty cycle, and load current. If the MCU vendors are right about running fast to maximize battery life, then I would expect duty cycle and load current to be inversely proportional, regardless of pulse frequency. This would imply that a battery subjected to a 30 mA load at a 0.1% duty cycle would last as long as a battery subjected to a 30 uA load continuously. If true, it would be good news and a handy rule of thumb.
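
      Just to make the average-current bookkeeping explicit (this is exactly the assumption the pulsed-load tests would be checking):

          average current  = 30 mA x 0.1% duty cycle = 30 uA
          charge per year  = 30 uA x 8760 h ≈ 263 mAh   (identical for a continuous 30 uA load)

      The chemistry question is whether the battery actually delivers that charge as well under 30 mA pulses as it does under a steady 30 uA drain.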

    • Consider how an electronic camera flash works. The circuit first charges up a capacitor with energy from a battery. The charging process typically takes a few seconds. Then the energy is dumped into the xenon flash tube to produce a flash which lasts a few milliseconds. Suppose the battery current is 100 mA for a 3 second charging period. In order to extract the same amount of energy directly from the battery, without a storage capacitor, we would need to draw 100 amps for 3 milliseconds.

    • I was thinking of a situation where the MCU is in sleep mode with nothing but a real-time clock running, but then once per week it needs to transmit a data packet wirelessly. If that 3 second transmission needs 30 mA, then it might make sense to gradually charge a cap at 3 mA for 30 seconds using PWM with a 10% duty cycle. That way the battery only sees a 3 mA load. And since the cap is disconnected during the week, the cap is not draining power from the battery.
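
      The charge bookkeeping works out, ignoring leakage and converter losses:

          charge needed by the radio:    30 mA x 3 s  = 90 mC
          charge delivered to the cap:    3 mA x 30 s = 90 mC

      In practice the cap would need to be somewhat oversized to cover its own leakage and the voltage droop during the 3-second transmission.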

    • Correction - the MOSFET would obviously need to be in series with the capacitor, not the battery, so the MCU would still have some power during sleep mode. This is what happens when people post comments at 1 am. :-)

    • In certain applications, it might be possible to put a MOSFET in series with the coin cell so that it is almost completely isolated from any capacitors or loads when the system is idle. I have seen MOSFETS with Drain-Source leakage currents as low as 100nA. The external event that wakes the MCU would first need to turn on the MOSFET so as to charge up the capacitor, and then when the capacitor voltage reaches a threshold, the MCU could come out of sleep mode. This would effectively increase the wake-up time of the MCU, but only by a handful of milliseconds. There is a long-running debate about whether it is better to wake up and run at full speed for a short time, or run at reduced speed for a longer time. If a slower system clock rate is an option, then the maximum current requirement could be reduced. This won't reduce the total energy consumed, but by reducing the current it might (a) deplete the battery more slowly, and (b) allow the system to work with a battery that is further depleted. If the system can work effectively using a 32768 Hz watch crystal, the current could be drastically reduced.

    • Not sure why my comments didn't stick. Just wanted to mention that Energizer L91 (AA) and L92 (AAA) Ultimate Lithium cells have a 20-year shelf life, according to the datasheets. Of course, they're not coin cells, but 20 years is achievable if you have room for a AAA cell.

    • I have been thinking about a protocol for testing the specific type of battery I am planning to use to power an RTC circuit. Suppose I purchase a dozen identical batteries and connect each one to a different resistance. For example, 100 ohms, 200 ohms, 400 ohms, 800 ohms, etc. Suppose a battery lasts 10 hours with a 100 ohm load, 25 hours with a 200 ohm load, 60 hours with a 400 ohm load, and 150 hours with an 800 ohm load. At this point I should be able to fit a curve to the data that would allow me to (crudely) extrapolate out to ten years. In the weeks and months ahead, as additional batteries expire, yielding more data points, I should be able to improve the curve fitting and get a better estimate of how much current the battery could deliver for a ten year period. The idea is to get some useful data quickly, in order to know if this battery type is even in the right ballpark. If the early data look promising, then I can continue the experiment to improve the accuracy.
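
      One concrete way to do that curve fit: assume a Peukert-style power law, lifetime t ≈ a * R^b, and fit a and b by least squares on log(t) versus log(R). A rough C sketch using the example numbers above (the power-law form is my assumption; another model may fit the real data better):

          #include <math.h>
          #include <stdio.h>

          int main(void)
          {
              double R[] = { 100, 200, 400, 800 };   /* ohms            */
              double t[] = {  10,  25,  60, 150 };   /* observed hours  */
              int n = 4;
              double sx = 0, sy = 0, sxx = 0, sxy = 0;

              for (int i = 0; i < n; i++) {          /* linear fit in log-log space */
                  double x = log(R[i]), y = log(t[i]);
                  sx += x; sy += y; sxx += x * x; sxy += x * y;
              }
              double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
              double a = exp((sy - b * sx) / n);

              /* Extrapolate: what load resistance should last ten years (87,600 hours)? */
              double R_10yr = pow(87600.0 / a, 1.0 / b);
              printf("t = %.3g * R^%.2f; a ten-year load is roughly %.0f ohms\n",
                     a, b, R_10yr);
              return 0;
          }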

    • David... for what it's worth, I found a TI white paper (Google "SWRA349") which states that "adding a capacitor in parallel with a CR2032 coin cell is the most effective choice a designer can make to maximize battery capacity utilization in low power RF applications (more than 40% improvement with poor quality CR2032s). The test results also show that using 30mA peak current versus 15mA peak current only slightly reduce the effective capacity of a CR2032 (9% on average depending on vendor). These observations are valid across all six coin cell vendors tested, and implies that minimizing average current is the key to achieving long battery life with CR2032s."

    • The datasheet lists "T1OSC Current" at 1.8V as 0.65uA typ, 4.0uA at +85C, and 7.0uA at +125C. It also says that the T1OSC figure includes the power-down base current (Ipd). Most of these data loggers will spend most of their time on the workbench, with rare excursions into the field. But when used in the field, they need to be as rugged, reliable, small, and lightweight as possible. And some of them will be installed on DIN rails in industrial enclosures where temps may be consistently high. When in use, the data logger, including the RTC, is externally powered. If the coin cell dies, the data logger loses the ability to remember the date and time from one session to the next. This means having to reset the clock manually at the beginning of each data collection session if meaningful timestamps are desired. Designing for the worst-case scenario would mean making the device larger and heavier. Using the 0.65uA (typ) figure, 190 mAh would theoretically last 33 yrs. Derating that by a factor of three would provide 11 years, assuming the device spends most of its life at room temp. That seemed like a safe bet a few months ago when I designed this circuit, but your article got me thinking, and it now appears that I was making some rather optimistic assumptions.
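
      Spelling out that arithmetic:

          190 mAh / 0.65 uA = 190,000 uAh / 0.65 uA ≈ 292,000 hours ≈ 33 years
          derated by 3x for temperature, self-discharge, and optimistic specs ≈ 11 years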

    • Correction: The PIC16LF1823 doesn't have an RTC. I am programming it to work as an RTC by clocking the 16-bit Timer1 at 32kHz. The timer uses only 650nA (typ) and 4uA (max). Every 2 seconds, when it overflows, the chip wakes up just long enough to increment a 2-second tick counter. At this point the chip draws 2uA (typ) and 20uA (max) while running at 32kHz. While I agree that, in general, it's pretty hard to build a useful system that uses only 10mA, this custom, PIC-based RTC never draws more than 20uA from the coin cell (assuming room temperature operation, no PCB contamination, no condensing humidity, and accurate MCU current specs). Your article got me thinking about the number of assumptions I am making. Maybe I should get out the uCurrent and double-check the actual load on that coin cell. You also mentioned that two of your batteries were defective. In a production environment, where a tabbed coin cell is being soldered to the PCB, a defective cell would be a real PITA for the customer. It would seem like a good idea for product testing to include some type of voltage and current tests of the coin cell under an artificial load. Someone could probably write a book about using coin cells in low-current embedded applications. (wink, wink)
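
      For anyone curious, the timekeeping part is only a few lines. A rough C sketch of the idea (generic names, not the actual PIC16LF1823 registers):

          #include <stdint.h>

          void clear_timer1_overflow_flag(void);   /* placeholder, device-specific */

          /* A 16-bit timer clocked at 32768 Hz overflows every 65536/32768 = 2 s,
             so each overflow interrupt is one 2-second tick. */
          static volatile uint32_t two_second_ticks = 0;

          void timer1_overflow_isr(void)
          {
              two_second_ticks++;
              clear_timer1_overflow_flag();
          }

          uint32_t uptime_seconds(void)
          {
              return 2u * two_second_ticks;
          }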

    • I wonder whether BR-series coin cells would have performed any better (or worse). According to Panasonic, their BR-series coin cells have a lower self-discharge rate than their CR-series coin cells. I am planning to use a tabbed BR1632A/FAN to power the RTC of a PIC16LF1823 in a sealed, ruggedized data logger that will be powered off (except for the RTC circuit) 99% of the time. I chose the BR1632A/FAN because it is rated for -40C to +125C. Conventional CR-series coin cells are rated for -30C to +60C, making them unsuitable for a ruggedized product used in harsh environments. I calculated the life expectancy of the battery in my circuit, factoring in the self-discharge rate. Theoretically, it should last well beyond my ten-year goal. Theoretically. Thanks for conducting and reporting these tests. It's nice to have some additional benchmark data, since such data are pretty sparse in the published literature.
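
      The calculation I did was essentially a loop like the one below. The capacity and self-discharge figures here are placeholders rather than Panasonic specs; the real values come from the BR-series datasheet:

        /* Rough battery-life estimate that folds an annual self-discharge
         * rate into the load-current drain. Capacity and self-discharge
         * values are placeholders, not Panasonic specifications. */
        #include <stdio.h>

        int main(void)
        {
            double capacity_uAh  = 190.0 * 1000.0;  /* placeholder rated capacity  */
            double load_uA       = 0.65;            /* RTC drain, typ at room temp */
            double self_disch_yr = 0.01;            /* placeholder: 1% per year    */
            double remaining     = capacity_uAh;
            int years = 0;

            while (remaining > 0.0 && years < 40) {
                remaining -= load_uA * 24.0 * 365.25;       /* load drain per year */
                remaining -= capacity_uAh * self_disch_yr;  /* self-discharge      */
                years++;
            }
            printf("estimated life: about %d years\n", years);
            return 0;
        }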

    • A system can fail for many reasons. If the temperature gets too high or too low, the MCU or the oscillator might malfunction. Or a connector might fail after 3 years due to fretting corrosion if subjected to constant vibration. Any one of these failures could be just as deadly as a software defect. So if we're concerned about saving lives, it is important to recognize that software is just one factor out of many. As a safety-conscious consumer who also writes firmware, I would not have a lot of confidence in the ability of government experts to find software defects in something as complex as a car. What I would really like is a centralized, unbiased, online database of vehicle accident histories. If I am contemplating the purchase of a specific type of car, I would like to enter the make, model, and year into the database and receive up-to-date summary statistics on the number of accidents, injuries, and deaths reported for occupants of that particular type of vehicle. Creating, maintaining, and publicizing such a database might be a very good role for government.

    • Wouldn't 8-bitters typically be less vulnerable than 32-bitters to random bit flips or permanent damage caused by cosmic radiation and/or an EMP event, due to their larger process geometries and simpler circuitry?

    • It is better to light one candle than to curse the darkness. If you don't like the system, change it! Start your own company and hire the kind of people who want to do meaningful technical work. If you're correct about the other companies having a very low percentage of people doing meaningful technical work, then your company should blow them out of the water!

    • "the lecturer asked everyone who loves their mom & dad to raise their hands. He then said, "you all can leave! There is no place for that crap in business" I worked for a mid-sized software company for five years. They sold a very high quality product and treated their customers and employees with respect. In the years since I left, they have done quite well. Anyone who thinks you have to sell crap to succeed in business is just rationalizing their own low standards.

    • For an additional perspective on modeling, you might want to look at the history of CNC programming. The most widely-used NC programming language is G-code. But solid modeling tools such as Pro-Engineer, Inventor, Catia, and SolidWorks are now being used to create models from which the G-code is generated. Many people still program G-code by hand, but solid modeling is well established and growing in popularity. A G-code program describes the individual actions of a milling machine or lathe. It is very difficult to look at G-code and visualize the resulting solid part. Solid modeling tools allow engineers to directly envision the part and then generate the machine instructions automatically. Portability is achieved not at the G-code level, but at the solid model level. Each G-code program is geared to a specific type of CNC machine. The set of instructions needed to make a specific part on one type of machine might not even make sense for a different type of machine. To someone creating solid models of physical parts, portability means the ability to create a solid model in one solid modeling package (e.g. Inventor) and then import that design into a different modeling package (e.g. Pro-E). When modeling an embedded system, we want the code generator to handle as many of the implementation details as possible. The designer may need to know the total amount of RAM available on the target MCU, but they probably don't care about the specific address where the RAM is located. However, the code generator definitely needs to know this detail, and this information then becomes embedded within the generated code. The generated code will always be much less portable than the system model, because the generated code contains all of these extra MCU-specific details. So if we want to achieve maximum portability, we need to achieve it at the level of the system model, not at the level of the generated code.

    • "...how to combine the generated code with hand-crafted code." "...graphical modeling and code generation are mature technologies" I agree that modeling and code generation are the future, but it seems to me that if the modeling tools were FULLY mature, then they would capture ALL of the system requirements and generate ALL of the code, and there would be no need for hand-crafted code. And if they captured ALL of the system requirements, then they would also capture timing constraints, which would require generation of ASM or binary code, not C code, since we cannot predict how long a C statement will take to execute.

    • An embedded system model need not be represented graphically. Any system composed of graphical objects and properties can also be represented in tabular format, where each table represents a type of object, each row represents a specific object, and each cell represents a property of an object. In some situations it might be more natural to manipulate the information in tabular format than in graphical format. State transition tables are frequently used to represent finite state machines, and tables of rules are frequently used to represent the logic of fuzzy systems. There is no theoretical reason why an entire system could not be represented in tabular format. When we export designs from a schematic editor or PCB editor, that data is typically in tabular format. When making global changes, it is sometimes easier to edit the data in tabular format than it is to edit the data in graphical format.
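
      A concrete, minimal illustration of the tabular idea is a table-driven state machine in C, where the behavior lives in a two-dimensional array (rows = states, columns = events) rather than in nested switch statements. The states and events below are invented purely for the example:

        /* Minimal table-driven finite state machine: the behavior lives in
         * a table indexed by [current state][event], not in if/switch code.
         * States and events are invented for illustration. */
        #include <stdio.h>

        typedef enum { ST_IDLE, ST_RUNNING, ST_FAULT, NUM_STATES } state_t;
        typedef enum { EV_START, EV_STOP, EV_ERROR, NUM_EVENTS } event_t;

        /* next_state[current state][event] */
        static const state_t next_state[NUM_STATES][NUM_EVENTS] = {
            /*               EV_START     EV_STOP    EV_ERROR  */
            /* ST_IDLE    */ { ST_RUNNING, ST_IDLE,   ST_FAULT },
            /* ST_RUNNING */ { ST_RUNNING, ST_IDLE,   ST_FAULT },
            /* ST_FAULT   */ { ST_FAULT,   ST_FAULT,  ST_FAULT },
        };

        int main(void)
        {
            state_t s = ST_IDLE;
            const event_t script[] = { EV_START, EV_ERROR, EV_STOP };

            for (unsigned i = 0; i < sizeof script / sizeof script[0]; i++) {
                s = next_state[s][script[i]];
                printf("after event %u -> state %u\n",
                       (unsigned)script[i], (unsigned)s);
            }
            return 0;
        }

      Editing the table (adding a row for a new state, or a column for a new event) changes the system's behavior without touching the control flow, which is exactly the kind of global change that is easier to make in tabular form.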

    • If the model includes hard real-time constraints, and if the MCU's clock frequency is known, then it should be possible (but not necessarily easy) to generate ASM code that is guaranteed to meet the timing constraints. This would not be possible with C code, since the C language spec does not allow us to predict how long a given C statement will take to execute.

    • I have grown weary of articles in the popular press about hackers/makers. They seem to follow this recipe: 1. Find a group of students who are building something technical such as a robot or a solar-powered thingamajig. 2. Describe their creation as something totally new and groundbreaking, without bothering to learn anything about the underlying science. 3. Describe the students as geniuses who are going to change the world with a single flash of insight (as opposed to decades of dedicated study and labor). Having taught robotics and electronics to high school students, I believe that these "hype" stories are actually detrimental. Too many students have the false impression that all of the important problems have already been solved, or that only exceptionally bright people can do important work. They need to know that there is much important work yet to be done, and that sudden flashes of insight usually occur only after years or decades of dedicated work in a field. In order to devote that kind of effort to a profession, a person needs to have a mission; a purpose; a reason for being a scientist or an engineer. If a hacker or maker has a mission, then they will discover or create a path to professionalism. And if they're not on a mission, then it's a hobby and that's okay too. The only people I worry about are those who make it a profession without having a sense of purpose; people who want to "be an engineer" as opposed to "doing engineering". "It used to be about trying to do something. Now it's about trying to be someone." - Margaret Thatcher

    • About eight years ago, a friend and I (both software developers) created an engineering club at our local, rural high school. We received funding and assistance from IEEE and also a grant from a local university. We developed an annual robotics competition that attracted students from several other districts in our region. University IEEE students served as contest judges. Our contest had a K-8 division for remote-control robots, and a 9-12 grade division for autonomous robots. Here's a video clip from our first event: http://www.youtube.com/watch?v=mONgUK8Gwko After two successful years, the university IEEE chapter offered to take responsibility for the annual contest, and they continue to host the event every spring. Our students had great enthusiasm for the mechanical aspects of robotics. Most had somewhat less enthusiasm for the electronic aspects. Our greatest challenge was finding students who wanted to write software, especially at the junior high school level. I suspect that it has something to do with the level of abstraction. In mechanical designs, we can observe the movement of the parts. In electrical designs, we can at least observe the physical pathways over which the electrons move. But software design requires the same kind of symbolic thinking as advanced algebra: systems involving multiple symbols and multiple relationships. From my limited experience, I have not seen anything to indicate that REQUIRING young people to learn programming would be helpful to them. I think it's great to offer the experience to those who CHOOSE to pursue it, but I doubt that this would be a large percentage of students. When I started programming in the 1980s, programming was seen by many as something cool and fun. I think that perception has worn off. Today I think that programming is seen by many people as something akin to accounting, and I have to admit that sometimes it feels that way to me too! It's definitely not for everybody.

    • I agree. Brand loyalty introduces a time delay in a company's customer feedback loop. If a company's products are beginning to fall behind the competition, brand loyalty could mask this fact until the gap becomes so large that the loyal customers can no longer resist the competition. If a company needs to hear negative feedback, it is better that they receive it without delay.

    • Car manufacturers still make cars with 4, 6, and 8 cylinder engines. I once owned a 3-cylinder Subaru Justy and loved it. I have heard of cars with 5 cylinder engines. The point is that when you're behind the wheel, you don't usually have to think about the number of cylinders. If the engine delivers the right balance of power and fuel economy for the task at hand, then as a driver you are satisfied. If I am programming in C, then to a large extent I don't really care whether the processor is 8, 16, or 32 bits wide. Indy cars use 4-cylinder engines. An 8-bit processor could be made to run at 300MHz.

    • Imagine that a huge solar event caused widespread damage to our electronic infrastructure. What kinds of chips and systems would still be working the next day? I suspect that the average 8-bit device is somewhat more radiation-tolerant than the average 32-bit device. If you set out to design a system that would survive an EMP, wouldn't it make sense to minimize the number of gates and memory cells while also using a larger geometry? The marketing folks would have a field day: "93 percent of 8-bit PIC chips survived the great solar storm of 2015 that fried over 60% of all 32-bit chips in consumer devices" I'm just asking, without any real knowledge of the subject. Can anyone confirm or refute this?

    • I recently asked my optician about this. He said that there is already a product on the market that provides two distance settings. It uses a liquid crystal inside the lens and you have to tap the frames to change the setting. So it is basically a bifocal lens, except that the entire lens changes between two settings instead of an over/under arrangement. When a person looks at a near-field object, their eyes converge. Theoretically, it should be possible to mount a couple of tiny cameras in the frames to look at how the wearer's eyes are aimed. From this info an MCU could deduce the correct focusing distance. I would gladly pay $1K for auto-accommodating glasses.
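
      The geometry the MCU would need to solve is straightforward. A toy sketch, assuming a typical interpupillary distance and symmetric convergence on a point straight ahead (both assumptions are for illustration only):

        /* Toy calculation of focusing distance from eye convergence.
         * Assumes the eyes converge symmetrically on a point straight
         * ahead, so each eye rotates inward by the same angle theta and
         * tan(theta) = (IPD / 2) / distance. The IPD is an assumed value. */
        #include <stdio.h>
        #include <math.h>

        int main(void)
        {
            const double PI     = 3.14159265358979;
            const double ipd_mm = 63.0;                 /* assumed interpupillary distance */
            const double theta_deg[] = { 0.5, 1.0, 3.0, 6.0 };  /* inward rotation per eye */

            for (int i = 0; i < 4; i++) {
                double theta       = theta_deg[i] * PI / 180.0;
                double distance_mm = (ipd_mm / 2.0) / tan(theta);
                printf("%4.1f deg convergence -> focus at ~%.0f mm\n",
                       theta_deg[i], distance_mm);
            }
            return 0;
        }

      The hard part, of course, is measuring the eye rotation accurately enough with tiny cameras, not the arithmetic.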

    • I still have the Heathkit VTVM that I built as a youngster and it still works!

    • There will always be many dedicated-purpose applications where energy consumption and/or manufacturing costs are primary concerns. The trick is to achieve greater abstraction without making the final system more costly to manufacture or to operate. I suspect that real-time operating systems and C/C++ will gradually become less desirable for building dedicated-purpose embedded systems as software modeling and code generation tools mature. The mechanical engineering world has embraced solid modeling. Many people who once programmed CNC machines by hand using G-code are now generating that code automatically from solid models. Tweaking the generated code is sometimes necessary, but only because the code generators are not yet mature. Modeling an embedded system is not easy. But if we can figure out how to capture logic, data flow, and timing requirements in a model, we should be able to generate a single executable containing all of the functionality used in a dedicated embedded system. In this scenario there would be no need for operating systems and C/C++ code. The logic would be translated directly from the model to assembly/binary code. Again, I am speaking only of dedicated-purpose embedded systems that run a single application.