Duane Benson

consultant

Duane Benson’s involvement in the hardware and software design world goes back to the days of the CDP1802, 6502, and Z80, and up through current microcontrollers such as the Microchip PIC, Atmel ATmega, and whomever’s ARM. After hours, he designs microcontroller and motor-control boards for small robots, and Arduino-compatible custom hardware, under the moniker SteelPuppet. He’s run a successful Kickstarter with Embedded.com’s own Max the Magnificent and is working diligently toward completion of his “Designing Arduino-code-compatible Hardware” book. In his day job, he has been dishing out PCB layout and DFM advice via the Screaming Circuits blog since 2006. Duane is also a contributor to industry technical publications and conferences on trends in prototyping and ways to improve efficiency in product development.

Duane Benson's contributions
    • This open-source Theremin offers additional capabilities over traditional implementations, including a dual-pitch mode.

    • Where Duane Benson offers insight into the workings of intriguing developments in embedded systems.

    • Until USB Type-C connectors enjoy widespread adoption, one solution is to include both Type-C and Micro-B connectors in one's designs.

    • Circuit board manufacturers see the effects of supply chain issues on a daily basis. In fact, component issues are probably the most frequent cause of build delays.

    • I keep coming back to start-up time. Not start-up, as in how long it takes the clock to stabilize, or the OS to boot. Start-up in terms of how long it takes to first get any code into the MCU. 32-bit ARMs are getting easier to work with, certainly. But, to date, not much can beat the speed at which I can take a new PIC or ATmega, design and lay out a PCB, get it fabbed, and get code loaded and running.

    • Working in an assembly shop, I regularly see justification for extra time spent checking the PC board. In addition to nets and component location, footprints are very important too. Make sure that the part ordered matches the part in the BOM, which matches the footprint on the PCB. It's pretty easy to accidentally purchase 0805 parts when the board wants 0402s, or a QFN instead of a QFP. Then there's the case of my PCA9306 line-level converter. One variant, of about ten, has a different pin-out. I used the odd-one-out for my layout, but purchased one of the other variants.

    • Your hands, the two antennae, and ground form capacitors. In the original, the capacitance was used to alter the volume and frequency of the analog tuned circuit. In this version, the values are interpreted and then used to adjust volume and pull wave tables out of memory.
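The digital approach described above - reading the antenna values, then using one to set pitch and the other to set volume while pulling samples from a wavetable - can be sketched with a phase-accumulator oscillator. This is my own minimal illustration of the technique, not the actual open-source Theremin firmware; the table size, fixed-point layout, and function names are all assumptions:

```c
#include <stdint.h>

#define TABLE_SIZE 256  /* entries in the wavetable */

/* Small triangle wavetable, filled at startup. A real build would
 * likely keep one or more sine/harmonic tables in flash instead. */
static int16_t wavetable[TABLE_SIZE];

void init_wavetable(void) {
    for (int i = 0; i < TABLE_SIZE; i++) {
        int ramp = (i < TABLE_SIZE / 2) ? i : (TABLE_SIZE - 1 - i);
        wavetable[i] = (int16_t)(ramp * 256 - 16384);  /* about -16384..16128 */
    }
}

/* 32-bit phase accumulator; the top 8 bits index the 256-entry table,
 * so `increment` sets the pitch (bigger increment = higher note). */
static uint32_t phase;

/* `increment` would be derived from the pitch antenna reading,
 * `volume` (0..255) from the volume antenna reading. */
int16_t next_sample(uint32_t increment, uint8_t volume) {
    phase += increment;
    int16_t raw = wavetable[phase >> 24];          /* top 8 bits pick the entry */
    return (int16_t)(((int32_t)raw * volume) >> 8); /* scale by volume */
}
```

Calling next_sample() at a fixed sample rate from a timer interrupt, and feeding the result to a DAC or PWM output, is the usual shape of this kind of synth loop.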

    • I'll do that. It's a fascinating story. If I recall correctly, he disappeared for quite a number of years leading to speculation that he had been killed. But, then he reemerged later in life.

    • You're very welcome - and thank Urs Gaudenz, the designer, for accidentally sending two, and then saying I didn't have to send the second one back.

    • The expression comes from the fear that Mr. Theremin might come back out of the grave to avenge the butchering of his awesome instrument.

    • I visited the LIGO detector in Hanford, Washington. The "sonification" of the detection was pretty cool, as was all of the data behind it.

    • CP/M kept the VTOC in RAM and required reloading it if you changed disks. I don't know about all versions, but 2.x would reload it with some control-key combo I no longer remember.

    • It's really tempting to head down there. Sometimes flights are pretty cheap between here and there - or I could take a 12 hour road trip...

    • I actually used PC-DOS version 1. I did some consulting for a guy on his very early model IBM PC. It had two 160K floppy drives. If I'm remembering correctly, it had 64K (no zero in there) RAM. I certainly don't have any insight into the source code, but it seems pretty obvious to me that MS/PC-DOS was certainly inspired by a combination of CP/M and Unix.

    • On the subject of augmented reality... Apparently, there's a new Pokemon game that uses your phone and some augmented reality. It allows you to "find" Pokemon in the real world, and do some interaction with other players. I just read a "USA Today" article about a case where some kids used the game to lure players to a spot and then rob them. Once again, the law of unintended consequences.

    • I do remember the Z8000. I never used one, but I read a number of articles about it. I did use the 1802. In fact, I still have my COSMAC ELF from Quest Electronics. I bought and built the kit in 1978 (maybe early '79). The biggest personal downfall of the 1802 was that it came with 16 16-bit general purpose registers. If I recall correctly, any of them could be the program counter, accumulators, in-CPU scratchpad, or whatever. Why would that be a downfall, you might ask? Because it spoiled me, and when I tried to write assembly language for the 6502, 8080 or even 8088, I was dumbfounded at the limitations; especially the 6502, with two 8-bit registers and an 8-bit accumulator.

    • I still drive a 1995 Chevy pickup truck. Hard to believe that it's old enough to hit the pub and buy me a beer. That's not the point though - I think that year was about the peak of owner maintainability. It's pretty obvious that most systems were designed specifically with accessibility and serviceability in mind. Both the alternator and starter replacement are about ten-minute jobs. Even the air conditioning compressor isn't much more difficult than that. Since that time, I think weight, cost, and aerodynamics have trumped serviceability in terms of priority.

    • I don't sail, but this blog resonates, nonetheless. I've only called my landlord once in eight years for a problem, because I kind of feel like I can fix things that break better than the repair person that would show up. (Of course, sometimes it takes me two or three attempts.) I can't imagine calling someone for help with a loose fixture, running toilet, or other common household maintenance.

    • After page one, I was going to suggest something akin to a trim tab on an airplane. What you've got is similar in concept, I think, but much more complex. A trim tab, for those not familiar, is a small control surface on a larger control surface - a tiny rudder on the larger rudder, or other control surface. It's adjusted opposite the direction you want the rudder to turn, and uses the air pressure to keep the control surface at the correct angle.

    • Thanks for the compliment. I too am hoping we don't soon end up with a Type-C.2, or D, E. However... One of the other aspects of the USB 3.1 Type-C setup, that I'm still exploring, is the concept of the intelligent cable. To get the maximum capability, the cable will need some electronics embedded. I'm sure that means that we'll have to deal with cable variations. All of the cables will connect together, but some won't give full speed, or maximum current, etc. Ugh. I'll write up another blog when I have enough solid understanding of that part of the system.

    • Rework might be easier with the hybrid part. Hand soldering would be possible with the hybrid and not the SMT-only part. (I'm not going to hand build this though.) I used it because having the through-hole pins eliminates a few vias. I may change my mind for the actual build though, because the SMT-only part is smaller.

    • Mostly for personal use. I may not populate both connectors, but if I've got the space, I'll put both there until I have enough Type-C chargers and cables to not worry about not having one. This particular board is being designed for hobby / hacking use, so I'll populate both, and whoever gets one will be able to use either.

    • re: "equal time devoted to the benefits and side effects." The annoying thing about that is that it seems like they just put every possible side effect in, however improbable. That pretty much removes the usefulness of any of it.

    • My other thought is that if we are in fact alone, does that make us incredibly significant, immeasurably important, and the only hope for life to exist and spread throughout the cosmos? Or, are we essentially a zit or a mole - an accident that shouldn't be here, and of no consequence at all to the universe?

    • I have a couple of perspectives on this question. First, is the near-infinite number of monkeys model. Yes, our emergence is so incredibly unlikely, but with numbers of potential planets being so vast, and the possibility of non-carbon-based life, I still see it likely that there are, or have been, others. That brings me to my next viewpoint: Given how short a period of time we've been around relative to the universe, the likelihood of any two civilizations advanced enough to communicate, arising in close enough proximity to ever communicate, is about as unlikely as the number of habitable planets is likely.

    • I have to say, I've really enjoyed getting to know these little chips. I've pretty much only scratched the surface of their full potential though. They've also got some interesting-sounding MOSFETs, at least one of which, the way I read the datasheet, will block current flow in both directions. The body diode in MOSFETs is usually a good thing, but there have been a few times that I don't want any reverse current. In one of my future projects, I'll take a closer look at this one.

    • I use my phone as my alarm too. The desk clock I have on my dresser is actually reasonably easy to set. It even has individual buttons for forward and backward. But... whoever designed it neglected to care about key debounce. The exercise of getting the clock or alarm to stop at the right spot is enough to raise my frustration level to the point where I lose sleep after attempting to set the alarm. The other nice thing about the phone is that I can set different alarms for different days and such. In theory, I should be able to speak the "set alarm for 8:00 am" command, but I don't really trust the voice recognition that much. Too many cases of voice recognition thinking I want to search for "king the fog breakfast."

    • I've started on the Simblee shield. I haven't yet decided if I should go with the Simblee module, or the chip, but I'll get to that soon enough. If I have enough space, I'll look at adding in 5 volt tolerance for the analog pins as well.

    • Mark - you have some very good insight here. My understanding is that Octopart isn't substituting after the fact. What they're doing is saying that if you pick one of the parts on their list, it will most likely be available. For my LED example, as the designer, I'm still responsible for looking at the datasheets and deciding if Octopart's three similar LEDs will work for my application. What they're saying is if I pick any of the three, I'll have good supply. If all of the three fit what I need, I could possibly just use the CPL part number and a participating manufacturer would know that I've already approved all three parts. It's still a pretty new thing and bringing up real-world issues like you have can only help move it in a good direction.

    • That is an even bigger challenge. With these kinds of requirements, every aspect of the component specifications can be significant. The CPL is starting out with consumer and industrial connected devices as a first focus. Really specialized requirements may fit in down the road, but until then it's still the more traditional, careful (and time-consuming) process: BOM management using AVLs and vetting component suppliers.

    • In my head, it came out "Wet Ransfer" too. I had to stop reading for a moment to try to come up with some idea of just what is a "Ransfer"? DropBox is good. I use it for collaborative projects sometimes, but if you're near your allocated capacity, it might be a problem to send off a big file like that.

    • Inspiring story. My favorite air show memory was a year that a B-17, B-24 and B-25 all flew at the same time. The big guys took off and slowly circled the field. Then a couple of P-51s took off, caught up with and passed them. Heart pounding. My dad's a few years younger than yours and I cherish times when I can share some of his history and some of himself.

    • When I first read the title of this post, I read it as "Fabulous Flaming Flamingos." Max missed his chance to title it "Fabulous Flaming Flamingos of Death." In any case, I've done a bit of mechanical work with TinkerCAD - an online 3D CAD system. I've used it for a few items I've had 3D printed. I like the idea of having CNC in my arsenal too. In general, I think it's very good that mechanical newbies, like me, now have access to such capability. Sometimes, though, I do wonder if all that does is unleash a bunch of amateurs (again, like me) without design discipline.

    • I've shut down a few production lines in cases where I thought the company I worked for at the time was bordering on misusing the trust of our customers. I do understand that the line can sometimes be a bit fuzzy or difficult to find. But this is just so over the top that I'm not sure I can wrap my head around it. To do this, someone had to write up specs - probably a team of people, and that's after going through the process of getting approval. Someone had to do some pretty extensive research to determine the conditions that would set the cheat mode. Many jurisdictions would need to be researched. I would guess that existing sensors would be sufficient, but someone had to test and ensure that those sensors would do the job. There would be a lot of coding. Test plans would have to be developed. Something like this would need to be very well tested - it wouldn't take many failures to raise suspicion, so it would need an extreme level of reliability. A team would need to design test cases, build fixtures, and schedule a test regimen. Field tests would also need to be run. That's a whole lot of work, and many people that would need to have some level of involvement. I can see how it would be possible to write up the specs such that they didn't actually use the phrase "illegally defeat the U.S. emissions testing systems", but I would imagine that most of the people involved could pretty easily figure out what was going on. It boggles the mind.

    • The translator is really quite impressive. It's not quite ready for prime-time, but with a bit of care, could even now be used in a lot of situations that are otherwise very difficult. I live in an area that has a fair number of non-native speakers. Think of emergency services and first responders. The grocery stores could make use of it too. The Cardboard is pretty cool too, although I haven't done anything more than watch the demos. I do find that with the version I have, I need my reading glasses, which don't really fit in the box.

    • I had to watch it several times to convince myself it isn't fake. It's just too completely nuts. I understand a little bit of the adrenaline rush concept (I have actually jumped out of an airplane), but I certainly couldn't and wouldn't take it anywhere near as close to suicidal as that guy does.

    • I have to say, this XKCD comic has a pretty good take on wing suiting: http://xkcd.com/962/

    • Sounds like a really interesting documentary. Kamen also started the FIRST robotics competitions, which is just an incredible opportunity for geeky kids. Whenever I see one of these automated water purifiers like this, I wonder what happens to the concentrated contaminants that would be produced by it.

    • I met these guys at Maker Faire. In fact, they were right next to my SteelPuppet booth. Very nice folks. I hope they do well with the Kickstarter. Your bit about the toaster reminds me of an old (1984) movie called "Electric Dreams." In the movie, the protagonist buys a home computer. The first thing the computer did when turned on was ask his name. He accidentally said "Moles", instead of his actual name "Miles", and for the rest of the movie, the computer called him "Moles." Given some of the many misspellings and mispronunciations I've seen of my name, I can imagine the toaster constantly asking: "Fuane, how do you want your toast this morning."

    • One of the nice things about the American version of the English language is that, while rules are taught in school, by convention, there really aren't any rules. If you don't like something, make up your own spelling, grammar, or rules of any sort. And, we don't just misuse the English language. We'll take words from just about anywhere and add them into our vernacular of improper use.

    • Paul - I didn't look for that information on the datasheet. It's a good question. I'll see what I can find, or perhaps someone from Silego could weigh in.

    • If I can come up with an application idea, I'll build it into one of my boards and put it to practical use.

    • Speaking of "how could they do that" bugs... I recently read about a software bug with the Boeing 787 aircraft. It seems that all of the generators have the capability of simultaneously failing in the off position. According to the FAA: "If the four main generator control units (associated with the engine-mounted generators) were powered up at the same time, after 248 days of continuous power, all four GCUs will go into failsafe mode at the same time, resulting in a loss of all AC electrical power regardless of flight phase." At least the time is predictable.
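A failure that strikes after a precise number of days of continuous power is the classic signature of an uptime counter overflowing. If the software keeps uptime as a signed 32-bit count of 10 ms ticks - my inference from the 248-day figure; the FAA directive doesn't state the data type - the arithmetic lands almost exactly on 248.5 days:

```c
#include <stdint.h>

/* Days until a signed 32-bit tick counter reaches INT32_MAX, given
 * the tick period in seconds. With 10 ms ticks:
 * 2,147,483,647 ticks * 0.01 s = ~21,474,836 s = ~248.55 days,
 * which matches the FAA's 248-day figure. */
double days_until_overflow(double tick_seconds) {
    return (double)INT32_MAX * tick_seconds / 86400.0;
}
```

The practical moral is that any free-running counter needs either wraparound-safe unsigned comparisons or a width that comfortably outlives the product's longest power-on interval.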

    • I like that. I built a clock driven by a custom Leonardo-compatible PC board I designed, but mine is really just a clock. I would be sorely beaten by yours, and I imagine by Max's also. Very nice job!

    • I picked up FPGAs a few years ago, initially without much knowledge of the underlying architecture. Over time, I gained a better understanding of what was going on at the low levels, and that did help. I did a lot of writing code and just seeing what came out the other side. That was helpful very early on. Other than that, I found that the compile and configure process took way too long for that methodology to be practical. With an MCU, if I'm not sure if I've got my code right, I can often compile and upload in about a minute. With an FPGA, that might take 10, 20 or more minutes.

    • I like the idea of making this thing situationally aware. There are some pretty tiny image sensors around - cell phone cameras - that I could put on a version. But then, I'd need more horsepower than an 8-bit MCU has at its disposal. NXP and Freescale both have some really small ARM Cortex M0 MCUs that may be enough for some rudimentary image and/or sound processing. Hmmmm.

    • Crusty - I've thought about a battery-operated version, but the NeoPixels draw a lot of current. I've considered a number of options, but so far haven't come up with anything reasonable.

    • Before I designed this, I had a mirror-image clock on the wall behind me in my office. It worked well for me, because I'd see it in the reflections on my window, but people coming in to talk with me would get really perplexed looks on their faces.

    • I've got one of these clocks on the wall in my office. I've been seriously thinking of making it speed up or slow down at various times of the day - make meetings seem shorter or longer, and just generally confuse people.

    • Here's what it looks like on the wall. https://pbs.twimg.com/media/B8ulfx0CQAASzmo.jpg:large I had to dim the pixels quite a bit for the photo to keep them from washing out the image.

    • I tried the 12 pixel ring, but I found that it was difficult to discern exactly which pixel was on at a glance. I think it would work well for a desk clock, but I hang mine on the wall, so I need to be able to see it better from a distance.

    • I've only used the Adafruit library with NeoPixels. With that, they are pretty easy to use. I made a clock with the Adafruit 60 pixel ring and a custom Arduino-compatible PC board. The 60 pixel ring comes as four 15 pixel segments, which leads to the only challenge - aligning the four segments and soldering them together.
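For anyone curious how the time maps onto the ring, the pixel-index arithmetic for a 60-pixel clock is simple. This is my own sketch, not the firmware from the board described above; with the Adafruit library, the chosen indices would then be lit with calls along the lines of setPixelColor() and show():

```c
#include <stdint.h>

#define RING_PIXELS 60  /* pixel 0 sits at the 12 o'clock position */

/* Seconds and minutes map straight onto the 60 pixels. */
uint8_t second_pixel(uint8_t sec) { return sec % RING_PIXELS; }
uint8_t minute_pixel(uint8_t min) { return min % RING_PIXELS; }

/* The hour hand gets 5 pixels per hour and, like a real analog
 * clock, creeps forward one pixel every 12 minutes. */
uint8_t hour_pixel(uint8_t hour, uint8_t min) {
    return (uint8_t)(((hour % 12) * 5 + min / 12) % RING_PIXELS);
}
```

At 3:30, for example, hour_pixel(3, 30) lands on pixel 17 - two pixels past the 3 o'clock mark, halfway toward 4.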

    • One of the things I find quite interesting is the relationship between our senses and our physical actions. For example, the act of leaning into a conversation vs. leaning back can change the other person's interpretation of what we're saying. I've read that it also has an effect on the speaker. For example, if I lean in vs. lean back, while saying exactly the same thing, my feelings about the conversation may change, and my verbal inflection might involuntarily change as well. That sort of thing is used by advertisers and politicians all the time.

    • Ah. Makes more sense now. I'm sometimes a bit slow on the uptake.

    • For only $10.00, I could afford that, plus "DuaneBenson.rocks" and maybe even "basalt.rocks"

    • I love those diagrams, and that's a very good one. I have a few that I keep close by, on card stock, for the mbed, Beaglebone, LPCexpresso, and a few other boards. I don't have any printed out for the Arduino or Raspberry Pi, but there's plenty on the Internet. They're a life-saver. There's a lot of really good, but poorly documented, microcontroller driver code running around. I'm very grateful for the people that wrote it all and made it open source, but at the same time, disappointed at the level of documentation. Often, it's just one example. That's great if you're using the part in exactly the same way, but it can be frustrating (however grateful I am) when trying to use it in other ways.

    • I don't actually know what a "budgie" is, nor the ramifications of it having or not having teeth.

    • Well, that sounds like quite a racket. But really, the only things I'd worry about would be a shady competitor grabbing it. I'm not terribly concerned about customers using it. If I worked for a company that was insecure and treated customers badly, I'd probably have quit already, rather than stress about people complaining in public. Here at Screaming Circuits, I'm pretty much the voice of the company so if a customer does get a ".sucks" domain, I'll be the one dealing with it and I don't mind when people tell me what they don't like.

    • I'm glad to hear that a surgeon's knife is not waiting in the wings for you. A few months ago, I had an insect bite on my left arm become infected with one of those antibiotic-resistant strep infections. While in the ER getting IV antibiotics, after the first two types of antibiotic were ignored - perhaps even enjoyed - by the bug, I pondered the same question. I got to thinking - why do prosthetic arms not have extra appendages? If you can build an artificial arm that works reasonably well, why could you not have slots for specialized attachments? My real right hand could hold a soldering iron. An artificial left hand could hold the PC board, one extra appendage could feed in solder, and another could manipulate parts with tweezers. For that matter, despite being blessed with two healthy arms, what's stopping us from building a prosthetic third arm with such utility?

    • Back on topic - I don't think the all-seeing / all-hearing toaster is very far off. When a few key components, that today reside in a smart phone, get down a bit in cost, it will be very doable in high-end toasters. I recently discovered the Google Translate application for my phone. I can speak to it the few words I remember in Russian, and it will show and speak the English translations. I can point the camera at a word printed in English and it will translate to Russian (or whatever language) and replace the English text with the Russian translation - in real time! It still has enough limitations for me to consider it not quite ready for prime-time, but it's darn close.

    • Hi Crusty - I do remember Robby. Wasn't he in both the TV series "Lost in Space", and the movie "Forbidden Planet"? I just watched Metropolis for the first time last year.

    • Well, I'm always anxious to go to Minneapolis. I suppose I'll have to work harder than normal and come up with something mind-blowing to present, or perhaps bribe someone important with beer.

    • Max - If all robots are named "Max" and all dogs are named "Max", then I must conclude that robot dogs are named "K-9." Or, something like that. In any case, I'm looking forward to spending more time over here on Embedded.com.

    • "When you can get an 8 bitter for a penny whole new applications will open that we can’t imagine today" - To me, this is really the key. Some applications will allow for astronomical volumes. Whether it be throw-away toys and novelties, greeting cards, RFID-on-everything, even fractions of a penny can start to be significant. The peripheral components will be just as cost-critical, so it won't just be the MCU that needs cost reduction. I could also envision a time when volumes for 8-bit MCUs could justify a move to a more modern process. Again, it's those ultra-high volume applications that will do it.

    • I don't have any Heathkit equipment left, but the first computer I poked into was a Heathkit H8. I used a few other kits from them, but the H8 is the one I remember.

    • www.adafruit.com sells a number of kits and does a pretty good job with tutorials. There is an awful lot of competition in the kit arena. I think the business is different than it was back in Heath's prime too. There's so much open source software, and so many tools collected from various sources. You can do just about anything you want without too much effort or cost. I imagine that while today's DIY community is very dynamic and active, the sorts of folks that want to really be handheld through the process are spending their time on things like video games or putting together computer components. Many of the development resources are now available at a very low cost from parts manufacturers. Rather than being a profit center, many of these are essentially subsidized with the aim of selling more chips. It's hard to compete in an environment with so many free or very low cost development tools and systems available.

    • It seems more common, at least in my part of the world, to teach only Java in computer science. Some of the universities are offering a degree called "Software Engineering Technology" that seems to be more hands on. A good software developer can learn just about any new language, but I believe that starting with experience in a variety of languages will make that self-learning process faster and more effective.

    • My guess is that we're less than a decade from DIY autonomous vehicle control. It will be done by the same folks who today are building robots with the Kinect and such. One of these days, don't be surprised if you get passed by some engineer with his or her hands off the wheel and an ear-to-ear grin on their face. Commercial full autonomy can't be that far behind. Though as many have suggested, I suspect legal issues will be the hold-up long after the technology is solid.

    • Very well written. There is something about being able to hold actual paper in your hands and flip through the pages. To me it's a much better experience than online reading. But online has plenty of other advantages. Thanks for all of your contributions over the years.

    • It is an odd world we've created: Text when you get there. Text when you're about to leave. Text when you're on your way home. Text when you get home. I'm pretty sure that back when I was a kid, the world didn't come to an end when I went out sans any type of personal communications device. We've created more than nomophobia. It's kind of an overarching insecurity that makes us always want to be connected and always know where our loved ones are. When tracking functionality is default and easy to use, we'll at first complain about privacy issues, but then will adopt it and will feel compelled to be connected on an even more granular basis.

    • Wasn't a commutator, at that time, generally a powered mechanical rotary switch? If I recall correctly, the same technology was used for multiplexing the early phone system.

    • The MicroMan has a very good point - to an extent. The typical consumer really doesn't care what processor is under the hood. They just want it to work and keep working. To many consumers, there isn't "Word", "Excel" and other applications. They just use Windows. To a large segment of the PC-using public, all of these names and terms have no more meaning than does "Turbo Encabulator." To counter that is the brand. People buy Pepsi or Coke because they are loyal to the brand, not because they have an understanding of the ingredients. When you look at a diet soda, though, people do understand the benefit there. Thus, if Linux/Android/ARM notebook makers can demonstrate a clear technological advantage, they can sell against Intel with that. If not, then they will have to spend gobs of money to create meaning in their brand with the consuming public.

    • Hopefully some standards will emerge allowing at least some of the computing power to come from multi-model platforms. That should allow volumes to go up and thus pricing to go down. I'm not talking about using mostly common components or a common system with custom mounting for each model, but something that is both bolt-compatible and software compatible. At that point, if the ECU dies, a mechanic or competent owner could pull the old unit, bolt and plug a new one in, download some software and be done.

    • I don't think we have to wait for 2084 for this to happen. Cars with auto braking and throttle by wire could very easily be speed limited by some hidden bureaucrat. "Smart grid" devices could be controlled remotely. When fully in place, your utility could decide that you really don't need any more hot water today. Those automated systems can already lead to a home foreclosure. Almost everything we interact with today is automated to some degree or other and once they're all connected, we could be controlled to a great degree by unscrupulous people in power.

    • Software is a component, albeit a major component, of systems that are already covered by various laws. If bolts, used as specified, fail causing an accident, the manufacturer will likely end up in court, but not because of any bolt-specific laws. A similar software failure causing an accident would end up in court as well, software specific laws or not. If that's the case, why clutter up the system with additional regulation that won't add any protection to what already exists?

    • Is anything not a double-edged sword? Anti-collision systems in commercial aircraft have likely saved thousands of lives, but every now and then - very rarely - they mess up and require intervention by a human with good judgement. That works because airline pilots undergo massive amounts of recurrent training. They undoubtedly run through just that scenario, as well as many others, in the simulators. I read that the co-pilot flying the Air France flight mentioned above had not been trained on that one specific condition. The unfortunate fact is that automobile drivers don't go through that same sort of training. Automobile drivers have allowed their GPSs to take them into places an aware human would easily avoid. I'm in favor of backup cameras, in-car radars and all of those types of safety devices. If built properly, they will save a lot more people than not. Putting the same thing in a software application on a phone that was never designed or tested with the same oversight as the systems built into cars would make me very nervous. Some will work perfectly, but some will be dangerous garbage.

    • I've been involved in both Agile and traditional project management and I've seen both work and not work. The biggest pitfall I've seen with Agile relates to this sentence from the original article: "But even if a release is not suitable for delivery to the customer" There is a temptation to release to the customer earlier than the software is ready because it will be changed soon enough anyway. If that temptation can effectively be resisted, then it works. If not, then it does look like cowboy code.

    • If I recall correctly, my ZX81 cost $99.00. That was about the same as my Super ELF, which was my first computer. I understand that Raspberry Pi is trying to be charitable and promote computer science learning, but I question whether they can keep going once their initial funding is gone. That's a great price, but it won't help if they don't make enough money to sustain manufacturing and future development.

    • I own a set of those headphones shown in the crystal radio drawing. I spent a lot of time fiddling with different crystal radio configurations before I had enough money or know-how to use more modern technology.

    • If I need a new graphics card or hard disk, I can choose from countless models from different manufacturers and purchase through any number of retail outlets. I can also start small and add in more power and functionality as I get more money. Further, I can install it all with a screwdriver. It's virtually impossible to electrocute myself while doing so. Home automation can't really say any of those things. Maybe you can start small and add in more later, but it's not something that the typical non-technical person can deal with. If it has to be installed, you've automatically priced a huge segment of the market out. In order for home automation to gain wide acceptance, we need good standards, safe and easy-to-install hardware and, most importantly, a good value proposition.

    • I think it needs to be both the tool and the tool user. On the one hand, a good, conscientious and well-disciplined programmer can produce a good product from many languages. On the other hand, a language that does a better job of taking care of syntax, typing, housekeeping and other sorts of errors can free up time and brain power to be used in other areas. It's a bit like having a spell checker in a word processor. It sure is nice to not have to worry so much about spelling. But does that lead to the sort of complacency and poor attention to detail that results in picking the wrong suggested word off of the list? e.g. contentious instead of conscientious. Regardless of the language, you still need to use it properly.

    • If, while riding a bicycle, the rider comes to a fork in the road, with one path leading up a very steep hill and the other to a nice, long, gentle upslope, which path would most people take? Human nature indicates that most people would pick the long gentle upslope, even if, in the long term, the elevation gain was twice that of the steep slope. The time and energy expended would be much greater than on the short steep slope, but just looking at that steep hill would be enough to turn many people away. I think that's what we see in management decisions as described in the article. Put a $10,000 expense on a request form and blood pressure goes up. The pain is right up front and obvious. The alternative may add long-term time and cost to the project, but humans tend to look to the short term.

    • This article makes me think about a recent XKCD comic (http://www.xkcd.com/927/). Open source delivers many benefits, but the fragmentation of implementation as described above really holds it back at times. That's probably why Linux as a desktop platform hasn't gained broad adoption outside of the technical community. Eclipse seems to have a lot of potential, but when I've tried to use it, the issues involved in setup and use often outweigh its advantages.

    • The seesaw economic forecasts these days are driving me nuts. Should I be optimistic or pessimistic? Should I spend aggressively or conservatively? What future am I investing for in my business? One week we see growth, the next, another dip is on the way. I'm wondering if the career path to economist doesn't start in meteorology.

    • Someone donated an already assembled Heathkit H8 computer to my high school electronics class. That was my first hands-on exposure to a microcomputer. Back then, they weren't "PCs"; they were called "microcomputers." One unique feature of the H8 was the fact that it was programmed in octal, not hex as were many other systems back then. I don't recall any other small computer using octal.

    • What I find interesting is the bell curve in size shifts. The microprocessor first led to the shrinking of computers. As they got more powerful, more power-hungry tasks could be shifted to the little computers. The former warehouse-sized big systems got smaller as the benefits of integration migrated throughout the computer industry. Today, while individual systems are quite small, the big "mainframe" and supercomputer systems are back to warehouse size again. Granted, they could be, and are, also called server farms, but big systems have largely taken on the server farm architecture and are massive in size and power consumption once more.

    • Many things can be used for good and for ill. I'm not talking about the safety factor. That's pretty obvious, but there are cases when texting can improve social interaction. I know quite a few people who are reluctant to place voice phone calls. It seems silly, but I don't think it's all that uncommon; especially in the set of us that may be on the socially inept side of the scale. In that particular case, texting seems to be easier, so you have people interacting with their fellow human beings when, without texting, they wouldn't. While I abhor the texting abbreviations (l8r, r, u...) and what they are doing to the English language, people who use T9-type entry need to know how to spell quite well. With a normal keyboard, you can use any mash-up of letters to approximate a word, but with the cell phones, you really need to know how to spell the word or your phone will output something not even close.

    • Test strategies are frequently a very difficult challenge. It's fairly easy to test for known error conditions. It may take a lot of time, and sometimes the company won't allow for that amount of time, but it's not rocket science - unless your project involves rockets. The bigger challenge is trying to go past known scenarios into the wild world of actual use. This is where process plus creativity comes in handy. Many years back, while working for an OCR company, I broke the software by scanning my tie. I didn't have any logic behind doing so. I just wanted to see what would happen. I received some ridicule for doing so, and the phrase "scanning your tie" became a synonym for doing something absurd. However, the software folks did look into it and found that what I had inadvertently done was create an easy method of reproducing a mystery crash that had been plaguing our beta testers for quite some time.

    • It's not just the assumptions that we need to consider. It's also the examples we use and the habits we follow. Back in the early 80's, I was writing business software. I typically only stored the last two digits of the year, like everyone else. When Y2K became the buzzword, the generally accepted explanation was that memory was expensive back when the software was written, and I accepted my share of the blame. Then one day I ran across some notebooks with old code samples in them. It got me thinking about the problem again. The particular piece of software was designed to print data on a form, replacing pen or typewriter work to manually fill it out. The forms had date fields pre-printed as such: "19__". The question is, did I store a two-digit year because I didn't have enough memory? Or did I do so because that made it easier to print on the form? Stepping way further back in time, before my time, I would imagine that memory was scarce enough to require such tricks. But, into the 1970's when computerizing businesses hit it big, did programmers store two-digit years because they had memory issues at that time? Or did they do so simply because it had always been done that way? Assumptions, examples, habits... They can all easily become our nemesis if we aren't careful.
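    The two-digit-year habit described above can be sketched in a few lines. This is a hypothetical illustration, not code from the original systems; the `expand_year` function and its pivot value of 70 are my own arbitrary choices for the example.

    ```python
    # Hypothetical sketch of the classic two-digit-year pitfall.
    # Records store only "85" or "05"; the century must be guessed.

    def expand_year(two_digit: int, pivot: int = 70) -> int:
        """Expand a two-digit year using a pivot window.

        Values >= pivot are assumed to mean 19xx; values below it, 20xx.
        The pivot of 70 is an arbitrary illustrative choice.
        """
        return 1900 + two_digit if two_digit >= pivot else 2000 + two_digit

    print(expand_year(85))  # 1985
    print(expand_year(5))   # 2005
    # The naive pre-Y2K approach simply prefixed "19", which breaks after 1999:
    print(1900 + 5)         # 1905 -- the Y2K bug in one line
    ```

    The pivot-window trick is how much post-Y2K remediation actually worked, but it only postpones the ambiguity; storing a full four-digit year removes it.
    
    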

    • As you say, the first rule of UI design really should be "Don't piss off the user." Unfortunately, I'd have to say that the first rule of UI design in too many companies is "we don't care about the user, just get the dang thing out the door." I know there are advantages to client/server architectures, microcontroller-based products (such as the entertainment system) and configurable touch screens. But when cost is a first-order factor, those three things usually result in confusing UIs and poor performance. I maintain that two of the best user interfaces ever designed are the knob and the simple mechanical switch.

    • "Dad won’t have to yell at the kids to turn off the damn light!" Very true, but I can half imagine the parent of the future getting after their teenager for hacking the home lighting net to pulse in time with their music.

    • This speaks to the double-edged sword of productivity. More productivity means that we can purchase more and are more competitive globally. However, more productivity also means that fewer workers are needed, so there are fewer high-paying jobs available, leading to fewer people able to purchase all of those things. At some point, the few owners of the masses of robots will be earning and owning the vast majority of the wealth and resources, and the masses of humans that don't have that ownership will end up as little more than serfs. If humanity were, on balance, fair, generous, altruistic and hard working, even without a paycheck, then the "Star Trek" society could come out of that. The few "haves" could fund scientific, health and other pursuits for the sake of the pursuit of knowledge. I think as a species, we are still too scared and greedy to allow that to happen.

    • Most of the code I write is for my own use only. Still, I find the need to comment. If I don't document data, flags and code snippet functions, I find myself needing to spend way too much time digging through the code to remind myself what I was thinking. But that could just be due to my own short-term memory issues.

    • It wasn't that many years ago when word in the media was that many of the software jobs were going to be off-shored and thus CS was a poor career choice here in the US. These days, the same is being said of electrical engineering jobs, but software is back in a big way in the US. From my own recent experience, I would certainly agree that good software developers are in high demand and command good salaries.

    • I don't think the number of bugs can really be planned. The number on the list is a function of the number of bugs identified (an unknown) and the speed of correcting those bugs higher on the priority list (an unknown) vs. the bandwidth of the programming team (should be a known). I'm assuming that they don't mean that they'll just write software that has exactly 15 bugs, or that they'll stop looking after 15 bugs. Rather, I would guess that they mean that no matter how long the list is, they'll cut a new release every 15 critical bug fixes. If those leading the development project have trouble prioritizing and accurately determining the scale of the bugs, then I think this approach makes sense. If, on the other hand, the project manager is unbiased and fairly accurate in determining the severity of bugs, then I don't think this approach makes a lot of sense. The stop-ship bugs get fixed and the others can be handled in a follow-up release. There are some bugs that must stop shipment, no matter how many of them are found. What I see all too often is that the bug master will prioritize more based on his or her wishes than on true user impact. When this happens, you get minor pet-peeve bugs high on the list, and it becomes difficult to keep to anything close to a schedule and difficult to track the real stop-ship bugs. A disciplined code freeze system and an accurate prioritization scheme can go a long way toward mitigating the never-ending-schedule issue. Folks just need to look long and hard at the bug list and slip anything truly not critical to a subsequent release. The 15-bug approach sounds admirable, but shipping a project with five critical bugs remaining, because there were 20, won't help users or the company in the long run.

    • I wasn't home-schooled, nor were my children. My father was a teacher in the public school system. As my children have grown, I've gotten to know a number of their friends who have been home-schooled. I've also seen teams of home-schooled kids compete in the FIRST robotics tournaments. As far as I can see, home-schooling can provide a very high quality education. Public schools can also provide the same education, although with increasing numbers of students per class, it's becoming more of a challenge to reach those that are less than self-motivated. Based on the home-school families I've seen, many of them will select a parent who has a certain area of expertise and gather a few students together for that subject. The kids also frequently have access to sports teams at the local schools. I see both of those trends as mitigating the two major concerns that I hear about the home-school process: limited social interaction and lack of complex subject expertise by the teaching parents. Like many things, I suspect that most (but not all) of the fear and complaining comes from folks without any experience in the subject. I don't know if it's that way everywhere. Home-schooling isn't really scalable - not everyone has the luxury of spending that much time. We have to work. Nor is it effective for everyone. The important message with it, as well as with distance learning and "iPad education", is that there are a lot of effective methods of teaching / learning. Not all fit everyone or all situations. Education is a complex problem. The more options we have, the better.

    • I'd say that, like so many other things in life, web-based education delivery has some places where it makes sense and others where it does not. A lecture hall with 400 students taught by a teacher's aide can not be more effective than a quality web-based class. If the instructor is off trying to get published while an aide is teaching the class, consider making the creation of an online course equivalent in status to a published paper. On the other hand, that same web-based instruction would be wholly inadequate as a replacement for smaller, subject-specific classes where a good portion of the value comes from collaboration with the instructor and with fellow students.

    • "Social networks don't waste time, people waste time." Bumper sticker #2: "They can take my social network when they pry it from my cold, bored fingers." They're just tools. It's up to the individual to use the tool wisely or wastefully. They do have an ability to aggregate a lot of time wasters together for a common cause, but I don't think inefficiency and poor time management are at all new phenomena.

    • I like the characterization of LinkedIn as a "cloud-based self-updating address book". I think that works for Facebook too, although you can do a bit more with Facebook and it tends to be less professionally oriented. Twitter is the odd one, and it took me a while to determine its usefulness. I've settled on it as a vehicle for getting news bites in a couple of areas that I'm interested in. I try not to follow people who tweet or re-tweet too often, and I don't follow many companies. I may be a bit hypocritical on that last point, as I represent my company (@pcbassembly) on Twitter. I do try to keep on subject, though, and I do my best to keep a high value-to-glurge ratio. I'm still not completely sold on Twitter, but I've gotten to the point that I can accept that it might be useful to me long-term.

    • One of the factors that I find interesting about this stadium, as well as construction projects in general, is the correlation between extravagance and the economy: from sports stadiums to public works to family homes. Pick construction from a specific era and look at the relationship between form and function. Are houses built to a minimum practical size, or are there a lot of 3,000 square foot homes for families of four? Do freeway bridges have a lot of non-functional design work added, or are they just basic concrete and steel? Are sports stadiums just big, drab people-holders, or are they centers of opulence like this stadium?

    • "How many teenagers ever stop to think about the design efforts poured into that iPod or Wii?" I have a son near the age of entering college. He and a number of his friends are heading towards technical careers. One of the problems with young folks and engineering these days is the distance between the outside of an engineered product and the amount of work it takes to get there. Many of the young folks I've run across as a parent want to be video game designers. Many years ago, it was exciting to get a computer to do anything, like display "hello world". But getting it to do so, and do a lot of other things, was within the reach of an individual. I don't think most people realize the amount of work that goes into creating a video game, iPod or cell phone. New devices are so complex as to fall into the classification of "magic", and that can obscure the amount of work that goes into them. If someone thinks that being good at playing a game makes them qualified to create it, they obviously don't understand the scope of the project. People as a whole likely look at technology products and just assume that someone "does some stuff" and then the thing works and you buy it for thirty bucks. That reduces the amount of respect given to the designers.

    • In my experience, engineer types tend to look at women (or men, as the case may be) as people with interesting and engaging qualities rather than objects. Certainly there are exceptions, but in the case of most engineers and other techie type folks that I've run across, substance is the most important aspect of a person. I would expect that that alone is enough to reduce the divorce rate and increase the quality of married life. It can certainly be a problem, however, that many tech-types find it easier to communicate with a machine than with a member of the opposite gender. I think that's largely an issue of self-confidence in the human interaction arena. Sadly, the rules for machine interaction don't necessarily apply well to inter-human relationships.

    • In the high-end amateur and professional markets, we'll see some amazingly high quality cameras. In those segments, users will look beyond the simple megapixels marketing number and buy on resolution, dynamic range, SNR, speed and optical characteristics, and the manufacturers will keep improving in all of those areas. For the point-and-shoot consumer, one only has to look at the past popularity of the 110 and 126 film cameras to get a strong feeling that phone cameras are already pretty close to what will be mass-acceptable. My 3-megapixel phone camera isn't quite up to 110 film standards in all respects yet, but it's pretty close in most areas. The real key to the masses is a minimum quality level (fairly low by my standards) coupled with extreme ease of use. I suppose we could debate the question of what is the mass-acceptable feature and quality set for quite a while. Time will tell.

    • Nathan, I really struggle with this one. I'm in favor of people being able to make a living off of what they own or have created, but it seems like there is something wrong in there mixed in with the right. In your house analogy, I don't have a problem with people buying houses without intention to live in them, but I have discomfort with the practice of turning loan portfolios into traded securities - especially when poorly used and regulated as happened in the recent past. Due to the complexity of the patent system (as well as the real estate financial system), I find it pretty difficult to articulate my concerns in this area. I do believe that the patent system was designed to encourage invention and advancement. It just seems like there is a good amount of the opposite going on as well.

    • I really don't think the patent system was created to fund people that have no interest in developing technology (or other products). I'm a little mixed on the subject though. If someone invents something, they should be able to profit from that invention whether they can afford to build and market it or not. If these organizations help inventors do that, then I support them in that particular activity. On the other hand, companies that do nothing but buy up patent portfolios from other companies or defunct companies and run around suing people as their only source of income... I think there's a lot of room for exploitation there. Especially in cases where an item has been independently invented or is so obvious that a number of people came up with it.

    • Years ago, who would have thought that you could build an electronic music device small and inexpensive enough to put into a greeting card? Now, you can buy cards just about anywhere that play music when opened. What does that have to do with this article? Well, the iGrill may be just the first step. It won't be long before those little metal pop-up things that tell you when your turkey is done can be replaced by a little wireless temperature sensor. How long before the whole sensor, power and transmitter cost a buck or less? The turkey factories could insert them during manufacture, just like they do with the pop-ups we have now. Certainly, if you could do that, it wouldn't be a stretch to see the same little sensors sold to consumers to plug into roasts or other foods as well. In terms of whether this is a solution waiting for a problem, I don't see it as such. Salmonella doesn't seem to be as common these days as it was a few years ago, but such a device could help to further reduce that ailment, while at the same time giving cooks more control over the cooking process.

    • Where do you find MS's greatest successes? MSDOS was built on top of QDOS, purchased from another company. Windows followed the Mac and Lisa OS (and the Xerox Star). IE really followed the Netscape browser. The X-Box followed generations of game consoles. I'm not making a statement about "good vs. bad" relative to this approach, just that following and improving is what they have done best. That's where MS has succeeded to the greatest degree in the past. They have the money, person-power and marketing muscle to look very closely at what others are doing, see what customers do with it and then build a product that will work well for the masses. They haven't always been successful with this approach (phones, music players), but if they keep at it, they will build a broadly used product. Given that, I can see them dabbling in ARM product OSs for a while and then diving in full force once the market settles into real-world use for the products. The past few years, as well as what happens with ARM OSs in the next few years, is really not relevant to MS's ultimate success in that market.

    • I'd say that the dedicated motion video camera, or hybrids that morphed from motion video cameras, will be the first to go. You can already see this happening. "Still" cameras are well on the way to morphing into hybrid cameras, and the selection of traditional motion video cameras is dropping radically. A few years down the line, cell phone cameras will very likely replace almost all of the point & shoot still and motion video cameras. We'll end up with a few ultra-cheap handheld video or still cameras, a set of high-end amateur still-image-derived hybrids and all manner of specialized professional cameras. #1, Full HD is inevitable, if not mostly here. #2, Widely adopted 3D - the jury's still out as far as I'm concerned. I doubt we'll see serious movement in 2011 here; we'll still be in the early adopter stage. I'd say that in a few more years, when the cameras all have more serious processors and DSPs, we'll likely see most if not all cameras with two lenses. The CODECs will simply render the video output based on whatever display device is being used. #3, The new generation CODEC - I haven't heard much noise about movement in this area. I do see that it is necessary, especially if capturing of 3D data becomes commonplace. #4, Compact interchangeable lens cameras - maybe in the advanced amateur or low-end pro market, but I can't see this as a broad-based consumer product. It's been tried before, back in the film days. Certainly new technology has removed many of the mechanical limitations, but the point & shoot practicality just isn't there. Most folks don't want to hassle with it. #5, Smooth video under all lighting conditions - this really needs to happen, along with greater dynamic range in still image capture. It will happen as more powerful processors and DSPs drop in cost.

    • I disagree with the premise that Microsoft Office is not a requirement for broad adoption of a Windows tablet OS. Regardless of any amount or lack of suitability, Windows is used by the masses, and the masses need transparent interoperability. People have enough trouble just managing file location and duplication with multiple devices. Add in format differences between a tablet and a full PC and the problem will only be much worse. If tablets become inexpensive enough to be single-use devices (web browsing only, entertainment only, etc.), then perhaps it would work without office application compatibility, but as anything close to a replacement for a netbook/notebook or as any kind of a productivity tool, it just won't fly without it.

    • Even if Moore's law has followed the exponential growth indicated by the law of accelerating returns, we shouldn't assume and depend on advancement continuing at that pace forever. At some point, the curve reverses and we end up moving to a point of diminishing returns (until the next game-changing breakthrough). It's already happening if you consider clock speed. Until recently, clock speeds were increasing as fast as anything else described by Moore's law, but they have essentially leveled out. It happened with aircraft speed, with automobile horsepower, and it will happen with chip density and all other aspects of semiconductors. At that point, we'll have to adapt to a world of incremental improvements rather than one of constant dramatic improvement.

    • Kenneth - You are correct. There has been an ARM version of Windows CE for quite some time. I used it back in 2001. It had limited compatibility, and all of the office applications had very limited functionality. Files had to be converted both ways, which essentially limited the devices to being note-takers, advanced calculators and PDAs. I would assume that this announcement relates to a more full-featured version of Windows and Windows applications, with full file-format compatibility. At this point, anything less would likely result in a ho-hum response and limited sales.

    • I don't think Microsoft has a choice but to port Windows to ARM. If they don't, they will be giving up a potentially significant market segment. Many tablets are and will be ARM-based. If they really catch on, someone will try to own the OS space and there are already a few contenders. Intel is trying to get into the tablet arena with the Atom, but how well they do remains to be seen. MS can't rely on Intel to create an opening for them in that space.

    • The proliferation of open source and inexpensive tools along with easy to use microcontrollers has really dropped the barrier to entry for a lot of science and technology. Years ago when parts were big and thru-hole and easy to breadboard, one could get into electronics easily with a small budget. It's the same way today except the microcontrollers are much more powerful than just about anything you could build with discrete logic chips. Further, there are programs like the FIRST Lego league robotics competitions that help to make technology more approachable and more acceptable by the masses. I'd say that we're in a renaissance of sorts like we were back in the late 1970's. Many other technical disciplines benefit from electronics control and are therefore participating as well.

    • Is the difference between the handling of wireless and wired due to wireless having a strong lobby, all with a similar agenda, while the wired providers are still too fragmented to present a common voice? It seems like that wireless exemption is designed to allow each wireless carrier to keep their own semi-walled gardens which just makes life for consumers more difficult.

    • I suspect that we'll see a lot from electronic paper in 2011. It's been in a couple of e-readers for a few years now and I'd guess it's about time for it to take the next step with increased response speeds and reduced costs. I'm not sure color will be available in 2011 though. Maybe 2012. With those improvements, a new set of applications will open up in the coming year. We may not see many of the commercial products, but we'll see the raw displays come out ready to be put into commercial products. And, just what are those new applications? Too many to list, but I can speculate on a few. Modal safety and regulatory labeling. Labeling has become less and less effective as there are more and more things to warn about. However, quite often, the warnings needed are different depending on the state of the device (off, standby, full-on, etc.) By putting labels for each possible condition, all of them can become lost in the noise. With e-paper, the warnings can be modal and thus very prominent and customized for each needed condition. Instructional and operational check-lists. This application isn't that different than e-readers, but it could benefit from specialization for the task. With costs down, that specialized product could be commercially viable. Advertising. We're already seeing a few early-adopter forays into e-paper advertising, but again, with a reduction in costs, it will proliferate into all sorts of annoying and intrusive places.

    • On the web, it's standard practice to release code frequently and with issues that will be solved later. And the definition of "later" does not need to exist at time of release. It will be ready when it's ready. While I'd like to see a greater level of completeness than that sometimes delivers, it seems to work online. The practice even seems to have moved to software distributed on CD/DVD as well as console games. To me, it doesn't seem right to buy a new piece of software and have to update it during or right after the install process. Again, despite my reservations, it does seem to work. The difference between those consumer applications and Google TV is that Google is now not just dealing with consumers who can individually choose to use unfinished software, wait or turn elsewhere. Google is dealing with corporations that have invested development money, marketing money and merchandising money. It's not just a matter of missing the launch window. That's bad enough, but perhaps millions of dollars of other people's money is at risk.

    • Smaller die geometries lead to smaller physical packages. Each time this happens, or there's a new package-of-the-day, we (Screaming Circuits - we assemble prototypes) get a flurry of PCBs with the new package that still follow the old rules. QFNs popped up as a common problem a few years back, and many, many designers struggled because of the layout challenges, thermal challenges and solder paste layer challenges. Now most folks seem to be doing well with QFNs. The most recent hit seems to be 0.4mm pitch BGAs, with some package-on-package thrown in for good measure. In some cases, the old rules for non-soldermask-defined vs. soldermask-defined BGA pads change at such small pitches. Vias may have to be in the pads to allow for escape routing with 0.4mm pitch parts (make sure the vias are filled and plated over, please). It's not just a matter of moving to a smaller trace and space in your CAD package. Some of the physical rules change, as do the electrical, as called out by Code Monkey.

    • "OnLive has filled the global patent pipeline with a portfolio of hundreds of applications with thousands of claims" Perhaps this is part of the problem. I don't think that the patent system was designed for people to flood it with applications. I don't know enough about OnLive to make a personal judgment on the validity of their claims, but I can't imagine that hundreds of patents are necessary to protect an investment. It really doesn't matter how many examiners the PTO hires if those of us in industry actively and aggressively overload the system.

    • The article is pretty brief, but it is worth the quick read. It's interesting to watch Intel push chip size and power consumption down into the handheld arena while, at the same time, ARM is pushing up into it. ARM is, of course, already there, but they'll have to keep moving fast as handheld devices increase in power. It's interesting that Intel was in that arena when they had the StrongARM and XScale lines. Now that Intel is entering back into that market with an x86 processor, I can see another "Intel vs. AMD" type race brewing.

    • The super-thin client approach has been tried before. Perhaps now the time is finally right? Still, I don't think this is the death knell for Microsoft any more than Linux was. It's a matter of tools best suited to particular jobs and markets. MS will be around for the masses for a long time to come. The Chrome OS will hopefully cause them to keep improving their product. I think Linux has done a good job of doing that. My suspicion is that the "phone" will be the eventual big challenge for Microsoft. They haven't been able to really get a solid hold in that market, and yet the phone-type devices are starting to take core-computing tasks away from the PC, such as email and web browsing.

    • To date, I haven't lost any USB drives containing corporate data, but I have lost a few of my own. Fortunately, the ones I've lost so far are old ones that just had a few songs or pictures on them. My kids have lost a few with their homework assignments on them. An early lesson on the value of a backup. It does worry me though. I really should find some security software. The problem, as I see it, is that encryption software is still pretty obscure to the mainstream. It doesn't, by a long shot, pass the grandmother test.

    • In the short term, this country-to-country interdependence that's making Korea watchers so nervous makes all of our economies very vulnerable to conflict or other sources of disruption around the world. In the long term, such interdependence is a good thing. It makes our world smaller and closer together, and hopefully someday it will make wars simply too expensive and completely impractical for everyone. The problem we have now is that there are still too many countries not included in this globalization. Granted, it's as often as not their choice, but my belief is that the more people we can get tightly wrapped up in this world-wide economic web, the better off we will all be.

    • I don't drive a semi-truck. I would have absolutely no use for a semi-truck. If I were to get one, I'd be paying way too much money to purchase and to operate a tool wholly unsuited to my needs. But I understand that semi-trucks play an important role in our world. And if my career were driving trucks, it would be a different story. I think that's essentially what a lot of the engineers who are down on social media are saying. It's not that engineer types are really averse to new tools. It's that engineer types are averse to using tools unsuited to their needs just because "everyone is using it." There is a lot of societal pressure toward adopting social media because it's popular. Find a use for social media that makes an engineer's life easier, and I bet the adoption will come swiftly. Now, as SiliconCowboy suggests, I do think it is very important to at least study the end use of these technologies for the purpose of better developing products.

    • I can see a lot of "potential" in Twitter, but the practicalities of the actual real world tend to get in the way. For example, schools could use Twitter to announce school closures and important events. Real world: A large majority of parents would have to sign up to make it useful. It would have to be limited to authorized parents for safety reasons. With more than just a few feeds, most of the important stuff would get lost in the noise. And - email works just fine. It allows more content and the school can control who gets the message. Conclusion: Twitter is not ready for prime time in this application. News organizations or local governments could use Twitter for special bulletins. Real world: unless people only subscribed to a very small number of feeds, it would still be so cluttered that all of the important bulletins would be missed. Conclusion: Twitter is not ready for prime time in this application. Marketing people could use it for announcing new products. Don't we have enough of this already? Just more clutter. Conclusion: Twitter is not wanted for this application. Bloggers could use it to announce their blog posts (I use it for this). Realistically, an RSS feed could do the same thing. Conclusion: the jury's still out on this one. People could announce to the world that they ate a banana. Conclusion: need I even comment on this one? And what about the people that follow like a thousand Twitterers? There's no way you could get anything useful out of that, other than the occasional random lucky find. My conclusion is that while the technology is interesting, and I can see practical applications under specific circumstances, as is it is mostly not practical in a professional setting.

    • Certainly standards are important, but I don't think that the lack of these standards is a first-order, or even a second-order, factor in the limited level of adoption. I can buy a terabyte spinning-media HD for $60.00. The cheapest SSD I found (in my three minute research project) was $60.00 for a 4GB device. On the same site, I found a 128GB SSD ranging in price from $239 to $469. That's more than 30 times the cost per byte at best. When the SSDs are in the range of two or three times the cost per byte, then things like standards start to be a part of the decision making process. There are applications with very specific requirements that override the 30X cost factor, but I really doubt that standards, again, factor into those decisions.
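For what it's worth, the cost-per-byte comparison works out like this. The prices are the 2010-era figures quoted in the comment, not current data:

```python
# Rough cost-per-GB comparison, using the prices quoted above
# (2010-era figures from the comment, not current market data).
hdd_price, hdd_gb = 60.00, 1000.0    # 1 TB spinning-media hard disk
ssd_price, ssd_gb = 239.00, 128.0    # cheapest 128 GB SSD found

hdd_cost_per_gb = hdd_price / hdd_gb   # $0.06 / GB
ssd_cost_per_gb = ssd_price / ssd_gb   # ~$1.87 / GB

ratio = ssd_cost_per_gb / hdd_cost_per_gb
print(f"SSD costs about {ratio:.0f}x more per GB")   # ~31x
```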

    • Anyone remember Bubble Memory? It was going to replace both DRAM and the hard disk. Then there was some talk of EEPROM doing the same. The problem is that the primary requirements for each stand on either side of a wide divide - cheap enough for mass storage vs. fast enough for processing. By trying to do both with one part, you end up with something that does neither well. I suppose it's logical to assume that someday we will have a non-volatile memory technology that is both cheap enough to store massive amounts of video and fast enough to hook to a CPU, but I'm not holding my breath.

    • Eventually, the "phone" device will begin to replace desktop PCs. A person will be able to carry all of their computing capacity and all of their local data along with them in their hyper-phone (Super-duper phone?). As the processing capability of low-power CPUs grows, it will reach a point at which it will be able to run word processing, spreadsheet, presentation, video and the other standard productivity applications. A few things need to happen first though. Obviously, the low-power CPUs must get faster and/or the fast CPUs must drop in power consumption. That's probably the easy part. We'll need wireless input and output devices. Yes, you can watch a movie on a 3" screen, but that won't do in the case of a primary computing device; seamless wireless technology for text entry, software navigation and display are prerequisites. Next are some real standards - and the phone companies can't be in control of the UI/user experience. It won't work unless there is one format for each document type that crosses between hyper-phone, home desktop and work desktop, of every brand. Remember the days when there were thirty different word processing programs, each with their own file format and command structure? Not going to work. There will still be conventional desktops and laptops, but they will diminish in quantity and migrate toward specialized and maximum-performance applications.

    • It's interesting how centralized much of our consumption has become. Not just in electronics, but pretty much everything. One case of salmonella can disrupt vegetables or ground beef in half the country. Problems in South Korea could disrupt how much of the DRAM supply? Of course, there is also the Middle East and oil. How much chip fab capacity would go with troubles in Taiwan? It's a very interdependent world these days.

    • I would assert that efficiency has already been added into Moore's Law. That's how we've been able to create more powerful laptops with increasing battery life. That's why we have the Atom chip. ARM is doing a great service in this regard by continuing to increase computing power while keeping power consumption low. The competition with ARM will keep Intel focused on continuing to lower the electrical power required per unit of computing power. The consumer demand for longer battery life will keep everyone going in that direction. The simple limitations of cooling will keep desktop processors moving down that scale.

    • Ultimately, the job of a corporation is to make money. However, that doesn't mean that it can't or shouldn't have a soul. There are always lines that should not be crossed. So what happens when cigarette smoking declines in the US? Should the tobacco companies decide that profit is more important than ethics and just push their products harder outside of the US? That's what a profit-first, profit-only organization would do. Would the tobacco companies even survive if they decided that smoking is bad and stopped selling cigarettes? I'm not going to suggest any more or less legislation in that area, but the point is that there are plenty of companies in existence now that have dramatically changed their business model. Coca Cola didn't go out of business when they stopped putting cocaine into their drink a century ago. Sometimes ethics are forced upon companies by consumers or the government and sometimes companies live by high ethics on their own. Regardless of whether it's by choice or by law, companies do exist and thrive with a soul. In fact, it's probably much less expensive to put ethics into your corporate culture by choice than let it be forced in by law. In the extreme of doing business without soul, you get Enron type debacles or Bhopal type disasters.

    • There are features you can use and features you can sell. They are often not the same thing. Right now, 3D, in my opinion, lies much deeper in the "features you can sell" end of the spectrum. Whether it will ever make the transition to the other side remains to be seen. If the market ends up demanding that transition, then the 3D features will get better and better to the point of not making people sick, and less and less expensive to the point of becoming a standard feature. An interesting aspect of the debate that I rarely see in print is the effect on spatial development. We older folks developed our visual and motor skills in a real 3D world. The younger generation that grew up more with video games than with baseballs has developed a much greater ability to see things in 2D and translate them into 3D in their heads. That's why the video game generation is much better suited for tasks like remote surgery or aerial drone operation than are the generations prior. If computer systems and televisions do end up going 3D, that advantage will turn into a disadvantage, I suppose.

    • "the rate at which the fluid level in a martini glass will go down" One of the exercises I did back in chemistry class was to try to calculate how deep a mole of moles would cover the Earth. This didn't take much calculus, if any, and I had to make a number of assumptions, such as the specific species and the average volume of said mole. If I remember correctly, the volume could be estimated without calculus. I don't recall my answer. I might have to dust off a lot of brain cobwebs and try to re-do the calculation sometime. Exercises like the martini glass and a mole of moles just make the study of math more fun. Neither are particularly relevant to the real world, but they give a creative mind something to hang on to. If that makes math more interesting, then it's a good thing. I did lousy in calculus, but I loved it nonetheless. It gives you a new way of thinking and a new way of looking at the world. That's where I found, and still find, the most value. With math, even if we don't use the calculations, we can start to see beneath things. We look at inter-relationships, logic, cause and effect - all from a different and more enlightened perspective. All that, and you can figure out whether you get more by getting one large pizza vs. two mediums. That's important.
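Out of curiosity, the mole-of-moles estimate can be roughed out with a few lines of arithmetic and no calculus. The animal figures below (average mass, density) are my own assumptions for illustration, not numbers from the original exercise:

```python
import math

# Back-of-envelope: how deep would a mole (6.022e23) of moles (the animal)
# cover the Earth? Assumptions (mine, not from the exercise): one mole
# weighs ~75 g and has roughly the density of water.
AVOGADRO = 6.022e23            # count of animals
mole_mass_kg = 0.075           # assumed average mass of one mole
mole_density_kg_m3 = 1000.0    # roughly water
earth_radius_m = 6.371e6

mole_volume = mole_mass_kg / mole_density_kg_m3   # ~7.5e-5 m^3 per animal
total_volume = AVOGADRO * mole_volume             # ~4.5e19 m^3 of mole
earth_surface = 4 * math.pi * earth_radius_m**2   # ~5.1e14 m^2

depth_km = total_volume / earth_surface / 1000.0
print(f"Layer roughly {depth_km:.0f} km deep")    # on the order of 80-90 km
```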

    • On the high-end, Intel processors are probably much better due to competition with AMD than they otherwise would be. On the low-end, perhaps both the Atom and competitive ARM processors will end up better due to the competition. When a company says "we welcome the competition", I don't really believe them. But for the consumer, having alternatives generally makes the companies work harder which in the end produces a better product. That's something I certainly welcome. I can see how losing a Google product to ARM could create the same type of quality/performance/value/power-consumption improvements in the Atom and ARM alternatives that we've seen in desktop processors over the years.

    • Let's see. What electronics could I use for Christmas? I still have an old CRT (remember those) TV set, but the kids pretty much watch everything on the Internet and Mythbusters is just fine on a small, old set, so I don't really need a new one. As far as 3D TV goes, I think I'll wait a bit for the shake-out and standards. My PC is waaaay out of date, so maybe that. It's a 1.8GHz P4. Yikes! Well, I don't have time these days to play the power-hog games with my kids, so I don't really need much more processing power. What I have is fine for office apps, Eagle CAD and my Microchip dev tools. Okay. Let's look at my camera. I have a 6 MP Canon with a really sweet high-quality optical zoom. It's fast and the picture quality is great. It wouldn't satisfy a purist, but I'm not trying to do that. My cell phone? I'm not an iPhone or Android guy. I really don't like the idea of a touch screen on something that I use in environments where motion may jumble my fat fingers when trying to push a "button." Don't need that. Crud. I'm running out of consumer electronics self-gift ideas. A laptop? Well, my work supplies me with one of those, so I don't need one at home. The little minis are cool, but with generally only 1 Gig of RAM, I'm pretty concerned about their ability to handle the newer bloatier web sites. Well, shoot. I give up. I'll just buy myself some good chocolate and call it good. Or maybe send my kids to college. Perhaps this is one of the reasons for the CE slump. The recession is still there, but maybe a whole lot of people just have what they need already.

    • Another factor in the 8 vs 32 vs 64 vs... discussion is the total system complexity. You can start with an idea and a few parts and have a PIC or AVR project up and running in just a few hours - design, prototype and coding. The barrier to entry is very low. It's not just seasoned embedded engineers working with these things. Engineers from other disciplines are more and more frequently being tasked with adding a bit of electronic control to their old mechanical devices. In cases like that, that low barrier to entry can be the difference between success and a failed product. With an ARM, you can certainly do a lot more than with an 8-bit processor, and the chip prices can be pretty comparable, but you need to spin a PCB or get a development board before you can do much of anything. You frequently need to worry about things like line-level conversion and higher speed PCB design. Not that those are uncommon issues, but they do add time and cost to the project over and above just the chip price.

    • Yes, the power companies make money selling power. But as new2coding stated, load leveling can improve their return on investment. By shifting 20% of the power load from the near-capacity day time to the lots-of-excess-capacity night time, the power companies can sell more power with the same amount of infrastructure. That equals more money without adding expensive capacity. The smart grid, in theory, would also give the power companies greater flexibility in setting pricing based on the specific time, and possibly even on the specific usage. That flexibility would allow price tweaking to improve the financial model even further. As I like to say "there are features you can use and there are features you can sell." The power companies likely are eager for the smart grid primarily for the profit-oriented features such as load leveling and greater pricing flexibility. They present it to the public as a green thing, because "green" sells much better than "we can make more money." There are (potentially) green benefits, but that doesn't mean that green is why the power companies are in favor of smart grid.
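The load-leveling arithmetic described above can be sketched with made-up numbers. Capacity and load figures here are purely illustrative, not utility data:

```python
# Toy model of the 20% day-to-night load shift described above.
# All numbers are invented for illustration.
capacity = 100.0                    # generating capacity, arbitrary units
day_load, night_load = 95.0, 40.0   # near-capacity day, lots of slack at night

shifted = 0.20 * day_load           # move 20% of daytime demand to night
day_after = day_load - shifted      # 76.0 -> daytime headroom grows to 24
night_after = night_load + shifted  # 59.0 -> still well under capacity

# Peak-time headroom freed up is demand the utility can now serve
# (i.e., power it can sell) without building new plants:
extra_sellable = (capacity - day_after) - (capacity - day_load)
print(extra_sellable)               # 19.0 units of new peak-time capacity
```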

    • Remember the old pull-down projector screens in class rooms and conference rooms? Those settings would require somewhat larger versions of the roll-up display, but the usefulness would be the same. Have the display out when you need it. Hide it when you don't. For smaller systems, like phones and notebooks, a roll-up screen could dramatically improve portability as well as reduce weight. The only downside is that for all of those folks browsing the web on mini-smartphone screens, the market for reading glasses could go away with a light-weight, roll-up display.

    • Re: "What if a Russian hack-athon brought down 20 million U.S. homes..." Scenario: It's a cool spring Saturday morning. New neighbors with teenage kids move in next door... Circa 1956: "I hope they don't put a baseball through my window." 1976: "I hope they don't play their music loud late at night." 1996: "I hope they don't sell crack out of their garage." 2006: "I hope they don't piggyback on my WiFi and download bad stuff." 2016: "I hope they don't hack my house." Am I a cynic or a realist? I can't always tell.

    • re: the H1-B comment - One of the strongest competitive advantages that this country has had throughout the years is the diversity of its brain-trust. Getting folks in from all over the world brings new and fresh ideas in. It keeps us locals from getting complacent - competition makes people work harder. I suspect there is plenty wrong with the H1-B program, as there is with most government programs, but that's why in this country we value the individual over the government. That's why we have an inherent limitation in our level of trust in the government. Still, one of the major values of the H1-B program is that it brings very bright folks over here to contribute to society, pay taxes and help companies to be more successful.

    • Adherence to coding standards is great advice, as are code reviews. But what do you do if you're an independent or a one-person software team? Obviously, following best practices is vital in that case, as in all cases. If you or your company have the money to hire out for a code review, problem solved. But what do you do if, logistically or financially, those aren't options? No programmer should ever be forced into a "No time to do it right, but time enough to do it over" scenario, but the real world all too often has other ideas. Back through the way-back machine, I found myself in such a situation every now and then. No one to look over my shoulder and no resources to hire out a code review. I'd typically take my code to a friend or family member. They didn't have programming experience, but I'd walk through my code, explaining how each function worked. I certainly wasn't going to get them to point out a flaw, but just through the act of explaining things to someone, I would frequently find a large number of my errors or usage of lousy coding practices. Anyone have any thoughts on how to best hunt down bugs in such a non-optimal environment where peer or independent code reviews are not an option?

    • I tend to fidget a lot. I also sometimes mumble and I use hand gestures while I speak. So, that means that as I ride my Segway down the road, my natural tendency to not stay still will cause me to veer out into traffic. My mumbling and speaking off topic will confuse the voice recognition that has taken over all of the automated phone systems. And soon, with gesture recognition, I'll be accidentally browsing to my bank and transferring all of my money to Nigeria every time I get into an emotional conversation.

    • I'll be very interested to see if HP follows the current trend by using lower-end hardware that is moving up in capability vs. moving higher-end PC type technology down. So here's a question: As the ARM-based systems move up in capabilities and the Intel Atom-based systems move down in power consumption and price, will we end up with a "format war" as we did with Beta/VHS, Blu-ray/HD DVD, Apple vs. Microsoft? Does one have to win? Or is there room for both?

    • If my math is correct, at the age of 62, Jack got a new job with a major industry player. At the age of 67, he jumped on an airplane and headed into the middle of an earthquake disaster. And he did such things in an industry that at that time was in turmoil. I didn't have the pleasure of meeting Mr. Robertson, but based on what I've read here, I think we could and should all look to his example for inspiration. We talk about the aging of the engineering population and how many of those aging engineers are being tossed aside for younger folks or for cost cutting. Here's someone who could have accepted the twilight of his career, letting the new technology brush him aside. But he didn't. He seems to have attacked the world, in his 60's, with the same gusto that he did years prior. If we all did that, each and every one of us could add a decade or more to our productive career years (if we wanted to). When we're young, we tend to plow through life as if it needs and wants us to do so. As we move past the time when we can be called "young", we tend to reverse that and we start to spend more time navigating and avoiding dangers. Here's a person who kept plowing through and contributing long past the point at which most people give up.

    • In some ways electronics has become more complex, not so in others. Years ago, my "thing" was to experiment with crystal radios based on the old 1N914 diode. You can't get much simpler than that. Today, with the world of highly integrated devices and microcontrollers, the barrier to entry is a bit higher. I don't think it's insurmountable though. You can still build single-transistor inverters and 555 timer flashers on a proto board without much effort. You can also build a lot of things with simple 8-bit microcontrollers plus just a few external components. From there, it's only a short distance to a robot, provided you're willing to learn a little software too. I'd say the initial barrier is somewhat higher, but after that, what you can build for a given amount of time and/or money is dramatically greater than it was back in my crystal radio days. There are some tool sets, like Lego robots, that make the entry even easier. One of the key differences today is in the number of alternatives available for youngsters to spend their time on in place of experimentation, and the need for instant gratification. Thirty years ago, a blinking LED was pretty leading edge, but now everyone wants to be a video game designer and create a new FPS as their first project. It can be a setup for a big let-down once reality hits. Companies like Adafruit and Sparkfun seem to be leading the way toward making hobby electronics accessible again to the entry level experimenter. They produce very simple (as well as complex) products. They put tiny SMT parts on big break-out boards. They provide great tutorials. It is different than it was, but I think electronics is getting back to a place where youngsters can fairly easily get engaged and involved again.

    • [hot electron - "they haven't found yet a way to build bridges in China and ship them to the US." Actually, if you look at the new Bay Bridge between San Francisco and Oakland, CA, you'll find that big chunks of the bridge have been built off shore and shipped here.] Still though, we can all give up and crawl under a rock or we can find our spirit and sense of wonder at the world again and set our minds to finding ways to make engineering a viable career choice again. The government isn't going to bail us out. Big industry isn't going to give us security. Only we can. I was a kid in the 70's and started my career in the 80's. Now THAT was a depressing economy. The gas crisis was new and fresh. It seemed like the Soviets were going to blow us up at any moment. Japan was going to clean our economic clock. And who can forget double-digit inflation followed up by near 20% mortgage interest rates. The American century was over. Dead. Gone. I know because I read it in the news. But what did we get out of that depressing time? Apple, Microsoft, Oracle, Adobe, Maxim, 3com, Xilinx, Atmel and others. So, if you're starving, discouraged, depressed, unemployable and in despair, find some other folks in the same spot and go found a robotics company or a solar power company or something like that. Find the true engineer in yourself and go engineer a new start for yourself. Then in a few years, you may find yourself making that build vs. outsource decision yourself.

    • I'm glad to read some encouraging comments here. I hear so many comments relative to not encouraging young people into engineering careers. Yes, there are those to the contrary, but there are too many discouraging ones. Yes, it's a tough economy, but what did we do the last time the world came to an end, or the time before that, or the time before that... Engineering isn't a stable career at the moment, but what is? Burger flipping? Not that either. The world has changed. We either adapt or get off the merry-go-round. Isn't that what engineers do best? Adapt. Every engineering challenge is really an adaptation of old knowledge and understanding to solve a new problem. We do need governmental encouragement like the USA Science & Engineering Festival. We need industry encouragement like the FIRST robotics league. We also need engineers, as a group and as individuals, to be more encouraging. Don't look at the lousy economy, the unstable employment environment, the rampant offshoring as reasons to get depressed and live under a bridge. Look at them as just another problem to solve - a case where we need to use old knowledge and understanding to solve a new problem. Engineers solve problems and fix things. If we want to thrive in the new economy, we must stop longing for the past or looking for someone else to make things better. The past is gone. We need to reinvent ourselves, start companies, find ways to rediscover our creativity or just do things differently. The solutions to the problems with the future of engineering may not be there, but that's only because we haven't created them yet.

    • Eventually solid state will likely replace most spinning media. The amount of time between now and then depends on a number of factors, including cost curve, reliability, and density. I do like that Apple took the chips out of the archaic HD module. In the near term, I can understand the value in making SSDs emulate spinning hard disks, but long-term, it seems pretty silly to force chip memory to emulate a form factor and interface that was developed with spinning media in mind. It's time to create a new interface standard specifically designed around the strengths and weaknesses of solid state mass storage.

    • I don't know that I'd classify the need as a need for "wireless charging." I'd say the need is for easy and convenient charging. Wireless certainly could be an easy and convenient charging method, but not the only one. I can see Toyota's point about it not being something consumers would pay for, even if they might want it. How long did we last with the charger for pretty much every single cell phone being different before agreeing on a standard? The general public is rarely willing to pay for a specific technology (barring significant generated hype). They will, however, pay for a solution to their specific problem, phrased from their perspective. And the public will put up with a lot of peripheral inconvenience to get something they want. Perhaps more important than wireless charging to the adoption of electric cars is the accessibility of charging stations in general and the charge time.

    • I love this quote: "But we did it, because engineers can just about do anything." I think that statement says pretty much all that is needed to understand the mind of an engineer. As far as the future goes, we just need to adapt to changing realities. My son is planning on starting a Computer Science degree program next fall. My advice to him is not to do something else or be afraid because of out-sourcing, but to consider getting a minor in EE to make himself more versatile and more marketable. What is hard to do, and what do you find fewer of as a result? Good analog engineers are much more rare than digital engineers - or so it might seem. I don't really think there is such a thing as a pure digital EE any more. Mixed signal is everywhere. So, if you like analog, study digital as well. If you like digital, study analog also. And for goodness sake, cross that barrier - HW people study SW and the reverse. Don't be afraid to throw in some mechanical engineering as well. That will help in robotics and a number of other embedded areas. If a kid's head is set on finance or communications or what have you, that's fine. But if a kid is of the engineering mindset, we don't need to be discouraging them. We need to be encouraging them to look at the world around us and make their marketability just another engineering problem, to be solved with the same can-do attitude exhibited by those Apollo engineers.

    • I'd say that Microsoft certainly has it in them to create an ARM version of Windows 8 (or Windows 9 or Windows Electra), but I don't think that's the point. A few decades ago, few people thought that IBM could build a personal computer, but they did, and they changed the world while they were at it. However, they don't sell them anymore. IBM is and was a big iron company. The real point comes from looking at what Microsoft is. They have built small system OS's like CE and their various phone OS's, but none have really had the legs to capture long-term, broad-based market share. As an outsider, I'd have to presume that Microsoft's core philosophy is to be the big all-things-to-everyone OS. It's ingrained in their DNA. Microsoft is the software equivalent of a big iron company. That's not a bad thing. It's just a different approach. My suspicion is that they will build the ARM OS, but it will stumble along as a low-market-share product, always way in the shadow of their flagship.

    • Regardless of how clear the title is or is not, we need more articles like this. I and others on EE Times may not find this information to be new because I immerse myself in the subject. But the rest of the world doesn't necessarily do so. I really like the style used to introduce the subject. It covers a wide variety of robot types as well as specific real-world application examples. Very clear and well written. I'd add another category, though: hobby and educational robots, or maybe add a bit about hobby/ed in with the development platforms. A good example would be the Lego robotics systems. Building one isn't the same as soldering a PCB and wiring up servos and such, but it gives a great introduction to the concepts of motion control and programming to the uninitiated. Further, with the FIRST Lego league, robotics is being treated like a sport in grade schools and middle schools across the country. It's competition for us geeks, with teams, tournaments and championships. It makes robotics real, exciting and accessible. Thanks for the good article. I'll be reading parts 2-4 when you post them.

    • Sometimes I think about the way things were back in "the good old days." For a few moments, it seems like it would be fun to go back to that golden age. But then my geek brain starts getting into the specific details. Like, if I wanted a book - anything but the most popular titles - I had to either drive an hour from my smallish town to get to a big bookstore in Portland, or order the book by mail. If I wanted to do one of my programming class assignments, I'd have to write the whole thing, then spend hours typing each line onto a punch card, then load the deck into the card reader and wait until the next morning to see if the thing worked. If I wanted to get in touch with a friend away at a different college, I'd have to pick up one of those things called a pen and write my thoughts out on paper to (postal) mail to him. I had to learn how to work on cars, along with everything else, because the only cars I could afford required tinkering every other weekend just to keep them on the road. If I wanted to watch a movie, I'd either have to catch it in the theater during its two-week run or wait and hope to catch it on TV, the one time it would be broadcast. I could go on, but no need. I'll just stay here in the present with all of the cool technology that's invaded our environment since way back then.

    • I suspect that such practice was pretty common well before we got to the point where "we can no longer expect performance to naturally increase or power dissipation to decrease..." In the company I worked for in the early '90s, we had PLL issues and regularly ran parts past their data sheet specifications. Of course, sometimes that did lead to problems, but by and large, it allowed the company to deliver a higher-performance product than would otherwise have been available. I can certainly understand the dilemma faced by component manufacturers. They want their parts to be chosen even in the most demanding of applications, but without that built-in headroom, they just don't know whether there will be problems or not. The application may be completely viable, but their customers are in test-pilot mode. We have to deal with the same thing here at Screaming Circuits. We can do an awful lot more than we promise, but outside of those promised parameters, the unknown rears its head and makes 100% certainty unrealistic. It's a bold and brave move for TI to release that white paper. Most companies won't do such a thing for fear that people won't read the disclaimers and will end up angry. I salute TI for publishing it.

    • Cars are first designed without seat belts until enough people fly out the windshield to cause some sort of safety regulation. Hard drugs are sold as cure-alls and tonics until enough people get addicted or die to cause drug regulation. Home electrical devices are built willy-nilly until enough houses burn down to cause serious regulation and safety oversight. People knew all of those things were dangerous way back when. They just chose to ignore the dangers, hoping it wouldn't be "too bad." The smart-grid and other smart-infrastructure is built on insecure consumer compute platforms until some hacker takes down the national power system, shuts off the sewage plants and water supplies and de-orbits satellites. We know what's going to happen, yet we don't take this issue any more seriously than did the early auto manufacturers. We as a species (it's not just one nationality or culture) have done this to ourselves all through history. Are we ever going to learn our lesson?

    • My guess is that somewhere around ten years from now, smartphones will become the dominant compute platform. They'll wirelessly connect to input and output devices for desktop use, gaming or device control. Their processors will be that powerful and that power-efficient. What we call conventional computers will be relegated to niche and maximum-performance applications. Smartphone UIs are far better than regular cell phone UIs, but they still have a ways to go. Fortunately, they have time - like a decade - to improve. We won't have to worry so much about user expectations messing with things when going from device to device to device, because most functions will live in one device. Your computer, phone, camera, entertainment, car controller, home controller, etc. will all share the same compact UI because they'll all be the same device. UI designers, you have your challenge: make one that is logical, easy to use and yet still covers all of those applications.

    • This is a strange recession. Companies are sitting on loads of money. There are lots of skilled unemployed, many of whom will never work in the same job again. Yet it still seems to be difficult to hire qualified people for many positions. More and more kids can't afford a college education, yet so many who get one can't find a job now. These same companies that have stockpiles of money are howling about unfair practices by our government and by other governments. Rather than finding solutions, blaming seems to be the popular theme of the day. They claim that the government has made it too expensive or that anti-competitive practices have made them non-competitive. I don't know that we can blame anyone but ourselves (the business community). We're the ones not investing in the future. We're the ones looking for a quick buck rather than pursuing long-term technology advancements. And we're the ones not insisting on adequate funding for education. I know there are points of hope. Out in Hillsboro, Oregon, in the areas surrounding Intel, the schools have great programs in robotics, electronics and software. Yet just twenty miles away, a high school laid off its single electronics teacher. I've read that despite our sliding position in the world's education rankings, the US spends more on education than any other nation, even the ones that exceed our results. I say quit complaining about that too. A lot of things cost more here. We make more and spend more. Yes, we need to keep an eye on efficient spending, but putting 35 students in primary and secondary school classrooms and pricing public universities beyond the financial means of most people is not going to help. If we really want to get out of this financial crisis and regain our technological leadership, we need to stop waiting for the government to bail us out and invest in education and innovation instead of letting those piles of cash sit and slowly diminish in value.

    • To further complicate the equation, as with programmable logic capabilities, the internal hardware peripheral set can have a pretty big impact too - for example, hardware SPI, I2C or PWM. All of those can be implemented in software via bit-banging, but having hardware do the work can significantly reduce the MIPS requirements. And along the lines of KarlS's comments about C, the choice of compiler can have an impact too. How well does your compiler optimize its code? Does it allow easy embedding of inline assembly? Food for thought to make an already complex decision process even more so.
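    To put a rough shape on that bit-banging cost, here is a minimal Python sketch of clocking one byte out of a software SPI master (mode 0, MSB-first). The GPIO pins are simulated with a dict and the `wire` capture list is made up for illustration - on a real MCU, each pin assignment is a register write costing several instructions per bit, work a hardware SPI block would do while the CPU moves on.

    ```python
    # Simulated GPIO: on real hardware these would be port register writes.
    pins = {"SCK": 0, "MOSI": 0}
    wire = []  # bits an attached device would sample on each rising clock edge

    def spi_write_byte(value):
        """Shift one byte out, MSB first, toggling the simulated pins."""
        for i in range(7, -1, -1):
            pins["MOSI"] = (value >> i) & 1   # put the data bit on the line
            pins["SCK"] = 1                   # rising edge: peripheral samples MOSI
            wire.append(pins["MOSI"])
            pins["SCK"] = 0                   # falling edge: ready for the next bit

    spi_write_byte(0xA5)
    print(wire)  # 0xA5 = 0b10100101 -> [1, 0, 1, 0, 0, 1, 0, 1]
    ```

    Every byte costs the CPU at least 24 operations here; a hardware SPI peripheral reduces that to a single data-register write.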

    • To me, the most significant aspect of these products comes in the close of the article: the fact that both are part of the longevity program. These are not targeted at smartphones or other personal consumer devices. That longevity program, guaranteeing 15 years of supply, means that these are targeted squarely at the industrial controls market. With consumer devices it may be annoying, but it's acceptable to change the processor every year, and if that component goes out of stock in two years, who cares? The world will have moved on and the device will be replaced with the latest and greatest. However, in the medical and industrial controls markets, electronics frequently have to soldier on for ten, 15 or more years. Here at Screaming Circuits, it's not at all uncommon to get requests to repair or rebuild a ten-year-old control module. With these types of systems, replacing the brains is not as simple as tossing the old one and getting a new two-year contract. Changing processors can result in tens or even hundreds of thousands of dollars in redesign and test work. In the medical world, it's even worse. Lack of a direct replacement processor could very well lead to scrapping a very expensive piece of equipment. By delivering powerful processors with a guaranteed 15-year supply, a whole lot of future problems can be designed out from the start.

    • Related to the Scott McNealy quote and the statement "Don't break the law and you have nothing to fear": that's all fine and good if the people deciding what the law is are fair and just. If those making the law decide that it should be illegal to petition for redress or to peaceably assemble in support of a cause that the law-makers don't support, then all of us have something to fear. Our freedoms and privacy are not about having the ability to sneak around and do illegal things without getting caught. They are about ensuring that the government works for us rather than the other way around. There is so much potential good that could come from a mass-scale connected society - from simple things like always knowing where the bad traffic is and how to avoid it, to life-saving things like instantly transmitting medical emergencies along with health history and location to first responders. The potential for good is almost unlimited. Unfortunately, we live in a civilization where nefarious elements will take every opportunity to exploit weaknesses, and governments without sufficient civilian oversight will try to over-control their citizenry. That's why privacy is still so important. The killer app of the next decade or so is a methodology that allows society to benefit from this new ultra-connected and sensored world without opening all of us up to abuse and exploitation.

    • "One drag on the runaway Apps Culture is that the four dominant app platforms—iPhone, Android, BlackBerry and Symbian" Perhaps some organization could develop and propose a standard interface method and protocol set that those four different platforms could adhere to. I certainly understand the desire for companies to take the "walled garden" approach. It gives each a much greater opportunity to differentiate its platform from the others. For example, one of the platforms might migrate toward a specialty of graphic design applications. Then people who need that graphics capability the most would largely stick to that platform. Perhaps another platform might specialize in more rote business applications; business folks would then migrate to that platform. A third platform would specialize even further as a digital video platform, and the fourth would get a little lost, focusing on home use and gaming, but never quite gaining enough following to be a long-term contender. Eventually the Mac would dominate in the graphics space and the Windows platform in the business space. The Amiga would soldier on in niche applications for a while before disappearing, and the Atari ST would soon become just another forgotten footnote in computer history. After all of that mess, the web browser could come in and allow all remaining platforms to use the same code-base. App developers could host their platform-independent applications in the cloud on remote servers, with the web browser simply rendering the UI and capturing user input. Wait. Sorry. I accidentally jumped from my prediction of the future to a recollection of the past. That's the problem with being old: so much of what is "new" is just a re-run from a few years back.

    • Intel is diving straight at the new generations of high-performance ARM processors with the Atom. I think it's a pretty safe bet that in a decade or so the desktop as we know it will be relegated to specialized applications and ultra high-performance niches. Whether the next broad-based consumer/office compute platform will be a tablet or a smart phone is still up in the air. Tablets are pretty handy, but they don't fit in your pocket. A smart phone that wirelessly connects to whatever display and input device you might want to use, and can still be used stand-alone, will be able to cover a substantial percentage of generic computing needs. One of the gating factors in my mind is the future of operating systems. OSs have bloated to fill the available processing capacity since forever. If that trend continues, the big high-powered personal computer will live on. If on the other hand, OS bloat peaks sometime soon, then we can have a race between ARM coming up from the bottom and Intel dropping down from the high-end to capture the small form-factor compute platform. ARM is doing some amazing things in improving performance per watt but I wouldn't count Intel out.

    • I've been hearing more and more about these virtual shows that were predicted to replace live events like ESC, but the ones I've tried out seem to be not quite ready for prime time. Maybe they will take off at some point, but convenient or not, you can't beat the energy and excitement of a crowd of actual (as opposed to virtual) people. Last year at this show, I might have been willing to accept that in-person shows were dead and it's time to go virtual, but this year, with excitement, energy and presumably attendance back up, I'm going to reverse that thinking.

    • The setup UI is really key to the widespread adoption of these products. Most of them will need to be wireless, simply because few houses have Ethernet cables in the wall and even fewer have outlets close enough to be useful. Given that these will be wireless, ease of connecting to a secure home WiFi network is absolutely critical. Unsecured operation shouldn't even be an option for home automation products and appliances. But imagine trying to connect to a secure WiFi router with a four-button keypad and a one-line, 16-character LCD. It's not going to happen. Nor are the vast majority of users going to be willing to plug a device into their PC for setup. Find a way to automate the setup and the major obstacle will be gone. Then it's just a matter of price/value.

    • I'm with Peter on the question of ARM making chips with or without their own fab. From my perspective, ARM is a great example of a company finding a core competence and sticking with it. By doing so, they've become one of the more influential semiconductor companies and probably among the top four most influential processor companies. Some companies in some markets need to diversify or broaden to stay healthy and competitive, but not all do. For some, doing so is the start of a death spiral. I'm so tired of great companies with great products deciding that they need to be all things to everybody and, within five years, we've lost them to the resultant financial and mismanagement mess. ARM may never be the next Intel, but ARM has a very important and very beneficial role to play in the processor market. It's a for-profit model that carries some of the advantages of a commercial endeavor as well as some of the best advantages that would come with an open-source business model. They charge money and they deliver a great product to a large number of companies/end users on an equal basis. It's a refreshingly clear and concise purpose.

    • Perhaps that 18% of the population has a deeper understanding of physics and the ways of the universe than we give them credit for. Perhaps they are, in fact, well studied in the principles of relativity and are just being obstinate in the literalness of their interpretation. If one were to stand outside our solar system, in deep space, then of course, relative to that perspective, the earth does revolve around the sun. If we go deeper, it becomes obvious that the stars in our galaxy revolve around the galactic core. However, back here on the ground, relative to the perspective of a person on solid ground looking straight up, the sun does revolve around the earth every 24 hours. Now, as a relativity literalist, I think one would feel quite obligated to argue that the sun, and the rest of the universe, revolves around the earth. So, maybe they are just being literal and very smug about their literalness...

    • A lot of so-called "impossible" technologies have been invented, commercialized and accepted by the consuming public. On the other hand, even given that, I can't imagine building an infrastructure that would support the fast and easy swapping out of a 600 pound part of the car every 100 - 200 miles. Think about taking a drive up to a remote ski resort. You get there, enjoy your day and need to swap the battery at 10:00pm so you can get home. Sorry, but it was a great day for skiing and every battery station is out of charged battery packs because so many people came up to ski that day. And, when they do have charged packs to dribble out, you're number 42 in line. Not practical. I can understand the desire to focus on one aspect; the motors in this case. But to do so at this early stage in the emergence of the product is likely an immovable game stopper. In-wheel motors may be one of the best approaches to the electric powerplant, but without a three to five minute "refill", the specific architecture of the propulsion system is a moot point. Duane Benson Screaming Circuits

    • If a single-core A15 can deliver five times the processing power of an A8 or A9, we could be looking at a significant change in the low-end computing landscape. The current generation of ARM is powerful enough to power the iPad, and I've run a minimalist Linux on an A8-powered Beagleboard. Jump the processing capability up as expected and we have a viable engine for the next generation of netbooks, especially with ARM-style low power and low heat. The biggest problem keeping ARM out of the consumer computer arena might just be the OS, not the capabilities of the chip. Linux may be close, but it still has far too many rough edges to be considered a viable mainstream OS.

    • Nice profile of the potential of diesel. And, fortunately, given that, as the author states, his company wins with widespread adoption of either diesel or HEV, I'm betting this is more balanced than so much of what we read on the subject these days. I think it is important to have a lot of research going into HEVs and pure electrics, and we need early-adopter purchases, but in the near term, the planet could probably get the best reduction in fuel use and pollutants per buck by looking to solutions like modern diesel and super-efficient gas power plants. I wonder how much fuel we could save simply by reducing the horsepower in all vehicles by ten or twenty percent. That's probably the real hold-up. We want power and lots of it. To satisfy that desire, we have big gas engines and hybrids that are more about increasing horsepower than fuel economy.

    • I'm a firm believer in the necessity of a good patent system, but it appears that ours is being assaulted on both ends. On one side, patent trolls that had no hand in the invention buy and sell patents just for the purpose of suing legitimate, innovative companies. I can see a design or manufacturing company licensing or purchasing a patent for use in a product. I understand that, but to buy a patent for no other purpose than to sue people seems to be anti-competitiveness at its worst. Then on the other side, there are places in the world that simply ignore patents and intellectual property rights. A company will spend time and money engineering a product only to have it supplanted by a cheap knock-off at a fraction of the retail price. The neat Dyson Air Multiplier funky "fan" already has a knock-off competing with it. A few years back, a company invented an easy-to-use indoor RC helicopter. Within a few months, imitations that looked 99% identical flooded the market at a retail price of about 20% of the original. I'm all for competition too, but without the opportunity to earn money from design work, what incentive is there to do it? How can you even afford to keep designing if success means you'll be taken out by a cheap knock-off in three months?

    • You could eliminate the 12V supply and simplify the circuit a bit by using one of the common 5-volt PIR motion detectors. Adafruit and a number of other hobby electronics suppliers sell 5-volt detectors for around $10.00. You could probably also change the regulator to a TO-92 form factor to drop the cost a bit and reduce the space requirements. I've used a couple of inexpensive MAX232-type chips with ceramic caps, so that could drop the size and cost a bit more too. It sounds like a very fun project. I could envision having one of these in the kids' bathroom that shows my face saying "don't forget to brush your teeth."

    • While STB manufacturers are trying to crack this nut and get people to happily pay, the world is changing around them. TV used to be a pretty social activity. It was something that people would gather around and set their schedules around. TiVo-type devices and video on demand helped to change the paradigm of consumers setting their schedules around the TV. The Internet will take care of the rest. Have you ever watched a group of teenagers gathering with their laptops for a LAN party? They alternate between game playing and individually watching "TV" shows that they find in the depths of the Internet. TV isn't a scheduled activity anymore, and it's ceasing to be a communal activity as well. Much of the younger generation has already dispensed with the idea of a stand-alone TV in the same way that they've dispensed with the concept of a tethered land-line phone. As that generation ages and becomes the primary buying demographic, the trend will sweep through the market, and the set-top box - perhaps even the "television" itself - will become as much an anachronism as the public pay phone booth.

    • Rick - If you look at what has been done with the Beagleboard, running a TI OMAP (the same ARM Cortex-A8 core as the A4), you can see that a netbook/notebook using the A4 isn't too far off. I've browsed the Internet and done word processing with Linux on the Beagleboard we built here at Screaming Circuits. The performance wasn't anything to write home about, but it was pretty darn close. And that was with the 600MHz OMAP. If ARM can keep its power draw and heat generation down as it continues to improve performance, I'd guess that in two years the processor will be viable for low-end netbooks. I suspect that Intel is chasing the same market with its Atom processor - ARM coming at it from the low end up and Intel coming from the high end down.

    • Romig - Good point. Ultimately, it is the junction temperature that counts. I suppose the best definition of "ambient" depends on where you are in the design-through-usage chain and from what perspective you are looking. If you're the design engineer, you may even have two "ambients" to worry about. You have to design based on the temperature right at or inside the chip, but you also have to design your cooling system based on room operating ambient, to keep the device's internal ambient from rising to the point where the junction temperature is exceeded. And I think that may be the take-away from your article: it's an ambiguous, confusing mess. If you have a lot of space and a flexible budget, you can usually just put in a bigger fan, but when space and budget are constrained, you need someone skilled in the black arts of thermal design. Keep those app notes like the one you linked to coming. We need them.

    • To me, larger vehicles like monster SUVs are really the only place where a hybrid makes much sense. If you can take a vehicle from 12 mpg city (more realistic than the 15 stated) to 20 mpg city, then assuming a 200K-mile vehicle life, 50% city driving and $4.00/gallon gas, you save about $13,000 over the life of the vehicle. On the other hand, taking a 98hp, 35mpg car and turning it into a 45mpg, 134hp hybrid saves you only about $2,500, given the same assumptions (only twice that if you assume the same mpg boost applies to all 200K miles). Using a hybrid essentially as a means to increase horsepower is not a "green" way to go. Much better would be to forgo the $10,000 extra power system and simply drive the 98hp, 35 mpg car around. Small-vehicle hybrids don't make economic or environmental sense. Large-vehicle hybrids can help as a stop-gap until we really solve the fuel problem.
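    For what it's worth, a quick Python back-of-the-envelope check of those figures, using the same stated assumptions (200K-mile life, 50% city driving, $4.00/gallon):

    ```python
    # Fuel savings from a city-mpg improvement, using the assumptions
    # stated above: 200,000-mile vehicle life, half of those miles in
    # the city, $4.00/gallon gas.

    GAS_PRICE = 4.00             # dollars per gallon (assumed)
    CITY_MILES = 200_000 * 0.5   # half the vehicle life spent in city driving

    def city_fuel_savings(mpg_before, mpg_after, miles=CITY_MILES):
        """Dollar savings from burning fewer gallons over the city miles."""
        gallons_saved = miles / mpg_before - miles / mpg_after
        return gallons_saved * GAS_PRICE

    suv_savings = city_fuel_savings(12, 20)    # big SUV: 12 -> 20 mpg city
    small_savings = city_fuel_savings(35, 45)  # small car: 35 -> 45 mpg city

    print(round(suv_savings))    # 13333 - roughly $13,000
    print(round(small_savings))  # 2540 - roughly $2,500
    ```

    The arithmetic bears out the comparison: the same ten-mpg-class improvement is worth about five times more on the thirsty vehicle, because savings scale with gallons burned, not with the mpg delta.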

    • Very informative article. From the outside looking in (an end user's perspective), number one, or a slightly modified version - "the air temperature of the area in which the device is being operated" - seems the most useful. Ultimately, isn't the purpose of the ambient temperature spec to determine whether the device can still operate within its temperature envelope, based on the maximum thermal rise added to the ambient temperature? Of course, there are still plenty of variables, such as wind and solar effects for an outdoor design.

    • And how about when the news agencies do as you illustrate and add in a sensationalized and misleading headline? "New graphene enhanced super conducting plug-in zero emission hybrid makes the Kessel run in less than 12 parsecs." I can't count the number of times I've seen such a headline, only to find that the article basically just discusses some research project in the manner you describe. It makes me feel cheated. Maybe entertained, though. But usually not informed.

    • I'm going to step out on a style/tact-related tangent here. Is it just me, or do other people out there think it's high time we did away with sexist slang identifiers like "Bitchn' Betty"? I know it may seem innocuous at first glance, and there certainly wasn't any intent to offend, but it and quite a number of other "...ist" slang terms really hearken back to the days when society was free, equal and happy - but only if you fit a certain profile. I don't think it adds anything to the article. The reference doesn't seem to do anything other than justify the article title. To me it distracts from what is otherwise a very informative article about a very cool project.

    • The article is well worth the read - but it's really depressing. It paints a picture that looks pretty bleak. Let me rephrase part of that: the article is worth the read if you have a secure job in an industry that doesn't depend on manufacturing. Otherwise, it may cause you to lose a lot of hope. One question I have, though. Otellini states that it costs $1 billion more to build a fab here in the US and that 90% of that extra cost goes to taxes and regulations imposed in the US and not in other nations. I interpret his thesis to be that if the US reduced corporate taxes and regulations, we'd be a viable world manufacturer again. My question is this: of those taxes and regulations that cost $900,000,000, how much is regularly reduced by exemptions and other tax-lowering vehicles, and how many of those regulations are in place to prevent toxic chemicals and hazardous working conditions from destroying us? Maybe none? Maybe most? I don't know, but I'd sure like to know.

    • I'd like to see some good demographic and usage data for the iPad. The product will, in a sense, have a lot of competition very soon, but it won't be as formidable as it might seem on the surface. My hunch is that most of the tablet makers that want to compete with the iPad will be missing the mark. They'll be looking at their product as a better replacement for a mini notebook than is the iPad. The genius of Apple isn't in raw hardware. It's in understanding the consumer. From what I can see, the iPad is really suited and will probably be used in much different ways than are notebooks and other conventional computers. By chasing a new market while the other companies that don't quite get it squabble against each other in the old market, Apple keeps that "competition" less than relevant.

    • As far as I'm concerned, the biggest problems with electric cars are the inability to charge in anywhere near the time it takes to fill a liquid-fuel car, and the variability in people's driving ranges and habits. I put both of those above the infrastructure issue because those two problems would still exist today if we had a widespread infrastructure in place. There are two places where both of those problems can be mitigated: 1) fleets - the fleet managers know exactly where and how the vehicles will be driven and can work that around charging needs; 2) the second car - the second, in-town, bop-around car doesn't need to travel far. It takes short and infrequent trips. Both are perfect applications for an electric car. If enough car companies, fleet owners and consumers look at electrics this way, we will build an industry and infrastructure in a lot fewer years than most folks believe. Plug-in hybrids can eventually take the place of gas cars, but even with that, why pay for the duplicate power plant if you don't have to?

    • I'm always in favor of a good, quality requirements document. In most cases, it makes life a lot easier for everyone involved. I have worked in some environments where just a discussion starts off a development cycle and a number of discussions during development keep the project on track. That's a weird way to do it, though, and is too dependent on the cooperation and self-pacing skills of the engineers. In retail-oriented OEM companies, the biggest challenge I've run across relates to your second rule: "A requirement is binding. The customer is willing to pay for it, and will not accept the system without it." A retail product tends to be judged and purchased based on a set of features. I like to say "there are features you can sell and features you can use." The gap between what you can sell and what can actually be used in a retail environment frequently makes the requirements document an object of contention.

    • A decade ago, people would have scoffed at the idea of a low-volume customized product - even more so at a physical product. It all had to be virtual and massively mass-market. You had to shoot for a home run in order to get anyone's attention. How things have changed. Now, any high-volume product is at risk of having its intellectual property swiped. Investment money is hard to come by and the masses aren't spending much money. However, the people who do have money are willing to spend it. But they don't want an industrialized commodity. They want something of quality with a personality they can call their own. The time is ripe for small businesses that want to sell quality custom or semi-custom products. And, as to "why two EE PhDs?" I say, "Why not?" Our economy needs more adventurous folks like this.

    • I'd certainly take an electric car as a second car, but I couldn't do it as my only car. Today, and likely for many years to come, there simply isn't a viable way to extend a trip beyond a single charge range. I'm glad a lot of companies are putting a lot of money into proving my statement wrong, but my bet is that we have a few decades before there is a usable way to charge cars in a filling-station-like environment. Without that quick-charge capability, the electric as sole car will have very limited adoption in the US. That means it's really more of a marketing issue. By focusing on the second-car market, electrics become very, very practical as they are today (well, next year, when they're actually being sold at retail). Great products can be killed by bad marketing, and by focusing only on the electric car as a green alternative to gas, we're missing the most powerful message: that this is a perfect second car. It's the running-to-the-store car, the ten-miles-to-school car, the going-to-the-movies car. Position it like that and there is no question that a very large number of electric cars can be sold right now.

    • I'd like to see a bit more technical detail on just how a hybrid fossil-fuel/electric aircraft power plant can really gain efficiency. Automotive gasoline engines spend most of their running time in their least efficient power bands, giving a lot of opportunity to use the different efficiency characteristics of an electric power plant to improve economy or power. Aircraft engines tend to run in three regimes: full or near-full power during climb, at or near peak efficiency during cruise, and low power during descent. I don't see the same opportunities for efficiency improvements there that I see in the automotive world.

    • One of the often-overlooked aspects of this debate is the cost and effort to start up for non-experts. Sure, if embedded design is your life, you'll take a more sophisticated approach, but more and more often, non-electronics folks are being tasked with adding a microcontroller to some other device, such as a pump, valve, heater, or other previously non-intelligent product. The learning curve, both for the schematic and layout and for the software, can be much lower with a simple 8-bit controller. Those parts are more often available in larger, easier-to-build packages, the software tends to be quicker to learn and code, and they tend to use long-standard 5-volt power. Such lower-end considerations can mean the difference between hiring an expensive embedded engineer and tasking an ME to figure it out.

    • I wonder how much of the potential SSD performance is eaten up in translating between the flash and an interface designed around a rotating disk of metal. It has been expedient to use HDD emulation to get data in and out, but maybe the time for that is past. mSATA sounds like a good step toward better optimizing for solid-state media, but perhaps now is a good time to develop a new interface designed from the ground up to take best advantage of solid-state memory.

    • Conventional wisdom in the server world suggests that the most bang for the buck, and bang for the watt, comes from faster 64-bit architectures. However, in the early days, Google went directly against that conventional wisdom and built server farms out of masses of cheap motherboards. I don't know if they still do, but I suspect their reasoning still holds, which would bode well for Smooth-stone. While much of the world still looks to absolute numbers for clock speed, power consumption, and data throughput, what's really important are the ratios. If we go back to seemingly ancient basic economics and apply the concept of the point of diminishing returns to the ratios of performance per watt, performance per dollar, and even performance per unit of space, we may very well find that an ARM-based server farm can outperform a top-line x86-based server farm.
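The ratio argument can be sketched in a few lines. All of the numbers below are invented for illustration, not benchmarks of any real ARM or x86 part:

```python
# Comparing two hypothetical server nodes by ratio rather than by
# absolute throughput. Figures are made up purely to show the arithmetic.

def perf_per_watt(requests_per_sec, watts):
    """Throughput delivered per watt consumed."""
    return requests_per_sec / watts

# One fast, power-hungry node vs. one slow, low-power node.
big_node = perf_per_watt(requests_per_sec=50_000, watts=400)  # 125 req/s/W
small_node = perf_per_watt(requests_per_sec=4_000, watts=20)  # 200 req/s/W

# The small node loses badly on absolute throughput but wins on the
# ratio, so a farm built from many of them can do more work per watt.
```

The same comparison works for performance per dollar or per rack unit; the point is that the farm scales with the ratio, not with any single node's absolute numbers.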

    • If we're not careful, we'll end up changing the definition of the word "smart": "smart" = dumb enough to be cracked and hacked. We'll have this issue with smart phones, smart cars, the smart grid, and smart appliances, not to mention our regular computers. This speaks to our propensity not to completely design things. Cars were first designed without seat belts. Seat belts and other restraints were designed in after enough people flew out of their cars or crunched into windshields, and they didn't become ubiquitous until government mandate. I suppose you could say that the first car designers didn't think about people falling out; people fell off horses all the time and rarely got seriously injured. With the first PCs, I suppose it must have been hard to imagine the virus problems we have now. But come on. At this point, we all know that security is a massive issue and will only become more so. The first car designers and early PC designers had an excuse. We don't now. Designing a future car without serious security is tantamount to designing a car today without seat belts. It will lead to serious problems. Not "may lead to"; it will. It should simply be a hard-and-fast requirement. And if we don't know how to do it right, then research needs to go into new types of security in the same way that research is now going into new types of batteries.

    • Hearing that copper reserves are likely to be exhausted in the 2030s reminds me that when I was in school in the '70s, we were taught that oil reserves would be exhausted by the late '80s. Maybe the specific target years are just guesses, but it does speak to the need to look at more than just the ability to create something. We also need to look at the long-term ability to produce it sustainably. It won't help to create a fleet of electric vehicles and the infrastructure to support them if all we are doing is setting ourselves up for another supply and ecological disaster a few more years into the future.

    • What we really need in terms of robot instrument players is a robot that can play the vuvuzela horn. Then we can add annoying robotic fans to robot soccer tournaments for added realism.

    • Amazing how fast reversals of fortune can occur in some markets. My first cell phone was an old Motorola analog flip phone. I think they were number one at that point, then dove in market share when digital came out. My favorite phone ever is still my Motorola Razr from their most recent pre-bust boom time.

    • I think the question "will mobile TV take off in the US" has really already been answered, and that answer is yes. We're still in search of a viable business model, but it's already here, especially for the younger generation. The concept of static entertainment, in terms of both location and time, is on the way out. For the younger half of us, it's all about getting the entertainment when and where they are: in the car, on the street, at the office, in the park, in the house but in a different room from the TV set. The big-screen TVs are there in the bars and airports, but a good portion of the people within viewing distance are already watching something on their smart phones. It's all about "what I want, when I want it, where I am." Static is dead or dying.

    • Very interesting. The article primarily focused on the enterprise side, but I'd be more interested in the iPad model. My suspicion is that ARM processors will continue to grow in processing power while keeping power consumption down, Atom processors will drive their power consumption down, and in a decade or less the desktop PC will be all but replaced by something in your pocket that connects wirelessly to keyboards, input devices, and displays. From that perspective, it could be seen as a very forward-thinking move on the part of MS.

    • I think you're absolutely right, Junko. It is about the business model. If it's free (and easy to use), people will come. The problem I see is that if there isn't profit in it, people won't deliver it. We still really haven't found a business model that gets media to consumers inexpensively enough while still paying the media creators. As you stated, people will use their smart phones to consume media: browse the net, watch YouTube, watch TV shorts, and watch full-length movies. They do already. As long as companies keep trying, eventually we'll find a business model that works well. I hope...

    • More than the threat of a fully autonomous robot going rogue, the idea of human-controlled robots being hijacked is of much greater concern to me. Over the next few years, most devices we interact with will become more and more automated, to the point that in ten years our cars may operate more like robots than like traditionally driven vehicles. Think of a military scenario where the enemy hijacks the battlefield robots and, instead of old-style strategic bombing back home, hijacks all of our cars and appliances. I'd call that much more likely unless we really come up with better general security methodology.

    • I would guess that much of the reason things like the AML proposal go unanswered is the "thousands of unsolicited ideas." When you're inside an organization being flooded with ideas, just sorting through the chaff can take quite an effort. Then, once through the obvious chaff, you have to look at the technical details and try to decide whether the offering organization really has the expertise and the means to follow through. And then determine whether the idea will actually work, or whether it has the potential to make things worse. It's one thing for a company's PR organization to say, "We have the answer, but no one's listening." It's an entirely different thing to genuinely have the technical know-how, the evidence, and the wherewithal to give a customer confidence that a proposed solution will actually do the job.

    • Don't forget firmware and mixed-signal training, too. It's getting increasingly rare to find pure digital-logic applications, so adding a little analog or high-frequency knowledge and some programming will only make you more valuable to your current or prospective employer.

    • While, "the customer is always right" is an extremely important philosophy in business, it really is frequently next to useless without a good translation. Customers do tend to know what they want, but all too frequently don't have the proper vocabulary to articulate it in proper terms. Customer thoughts are further often tainted by a lack of knowledge of the cost benefit relationship for a specific product or feature. This is where a good product manager can save the day for everyone. That product manager can translate what the customer says into a language that an engineer can act on, and the good product manager will counter those customer needs with design and finance realities.

    • I envision that somewhere around ten to fifteen years from now, most people's primary computer may very well be their phone. All they'll need at home will be a display and a keyboardish thing. We'll still see some box computers for specialized purposes, and some separate cameras, music players, TVs, and devices of that sort, but for most folks it will be one little handheld device with all the necessary processing power, using either its built-in display, some sort of heads-up display, or a large screen connected wirelessly.

    • It's cheap, it's easy, and it just works. Like the standard household light bulb base, it would never be designed or approved by a committee today, but "inexpensive" and "easy to use" are perhaps even more important in the consumer world now. Most people who buy products aren't engineers, and the typical consumer electronics salesperson hasn't been trained enough to navigate the morass of connection standards we have to deal with today. Yes, we get higher resolution and better quality with all of the newer interconnect types, but those of us in the design world should take a lesson from the perseverance of something like the RCA connector. Yes, it has some deficiencies, but it just works. It has elegance in its simplicity. How many of our standards today are good enough to live for 70 years and counting?

    • I really think that a lot of the EV research folks are barking up the wrong tree. At least, the public faces seem to be. One massive problem is the lack of charging infrastructure. Another is the variability in driving habits, which makes it very difficult to build an EV that is economically viable for a large enough segment of the market. Skip the consumers and go to fleet operations, such as delivery. You don't need a broad infrastructure, because the vehicles are recharged back at the shop. You don't have to worry about predictability, because the vehicles generally have pre-planned routes. Right there, you've removed two of the biggest obstacles and can focus on manufacturing and refining the vehicle instead of the infrastructure.

    • I'd like to hear a bit more about this in terms of the practical use of the parts. It's interesting in terms of cost savings and performance improvements, but the usability of the part is an important consideration too. TI's OMAP processor ends up in a 0.4mm-pitch BGA package. That means tighter space and trace limits for the PCB layout, as well as vias in the BGA land pads (please fill and plate the vias in the pads). Expect this to lead to even smaller pitches on components. I haven't yet seen a 0.3mm-pitch part, but I have heard they are on the way.

    • I've always wondered how the "virtuoso syndrome" factors into productivity. Throughout my career, I've seen a lot of good engineers, and as a group they all tend to perform at a similar level. But I've run into a number of engineers who seem able to produce five or ten times as much work as everyone else. And I'm not talking about prima-donna types. These folks just have a much greater level of understanding or mental processing power than most people. If you've got one of those folks on your team, should you plan with that individual's level of productivity in mind? Or should you plan based on a standard engineer, just in case the virtuoso leaves?

    • I too sometimes long for the simple days of cars with breaker points, vacuum advance, and carburetors. There is something cathartic about spending a few hours with wrenches and grease and saving a few hundred dollars in repair charges. That is, until I remember just how much time I used to spend working on them. Yes, the repairs were much less expensive, but downtime was much more frequent, and the fear of being stranded somewhere was always in the recesses of my mind. And it wasn't just the maintenance items: accelerator cable return springs could snap, brake lines could leak, steering linkage could break loose. All of those problems could be just as devastating as a firmware-induced problem, but back then they happened much more often. Yes, engineers must do everything in their power to produce the best and safest electronics and code, but I don't want to trade the reliability and safety of a modern car for something from the baling wire and duct tape era.

    • Perhaps there are other forces at work here, such as doubt. Yes, on the surface it seems that we're back into boom times, but dig a little deeper and I think you'll find a lot of lingering fear and uncertainty. In the OEM and EMS industries, I still see a lot of nervousness. If purchasers at the OEMs are ordering but still very skittish, then perhaps the chip companies sense that and are using price as a little additional insurance against order cancellations. The thought, well intentioned or not, is that the extra bit of loyalty gained by price concessions may be enough to make the customer cancel someone else's order instead. The nervous OEMs are likely playing the price card heavily against the chip companies in order to hedge against their own uncertainty.

    • Am I missing something here? Wouldn't the solar sail capture photons coming from the sun and be pushed away from the sun? And wouldn't the trip to Venus require that the probe travel toward the sun? I think Venus is about 25% of an orbit behind the Earth right now, so the probe would be heading just slightly in from perpendicular to the path of the photons. I wouldn't think a solar sail could tack like a wind sail here on Earth, because tacking relies on aerodynamics, not Newtonian force.

    • I think that our society does have a problem with not knowing what's really important. I do see some progress, though. Programs like FIRST robotics are putting robot competitions in schools using a model similar to sports, and "geek" celebration is finding its way onto TV with shows like Mythbusters. We still have a long way to go, but I do see technology edging toward the mainstream.

    • This is exciting, but I'd like to see some serious real-world life-test data. I'm a big fan of the potential of LED lighting, but I have difficulty holding back my skepticism about the rated life numbers. I have replaced too many dead "6,000-hour" CFL bulbs after one to three months, thanks to the difference between theory and the real-world applications that live outside of all the lifespan caveats.

    • I do like the idea of the sledge or the pickaxe, but it might be fun to see what you could do with a household item in a semi-plausible in-home accident scenario. Say you were trying to dispense with a spider by throwing a can of beans at it, but hit the CRT instead.

    • Even if this is a long-term play, it's a great case of ARM eating its own dog food (that's meant as a compliment, even if the phrase doesn't look like one written out). If they want to be in the server business, I don't see a better way than starting to use their own chips now and learning from the experience.

    • It's hard to say what's really going on, but having just returned from a couple of days at the ESC show, I'm optimistic. Pretty much everyone I talked to felt that this year is already better than last. I suspect that, to a degree, we're all just tired of being in a recession. Perhaps that's a large component of what gets us out of recessions -- we all just get antsy to get moving again, so we do.