The Name Game Continues: Invisible Computing - Embedded.com


In response to my earlier column on the problems we have naming and defining the emerging net-centric embedded computing alternatives, I have received a number of interesting suggestions. Not all of them directly incorporate a reference to connectivity, but they raise issues that I think deserve some debate.

One nominee that was particularly thought-provoking came from Eric Silva, a software engineering intern at RSA Security Inc. He suggests the term “invisible computing.”

“Desktop (or personal) computers today are very obvious,” he says. “When you look at a portable MP3 player or a cell phone you do not see a computer, you see the functionality that the computer inside is enabling. Although when you look at the beige box on your floor, you instantly think 'computer'.” Regarding embedded applications of computers, he goes on to say: “the computers cannot be seen, but their impact can be felt. They're embedded within another device, which is something very different than what most people think of as a computer.”

In many ways, I think that Eric is correct. However, I still think the concept of connectivity has to be incorporated into the new definition because ubiquitous connectivity is now an integral part of any embedded system.

But Eric has a powerful ally in Donald A. Norman, former vice president at Apple Computer Inc. and head of its Research Laboratories. Now professor emeritus of Cognitive Science at the University of California, San Diego, he is the author of almost a dozen books, including his most recent, The Invisible Computer: Why Good Products Can Fail, the Personal Computer Is So Complex, and Information Appliances Are the Solution.

Norman uses the term “invisible” to describe the attribute common to all effective, easy-to-use, and popular tools in human culture: no matter how complicated the functions they perform, the technology used to perform an action is not apparent to the user of the tool. Such tools are human-centered, not technology-centered. The technology is invisible.

The focus of much of Norman's ire in the book is the most un-invisible of all computers: the personal computer. There, the technology is in your face, obvious, forcing you to use it, or at least to try.

According to Norman, personal computer manufacturers have compounded the problem through an addiction to creeping featurism. A good example is the typical word processing program. In 1976, one of the first word processing programs, Electric Pencil, fit into less than 8 kbytes of DRAM and had fewer than 50 commands to perform basic typing functions. By 1992, Microsoft Word had about 300. Now, Word has nearly 1,100 commands.

I fear the same thing may be happening to the embedded systems now being built, which have moved from practically invisible to in-your-face obvious and overly complicated. The big question I have is whether connectivity will exacerbate the problem or make it possible for embedded devices to return to their former state of invisibility.

Good examples of how overly complicated some embedded devices are becoming abound in the consumer market. The most obvious is the venerable video cassette recorder. As it has become more intelligent, moving from 4-, to 8-, to 16-, and now 32-bit processors, more and more features have been added. But at the same time, the ease with which it can be programmed to record TV programs has decreased. Now there are new smart set-top boxes that can be programmed to delete commercials and set up to record not just one show but a sequence of several. I don't know about you, but I have spent enough time reading manuals on how to operate my PC, and I am not about to start doing it for a VCR, no matter how intelligent.

Another good example of a consumer device with a lot of embedded intelligence is the handheld global positioning satellite (GPS) device used to determine your geographical location to within 15 or so feet. Half the features built into these devices are totally useless to me, and they require so many key-entry steps to program that using them is like exchanging the secret handshake with another member of a club.

With web-enabled connectivity added to these and other kinds of embedded devices, will they become less and less invisible and more like the personal computer? Or will they insist that the user deal with the technology rather than use it to make life simpler?

At first, I thought that such devices as an intelligent refrigerator or a washing machine with an embedded 32-bit controller would end up being as difficult to operate as an intelligent VCR, or (shudder) a Windows-based desktop computer. But that may not be the case.

That is because connectivity, and a set of associated functions provided by server-based web services, may simplify net-centric embedded devices. Why are many such devices so complicated in the first place? For one thing, developers want to build a device that will have as wide an appeal as possible and be useful in a diverse range of environments. So a GPS needs a wide range of features that will make it useful in the forest, in the mountains, on the ocean, or in the big city.

Now assume that same GPS has wireless connectivity, as it does in many expensive late-model automobiles. With such things as downloadable Java-enabled applets, it should be a simple matter to reprogram the GPS with exactly the features required for the environment in which it will operate. The device itself would not necessarily even need a 32-bit processor if, as the new web services computing environment assumes, most of the information and intelligence were maintained on a server, or on several of them distributed throughout the Web.
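To make the thin-client idea concrete, here is a minimal sketch of how a server-side feature catalog might drive such a device. Everything here, the profile names, the feature names, and the function, is an illustrative assumption, not a description of any real GPS product:

```python
# Hypothetical sketch of the thin-client idea: the device keeps only a
# basic positioning core, and pulls a context-specific feature set from
# a server instead of shipping every feature in firmware.
# All names (profiles, features) are illustrative assumptions.

FEATURE_PROFILES = {
    # What a server might offer for each operating environment.
    "city":     ["street_maps", "transit_overlay", "poi_search"],
    "ocean":    ["tide_tables", "waypoint_nav"],
    "mountain": ["topo_contours", "altimeter_log"],
}

BASIC_CORE = ["lat_lon_fix"]  # always present on the device itself


def features_for(context: str) -> list:
    """Return the feature set the device would carry for a context.

    Unknown contexts fall back to the basic positioning core, so the
    device keeps working even when no server profile is available.
    """
    return BASIC_CORE + FEATURE_PROFILES.get(context, [])


if __name__ == "__main__":
    print(features_for("city"))
    print(features_for("desert"))  # no profile: basic core only
```

The point of the sketch is that the device itself stays simple; the complexity (and the creeping featurism) lives on the server, where it can be swapped in and out per context.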

With wireless connectivity, the bandwidth of the connection would not necessarily be an issue. Even over a relatively slow 7,200- to 33,000-bps link, a smart GPS could be updated while it is idle, or overnight while the user is asleep, and have its functions modified for a new context (in a city, using base-station triangulation rather than satellites, for example).
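A back-of-the-envelope calculation shows why even those link speeds suffice for overnight updates. The 50-kbyte applet size below is my assumption for illustration, not a figure from any vendor:

```python
# Ideal download times for a hypothetical 50-kbyte feature applet over
# the slow wireless links discussed above. The applet size is an
# illustrative assumption; protocol overhead and retries are ignored.

APPLET_BYTES = 50 * 1024        # hypothetical applet: 50 kbytes
APPLET_BITS = APPLET_BYTES * 8  # 409,600 bits on the wire


def download_seconds(bits_per_second: int) -> float:
    """Ideal transfer time at a given line rate, ignoring overhead."""
    return APPLET_BITS / bits_per_second


if __name__ == "__main__":
    for rate in (7_200, 33_000):
        print(f"{rate} bps: {download_seconds(rate):.0f} s")
```

Even at the slow end, the transfer finishes in about a minute, trivial for a device with all night to do it.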

What do you think about Eric's suggestion? Or Norman's? Or mine, for that matter? Where do you think net-centric devices will take us, toward a more feature-laden environment or toward a new era of truly invisible computers?
