Once embedded systems designers can assume ubiquitous wireless communications, some really interesting things might happen. But ad hoc network architectures and increased public access to previously regulated bandwidth will be key.
At the moment, wireless communication appears to be on the verge of chaos. Although cellular phones are practically ubiquitous, the technology is advancing faster than it can be deployed by the carriers. The grand plans for all-encompassing satellite-based networks have pretty much plummeted to earth, and spectrum auctions got so out of control that they imploded.
On the other hand, wireless communications is growing like kudzu among the populace. Anything that sits still long enough will soon have a GPS receiver and a WiFi network link attached to it. The question is not whether wireless LANs (WLANs) will proliferate, but which WiFi standard will be king. And how long will that winner reign before standards based on ultrawideband (UWB) knock it off the throne?
It's the product, stupid
Within the next fifteen years, an inflection point will occur: more data will be transmitted over WLANs than over cellular networks. As inevitable as death and taxes, this point looms ahead, and the embedded systems we create will be its primary agents. These embedded systems will be driven, as they've always been, by an economical use of resources and, this time, distributed more effectively and evenly via a product rather than a service model.
The product-model scenario is already in practice in the current generation of WLANs and will be even more true in the next decade and a half. The standardization of access methods that's just beginning to take place today will create economies of scale. In other words, the setup hassles of wireless systems will become just as much a thing of the past as setting a distinct IP address is today. Most users have no clue what DHCP stands for, even though they use it every day. All they know is that it works. Wireless networks must similarly fade into the background of users' attention.
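The DHCP analogy is worth making concrete. The whole point of the protocol is that an address pool and a lease table replace a human decision, which is why users never think about it. Here's a toy sketch of that idea; the class, subnet, and address range are all invented for illustration and bear no relation to the real DHCP wire protocol:

```python
class LeasePool:
    """Toy DHCP-style allocator: hands out addresses from a pool so
    users never have to pick one by hand (illustrative only)."""

    def __init__(self, subnet="192.168.1", first=100, last=199):
        self.free = [f"{subnet}.{i}" for i in range(first, last + 1)]
        self.leases = {}  # MAC address -> IP address

    def request(self, mac):
        # A returning device gets its old address back, like a lease renewal.
        if mac in self.leases:
            return self.leases[mac]
        ip = self.free.pop(0)
        self.leases[mac] = ip
        return ip

pool = LeasePool()
print(pool.request("00:1a:2b:3c:4d:5e"))  # first free address
```

The user-visible behavior is a single implicit "request"; everything else is bookkeeping the user never sees. That's the bar wireless setup has to clear.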
The specific role that embedded systems play in this scenario is critical. They're already key components in communications-specific applications, most notably the handsets that serve as the front line in cellular communications. The technical problems in embedding this communications capability into consumer and industrial embedded systems are trivial compared to answering the simple question: how is the service going to be bought and paid for? Do you want to buy a separate cell phone for your VCR or car to use to connect to the Internet? Of course not. On the other hand, why would you care whether or not the same device uses your home wireless network to communicate?
Wireless LAN coverage will become an expected norm, just as ubiquitous as AC power is today. Most of this coverage will be in the workplace and the home, but mobile WLANs will also be based around any number of vehicles. I almost feel bad calling this a prediction, because it seems so obvious. I'm hammering the point, however, because it represents a critical change in the default environment for embedded systems. If we can assume ubiquitous wireless communications that are free from incremental charges and provisioning problems for our customers, we can start doing some interesting things. We'll get into that a little later. First let's set the stage.
At least three major obstacles are standing in the way of the course of events I've proposed. Right now, they may seem insurmountable, but the solutions to these challenges will come into their own over the next few years. They can be summed up in three simple questions, each of which I'll attempt to answer:
Is there sufficient bandwidth?
The problem, of course, is not a true shortage of spectrum. The problem is in how it's allocated. The government currently allocates parts of the spectrum for fixed application-specific purposes. In other words, large tracts of valuable spectrum are reserved for applications that aren't always using them.
This allocation scheme made sense in the days of hard-wired radio applications and relatively few broadcasters. The Federal Communications Commission (FCC) firmly polices the boundaries of these spectrum allocations so that ham radio users won't interfere with soap operas on television and neither of them will interfere with radio broadcasts that support emergency services.
The weaknesses of this allocation scheme have been brought to light by the success of WLANs. All of the WLAN protocols use the Industrial, Scientific, and Medical (ISM) bandwidth that was set aside by the FCC for unlicensed (but still lightly regulated) applications. Not surprisingly, these slices of spectrum tend not to be in the prime pieces of wireless real estate: signals in this spectrum often experience massive interference, attenuation, and inconsistent reception.
In spite of (or perhaps in part because of) this, much more creativity and evolution has occurred in these small sections of the spectrum than anywhere else. Emergency and utility radio services are incrementally inching towards digital data transmission, while WLANs have jumped in with both feet. While cellular carriers wring their hands and debate deployment of 2.5G or 3G capabilities, products for next-generation WLAN protocols such as 802.11a and 802.11g appear on the shelves before the ink is dry on the standards. This creative chaos provides the energy that enables these protocols to venture out into the world while 3G standards mostly sit on the shelf.
Back in late 2001 when I first started talking about a more dynamic allocation of the spectrum, the reaction I got from rank-and-file engineers was massive skepticism. After all, hard allocation of spectrum was The Way Things Were Done and it simply was not going to change. Since then the FCC and Congress have begun to awaken to the problem (www.fcc.gov/Speeches/Powell/2001/spmkp109.html).
It really doesn't make sense to simply ordain that This Spectrum Shall Be Used for This Purpose with This Protocol. Instead, the spectrum should be defined by access rules that allow its use in a polite manner by applications and protocols that have yet to be invented. And I predict it will be.
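What might a "polite" access rule look like? One family of rules, already used by WLANs, is listen-before-talk: sense the channel, and if someone else is using it, defer for a random backoff before trying again. The sketch below is a deliberately simplified illustration of that idea, not any real standard's backoff procedure; the function names and slot counts are invented:

```python
import random

def try_transmit(channel_busy, max_backoff=8, rng=random.Random(0)):
    """Listen-before-talk sketch: sense the channel, and defer for a
    random number of slots if someone else is transmitting.
    (Illustrative politeness rule, not a real MAC specification.)"""
    if not channel_busy():
        return "transmit now"
    slots = rng.randrange(1, max_backoff)  # random backoff avoids lockstep retries
    return f"defer {slots} slot(s), then listen again"

print(try_transmit(lambda: False))  # idle channel: go ahead
print(try_transmit(lambda: True))   # busy channel: wait a random backoff
```

The key point is that the rule says nothing about *what* is being transmitted. Any protocol that obeys it, including ones not yet invented, can share the band.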
Who will build the network?
The cost of building the network has been a problem with cellular technology. It takes time and huge amounts of money to blanket a city with wireless coverage, much less an entire country. While installing the equipment is hard enough, political hassles over placement of base stations and sometimes amazingly arbitrary esthetic restrictions also complicate the task. This more than any other single factor has chewed up amazing amounts of capital for cellular providers. How can we even contemplate going through that again?
The easy answer is to distribute the load. A great example is Starbucks' rollout of WiFi access points. I've heard many people talk about this move, but until I actually spoke to someone from Starbucks on a panel discussion last year, I didn't understand why they did it. This Starbucks employee pointed out to me that they have literally thousands of company-owned outlets, each with three to five analog phone lines and a fractional T-1 line. For the WiFi rollout, they're equipping each of those stores with a full T-1. I bet they're not paying all the costs of that deployment, given the impressive (and hungry) list of tech companies working with them. The next step is to consolidate their communications onto that full line, saving them tons of money. They make out in a corporate sense whether customers use the bandwidth or not.
This theme will be played out a thousand times in the next few years. As bulk back-end data communications become a fixed cost, why not open up service to whoever is in the immediate proximity? A real-estate developer here in San Diego likened it to drinking fountains, which are an expected amenity in a building. Perhaps a WiFi network will soon become a similar checkoff item for building developers and property managers everywhere.
Although this evolution will provide the critical mass of connection points to the Internet, it won't provide the truly ubiquitous connectivity I foresee. Ubiquity will only happen when the architecture of networks themselves changes from a hierarchy to a mesh.
A network with a mesh architecture extends with every node added to it. If there is a chain of nodes of arbitrary complexity between a user and an access point, then connectivity to the Internet exists. At major events like the Super Bowl, for instance, each new user would add to the capability of the local network. The users cannot overwhelm the network, because the users are the network; and when they're gone, the network capacity drops accordingly.
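The connectivity claim in the mesh scenario is just graph reachability: a device has an uplink if any chain of peer links, however long, reaches a node with a gateway to the Internet. A breadth-first search makes the point in a few lines; the node names and topology here are hypothetical:

```python
from collections import deque

def has_uplink(links, start, gateways):
    """Breadth-first search over peer links: a node is connected to the
    Internet if any chain of hops reaches a gateway node."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node in gateways:
            return True
        for peer in links.get(node, ()):
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return False

# Each new user both consumes capacity and extends the mesh.
links = {"phone": ["laptop"], "laptop": ["kiosk"], "kiosk": ["ap"]}
print(has_uplink(links, "phone", {"ap"}))  # True: phone -> laptop -> kiosk -> ap
```

Remove the intermediate nodes and the phone loses its uplink, which is exactly the "users are the network" property: capacity appears and disappears with the users themselves.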
Which protocol will prevail?
VHS beat Betamax, the IBM PC beat any number of competing microcomputers, and Microsoft beat just about everybody else. When it comes to wireless, which one of the obscure acronyms (GSM/GPRS, CDMA, 802.11b/a/g, UWB, OFDM, and so on and on) will become The One Ring That Rules Them All? My answer: SDR, or Software-Defined Radio.
The number of wireless protocols is not likely to lessen; nor is the rate of innovation that defines new protocols likely to decline. Rather, the complexity will be handled through general-purpose DSP hardware and software implementations of multiple protocols, à la SDR. A single piece of hardware will be quite comfortable in any number of wireless-network flavors.
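The structural idea behind SDR is simple: one digitized sample stream from a generic front end, with the protocol-specific work done in software, so switching networks is a table lookup rather than a hardware change. This sketch shows only that dispatch structure; the demodulator names and the sample format are made up, and real demodulation is of course vastly more involved:

```python
# Stand-ins for real protocol-specific signal processing.
def demod_wifi(samples):
    return f"802.11 frame from {len(samples)} samples"

def demod_gsm(samples):
    return f"GSM burst from {len(samples)} samples"

# One table maps protocol names to software implementations.
DEMODULATORS = {"802.11b": demod_wifi, "gsm": demod_gsm}

def receive(protocol, samples):
    # Same hardware samples, different software: the SDR idea in miniature.
    return DEMODULATORS[protocol](samples)

print(receive("802.11b", [0.1, -0.3, 0.7]))
```

Supporting a protocol that hasn't been invented yet means adding an entry to the table, which is why SDR sidesteps the format-war question rather than winning it.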
To date, embedded systems have largely been responsible for creating their own communications infrastructure. The current generation of consumer and industrial systems is beginning to use the wired infrastructure of the Internet. In the next fifteen years, this connection will become wireless.
In and of itself, switching to wireless will not be a major change. Portable and mobile devices will initially feel the most impact, but as wireless data communications matures and becomes more available, a new generation of devices will participate in the hive computer mind that is the Internet.
The opportunity here is in the change from a series of discrete hardpoint entries into the Internet to a continuum of connectivity. Consider the changes the cell phone wrought: if I'm late for a meeting, I can let others know (and even participate) instead of leaving them wondering. Dynamic communication networks can be set up and torn down continuously to meet any number of immediate needs.
Cheap, ubiquitous wireless data communications will enable embedded devices to contribute data from their sensors to the Internet and to use the data from other sensors. The communication protocols being defined today under the auspices of Web Services will become the lingua franca of the embedded world, allowing devices to say meaningful things to each other. Barometric pressure sensors in automobiles, there to support efficient fuel utilization, will combine with GPS data to feed weather-forecasting algorithms. In the office, data will be available wherever people are, rather than forcing them to come to workstations at their desks. Factory, retail, and warehouse networks will extend to the open spaces that are so necessary for their functionality but have proven so hard to extend the wired network into.
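The "lingua franca" idea boils down to self-describing messages: a reading that carries its own type, units, and position can be consumed by a forecasting service that has never heard of the device that sent it. Here's a minimal sketch of what a car's barometric sensor might publish; the field names and device identifier are invented for illustration and don't follow any real Web Services schema:

```python
import json

def sensor_report(device_id, kind, value, units, lat, lon):
    """Build a self-describing sensor message (hypothetical field names)."""
    return json.dumps({
        "device": device_id,
        "reading": {"type": kind, "value": value, "units": units},
        "position": {"lat": lat, "lon": lon},  # from the car's GPS
    })

msg = sensor_report("car-4411", "barometric_pressure", 101.3, "kPa",
                    32.72, -117.16)
print(msg)
```

Because the units and reading type travel with the value, a receiver needs no out-of-band agreement with the sender, which is what lets thousands of unrelated devices feed the same algorithms.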
None of these changes require massive overnight transformations or huge infrastructure public works projects. Instead, they'll evolve as a series of small, good ideas to take advantage of a network that can be incrementally expanded. That's why it's so inevitable. Just like the Internet, there is no single point where failure can bring down the network, and the more it grows, the more it will grow, since embedded systems can always benefit from more data.
Build it and they will come
A large amount of backend capacity out there is going to waste, and I'm confident that engineers will find things to do with it. Whether I'm right or wrong about the details, I'm looking forward to the future.
Larry Mittag is a consultant specializing in systems designs and a contributing editor of Embedded Systems Programming. He holds bachelor's degrees in education and physics from Wright State University. You can reach him at .