There are more than one billion cell phones and various multimodal wireless mobile devices now active worldwide, each containing a DSP/RISC multiprocessor core. The general assumption in the computing industry is that such mobile appliances, with sales estimated at $400 million a year, are well on their way to becoming the dominant computing platform, replacing the desktop PC, which has long held that position.
Carrying over world views typical of the PC market, many semiconductor makers and consumer electronics firms are adopting the strategy that worked for desktops: build more features and they will come (and spend). But I have my doubts that such linear field-of-dreams thinking is valid anymore, given the still unsettled environment of the new world of “converged” computing and communications.
My view of technology evolution is that it's like water: it always follows the paths of least resistance. The trick is to know the terrain and find out what specific paths it will follow as it flows to its ultimate destination.
That was easy enough in the relatively stable PC-dominated market of the 90s. On stable terrain, with knowledge of the geography, it's possible to predict the paths of least resistance. But the connected computing environment is like a landscape after an earthquake. Water still flows downhill. But with an entirely new topography, it is hard to predict which specific path or paths of least resistance the water will follow.
In the connected computing terrain, I'm not sure the “more is better” approach, which would require the power of a mobile supercomputer, is necessarily the path of least resistance.
But let's assume it is and see where it takes us. So far it has led to the emergence of PDAs and cell phones with cameras, WLANs, word processing, PIMs, dictionaries, and music playing and recording. Now the industry has got it into its head that if these devices can be made fully multimedia, hopes for a new high-volume, dependably profitable platform will become a reality.
The capabilities being talked about for the next generations of mobile devices are awesome. The so-called natural I/O interfaces proposed include not only graphics quality rivaling the desktop, but also high-quality audio input and output, both requiring continuous real-time processing and conversion.
Also being considered are multiple high bandwidth wireless interfaces for voice and data communications, Internet access and local networking. Add to that the many computationally intensive functions that are being considered: soft radio, encryption, speech recognition and text/speech translation, not to mention all of the live video delivery applications that are talked about.
It is estimated that the compute power required to do all of this on a single mobile platform would be equivalent to 16 to 18 two-gigahertz Intel Pentium 4 processors. That doesn't sound like a path of least resistance to me.
There is a lot of evidence that the architectural optimization techniques that have allowed computing power to increase every 18 months are rapidly running out of steam. Even so, putting that much on a chip is the easy part. The hard part is doing it within a typical battery-powered mobile appliance's power budget, which allows a minuscule peak power limit of only about 70 to 100 milliwatts.
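A quick back-of-envelope calculation shows just how wide this gap is. The processor count and the power budget come from the figures above; the per-processor dissipation is my own rough assumption, not a number from this column, and actual Pentium 4 figures varied by stepping.

```python
# Rough sketch of the power gap described above.
# Assumption (not from the column): a 2 GHz Pentium 4 dissipates
# roughly 55 W; exact figures varied by stepping and package.
P4_WATTS = 55.0          # assumed per-processor dissipation (hypothetical)
N_PROCESSORS = 16        # low end of the 16-to-18 estimate above
BUDGET_WATTS = 0.100     # the ~100 mW mobile peak power budget above

required = P4_WATTS * N_PROCESSORS   # total desktop-class dissipation
gap = required / BUDGET_WATTS        # how far over the mobile budget

print(f"required ~ {required:.0f} W vs. a {BUDGET_WATTS * 1000:.0f} mW budget")
print(f"power gap ~ {gap:,.0f}x")
```

Even with a generous reading of the assumptions, the shortfall is on the order of four decimal orders of magnitude, which is why process shrinks alone cannot close it.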
Working against achieving this goal are some unpleasant quantum realities as we move fabrication technology further into the nanometer range, where circuit-level effects are already a significant barrier to continued performance gains.
At the current sub-100-nanometer level, rising static leakage current, capacitive and inductive coupling, and longer interconnects are also having a profound negative effect on power consumption and dissipation. Static leakage alone accounts for as much as 50 to 60 percent of the power dissipation in high-end processors built with sub-100-nanometer technology.
Are there path-of-least-resistance alternatives for achieving the kind of functionality that is needed? Two that I can think of are (1) shifting to alternative fabrication and logic design techniques; and (2) shifting to new processor architectures more appropriate to this new connected environment.
For a number of reasons, shifting from synchronous to asynchronous logic, or to more incremental alternatives such as mesochronous, plesiochronous, and adiabatic logic, is not likely to be the path of least resistance the market will take. Aside from institutional momentum and the resistance to learning new circuit fabrication techniques, corporate accountants, stock analysts, and vice presidents of finance prefer the status quo. As expensive as present synchronous logic techniques are becoming, they have the advantage of predictability and can be factored into calculations years into the future.
New technologies, no matter how promising, are unpredictable, which makes it hard to project future profits and losses and to promise the clockwork-like stock price and dividend increases that the market analysts and stockholders have come to expect as their due.
Another possible path-of-least-resistance is a shift to another architectural model. If this is the technology path the market chooses, what's needed is a more efficient engine that addresses the application needs of the market.
Here there are a number of choices. But the lack of consensus on the most appropriate architecture tells me that these are not paths of least resistance: application- and algorithm-specific architectures with programs that automatically adapt, DSP, multiprocessing, MIMD, SIMD, reprogrammable architectures, and dataflow processing, among others.
There is a third path that I believe is the most likely. Paradoxically, it lies in the ubiquitously wired and wireless connectivity that the Internet and World Wide Web hath wrought and which have pushed the semiconductor industry to the very limits of fabrication technology.
But it will require that we throw out some basic underlying assumptions and ask ourselves: why do all the applications and processing power have to reside on the mobile appliance itself?
Why not go “back to the future” and use the dumb-client/server hierarchy that the industry started out with, plus a few 21st-century improvements? To the user, it might look as if the handheld device were the ultimate provider, but with sufficient high-bandwidth connectivity, reliability, and scalability, much of what is now done “in here” could just as easily be done “out there.”
Wireless LAN and WAN bandwidth is increasing to tens of megabits per second, while wired network core bandwidths are in the 10-to-100-gigabit-per-second range. And at the wired network edge, on the on- and off-ramps connecting the information superhighway to enterprises, cable and non-cable service providers, SOHOs, and home networks, bandwidths are ramping up to the 1-Gbps range.
And the emergence of XML-based Web Services frameworks is providing the network middleware infrastructure by which many operations can be done remotely and cooperatively. The movement toward service-oriented architectures and server-blade-based utility computing is also accelerating, and it will provide the on-demand reliability, predictability, and scalability that this distributed approach to mobile functionality will require.
There is, of course, a hard core of functions proposed for next-generation mobile devices, such as video and high-fidelity audio processing, that will require as much compute power as possible in the handheld itself. But there are many others that could be distributed across a wired and/or wireless network of cooperating Web services: word processing, personal information storage, speech recognition, and text/speech conversion, to name a few.
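To make the offload idea concrete, here is a minimal sketch of the kind of XML request a thin handheld client might build and POST to a remote recognition service, rather than running the recognizer locally. The service method name `speech.recognize` and the audio reference are hypothetical illustrations, not any real API; no network I/O is shown.

```python
# Sketch of a thin client building an XML-RPC-style request for a
# hypothetical remote speech-recognition Web service. Only envelope
# construction is shown; transport (HTTP POST) is omitted.
import xml.etree.ElementTree as ET


def build_request(method: str, audio_ref: str) -> bytes:
    """Serialize a one-parameter XML request for the remote service."""
    root = ET.Element("methodCall")
    ET.SubElement(root, "methodName").text = method
    param = ET.SubElement(ET.SubElement(root, "params"), "param")
    value = ET.SubElement(param, "value")
    ET.SubElement(value, "string").text = audio_ref
    return ET.tostring(root, encoding="utf-8")


# The handheld ships a reference to the captured audio; the heavy
# signal processing happens "out there" on the server.
payload = build_request("speech.recognize", "clip-0042.pcm")
```

The point of the sketch is the division of labor: the client's job shrinks to capturing input and serializing a small request, which is well within a 100 mW power budget.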
The burden of other chores that everyone assumes will have to be done on the device itself, such as security and cryptography, is also considerably reduced. With most of your personal processing done and stored on the server, there would be little need for much security on what would now be a very thin handheld client. And with sufficient bandwidth, reliability, and scalability, the need to store photos, video, and audio files locally on the handheld is also eliminated.
Looking at the problem from this perspective, maybe we can reduce the compute power needed on future mobile devices to a less surrealistic two or three times that of a 2 GHz Intel Pentium 4 processor. Just a thought. What do you think?
Bernard Cole is site editor for Embedded.com, site leader on iApplianceweb as well as an independent editorial services consultant working with high technology companies. He welcomes your feedback. Call him at 602-288-7257 or send an email to .
I think ubiquity as an advancing wave is an apt metaphor. I would compare the spread of mobile networks with only one other phenomenon: ubiquitous advertising. It is everywhere, just like cell phones.
Now we have the same situation as in the early days of the Internet, when online advertising became the boost for its rapid development: the networks exist, but mobile advertising has no base for promotion. Most people use cell phones simply as phones. That is natural, because the other available functions are limited, especially compared with the PC platform that serves as the base for online advertising. So there is a need to create a cell PC platform. This is the project I'm working on: http://geocities.com/gene_technics/
The first step in developing the platform is providing a mobile advertising channel on a second display: standard-size banners, contextual advertisements from Web portals, and SMS/MMS messages opened as slides for local advertising. This amounts to an implementation of “the third screen” in advertising terms (The New York Times recently wrote about it) without affecting the phone's performance or the user's privacy. We see ads everywhere while at rest, and with an additional ad screen you can comfortably receive information on a specific topic when your business actually needs it. The benefits are the same for advertisers.
Next, the second display can show a site map. This approach provides a detailed view of Web content on the main display while navigating through the sections of a website, and that is what defines the success of mobile search: effective access to the information placed on websites. The same goes for showing contextual advertisements on the second display alongside the search results on the main display. And the key to the cell PC platform is a compact implementation of the US-International keyboard for data entry.
And as part of a program interface, the second display shows menus and toolbars. For example, it makes it possible to implement Live Preview using Galleries as elements of the Microsoft Office 2007 Fluent UI.
In addition, the cell PC platform has a widescreen main display for playing/recording video and taking photos.
The most important implementation issue here, of course, is increased power consumption. I am hoping for new mobile processors with power consumption of less than one watt, emerging as part of the developing UMPC concept.