I recently read a story describing Dartdevices' Dartplayer software as a way to deal with the lack of interoperability among net-centric connected devices. The story got me thinking about “write once, run anywhere” software virtual machines and their future.
At its root, the “write once, run anywhere” concept behind application virtual machines goes back 20 or 30 years, appearing over and over again in different guises to solve problems facing the computing industry. Such VMs are designed to allow application binaries to run on many different computer architectures and operating systems, using an intermediate form that each platform's VM interprets or translates at run time.
The power of Java's particular implementation of the virtual machine approach was that the VM was an implicit property of the language's programming environment itself. But Java was not the first. I have had experience with at least four previous incarnations of the VM concept as an implicit property of the underlying programming environment:
Currently, there are a dozen or so other application virtual machines available in addition to Dartplayer, primarily Sun's Java and PARC's
And as the recent efforts by Dartdevices illustrate, the VM concept can be expected to continue to evolve and adapt to new conditions. The big questions are: How will it evolve? What is occurring in the continually evolving connected computing environment, and how will that change the nature of application VMs? Is there something about the current connected computing environment that requires a fundamental change, away from the traditional application VM model?
For one answer to these questions, it is instructive to look at the basic programming model behind the Cell processor.
What is most interesting about the Cell is the underlying programming model that its inventors assume will soon be the norm once network bandwidths are sufficiently high to justify distributing even real-time operations over a large number of computing nodes. And one of the things they think will be the first to go is the Java Virtual Machine and its underlying assumptions.
With few exceptions, most of the online discussions of the technical aspects of the Cell architecture pay little attention to this proposed model underlying the hardware. But in the patents describing the architecture, the inventors spell it out.
According to the inventors, the Cell processor architecture represents a fundamental shift to a new architectural paradigm that reflects the new connected computing environment. Their assessment of the RISC processors and controllers in current use is that they were all conceived in the era before the Internet and World Wide Web became a mainstream phenomenon and are designed principally for stand-alone computing.
Thus the sharing of data and application programs over a computer network was not a principal design goal of these CPUs. And while they all have a common RISC heritage, the processor environment on the Internet is heterogeneous. The sharing of data and applications among this assortment of computers and computing devices presents substantial problems.
Java is not enough

According to the inventors, the Java Virtual Machine “write once, run everywhere” model, which runs platform-independent code in interpreted form rather than compiling it to make maximum use of each target processor's resources, is a partial and increasingly unsuccessful attempt to solve this problem.
They point out that it will become more inadequate as real-time, multimedia, networked applications become more pervasive. Such net-centric applications will require many thousands of megabits of data per second, and the Java programming model makes reaching such processing speeds extremely difficult.
Therefore, they believe a new network-optimized computer architecture and programming model are required to overcome the problems of sharing data and applications among the various members of a network without imposing added computational burdens. These should also overcome the security problems inherent in sharing applications and data among the members of a network.
“Software cells” turn Java upside down
At the core of the Cell's connected computing architecture is a new “software cell”-based programming model for transmitting data and applications over a network and among the network's members, one that turns the Java VM model on its head by sending the two bundled together rather than separately. While it can operate in the Java mode, which downloads a platform-independent application to run on a node, it can also be described as a “write once, reside anywhere and participate everywhere” programming model.
At the core of the Cell model is the combining of application and data in the same deliverable “software cell,” or apulet, designed for transmission over the network for processing by any like CPU on the network.
The code for the applications preferably is based upon the same common instruction set architecture (ISA). Each software cell preferably contains a global identification (global ID) and information describing the amount of computing resources required for the apulet's processing. The uniform software cells contain both data and applications and are structured for processing by any of the processors on the network.
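The patents do not spell out a concrete data layout, but the cell described above can be sketched as a simple structure. Everything here, the names and fields alike, is a hypothetical illustration of the idea, not the actual apulet format:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Apulet:
    """Hypothetical sketch of a 'software cell': code and data travel together."""
    global_id: str        # network-wide unique identifier for this cell
    cycles_required: int  # declared computing resources the cell's processing needs
    code: bytes           # application code in the network's common ISA
    data: bytes           # the data that application operates on

# A single bundle carries both the application and its data to any node:
cell = Apulet(global_id="cell-0001",
              cycles_required=5_000,
              code=b"\x90\x90",
              data=b"payload")
```

The point of the bundle is that any like processor on the network can accept it sight unseen: the global ID identifies it, and the declared resource requirement lets the receiver decide whether it can run the cell locally.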
If the application being sent or requested requires more processing than is available locally, additional computing resources are located on the network and, depending on timing constraints, made available. Since all computing resources have the same basic structure and employ the same ISA, the particular resource performing this processing can be located anywhere on the network and can be dynamically assigned.
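Because every node speaks the same ISA, placement reduces to matching a cell's declared requirement against whatever capacity is currently free. A minimal sketch of that dynamic assignment, with all names assumed for illustration:

```python
def assign_node(cell_cycles, nodes):
    """Pick any node whose free capacity covers the cell's requirement.

    nodes maps node name -> free cycles. Returns a node name, or None
    if no node currently has enough headroom and the work must wait.
    """
    # Since all nodes share the same basic structure and ISA, any
    # sufficiently free node is acceptable; prefer the least-loaded fit.
    candidates = [(free, name) for name, free in nodes.items()
                  if free >= cell_cycles]
    if not candidates:
        return None
    return max(candidates)[1]  # node with the most free capacity

# Example: a 5,000-cycle cell lands on whichever node has headroom.
print(assign_node(5_000, {"set_top": 2_000, "server": 20_000, "pda": 6_000}))
```

The design choice worth noting is what is absent: no per-node code translation step, because the common ISA removes the heterogeneity the Java VM was invented to paper over.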
The patents I have read are tantalizing in their brief discussion of this programming model, using it as the conceptual framework upon which the inventors base their hardware architecture. But it appears to me that this upside-down view of connected computing is essentially independent of architecture.
While it would be difficult to replicate on existing RISC-based architectures, any of a number of dataflow-oriented NPUs would do the job as well. It also seems to me to be independent of programming language and could be implemented in C, C++ and even Java.
What do you think? Is the traditional application VM approach a solution whose time has passed? Can it be adapted to the net-centric computing environment? Is the apulet approach a viable one?
Bernard Cole is site editor for Embedded.com, site leader on