Virtual Machines, Cell's "apulets" and the future of connected computing

I recently read a story describing DartDevices' DartPlayer software as a way to deal with the lack of interoperability among net-centric connected devices. The story got me thinking about "write once, run anywhere" software virtual machines and their future.
At its root, DartPlayer is an application virtual machine much like the Java Virtual Machine (JVM). But where a JVM hides the differences among linked devices, making them all look exactly the same to Java applications, DartPlayer exposes those differences. It uses an instruction set-based technology that allows Dart applications - written in C++ - to run across different devices without recompiling.
Application virtual machines based on the "write once, run anywhere" concept go back 20 or 30 years, reappearing in different guises to solve problems facing the computing industry. They are designed to allow application binaries to run on many different computer architectures and operating systems, using an interpreter or just-in-time (JIT) compilation. A number of virtual machine mechanisms were developed for C, C++ and other programming language environments, but only as explicit add-ons that the programmer incorporated into the application.
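The heart of any such interpreter-based VM is a dispatch loop that executes a portable bytecode one instruction at a time. The sketch below is a deliberately minimal toy stack machine - the opcodes are invented for illustration and do not correspond to the JVM's or any other real VM's instruction set - but it shows why the same bytecode can run unchanged on any host that implements the loop.

```python
# Toy stack-based bytecode interpreter: the dispatch loop at the core of
# most application VMs. Opcodes here are invented for illustration only.

PUSH, ADD, MUL, HALT = range(4)

def run(bytecode):
    """Interpret a list of (opcode, operand) pairs on a stack machine."""
    stack, pc = [], 0
    while True:
        op, arg = bytecode[pc]
        pc += 1
        if op == PUSH:
            stack.append(arg)
        elif op == ADD:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == MUL:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == HALT:
            return stack.pop()

# Bytecode for (2 + 3) * 4. Port run() to a new architecture and this
# program runs there untouched - the essence of "write once, run anywhere".
program = [(PUSH, 2), (PUSH, 3), (ADD, None),
           (PUSH, 4), (MUL, None), (HALT, None)]
```

A JIT compiler replaces this loop with native code generated at run time, trading portability of the executor for speed; the bytecode itself stays portable either way.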
The power of Java's particular implementation of the virtual machine approach was that the VM was an implicit property of the language's programming environment itself. But Java was not the first. I have had experience with at least four previous incarnations of the VM concept as an implicit property of the underlying programming environment: Smalltalk, Pascal's p-code machine, Bell Labs' C@+ (also known as CAT), and AT&T's netcentric OS/language/VM trio: Inferno, Limbo and Dis.
Currently, a dozen or so other application virtual machines are available in addition to DartPlayer - most prominently Sun's Java and PARC's Obje Interoperability Framework - along with some that are less widely used, such as C#'s Common Language Runtime, the Forth virtual machine, the Perl and Python virtual machines and Portable.NET, among others.
And as the recent efforts by DartDevices illustrate, the VM concept can be expected to continue to evolve and adapt to new conditions. The big questions are: How will it evolve? What is occurring in the continually evolving connected computing environment, and how will that change the nature of application VMs? Is there something about the current connected computing environment that requires a fundamental change, away from the traditional application VM model?
For one answer to these questions, it is instructive to look at the basic programming model behind the Cell processor developed by IBM and Sony. In terms of hardware, the Cell engine is essentially a data movement and dataflow-oriented architecture, similar in concept to network processors (NPUs) such as the dataplane portion of Intel's IXP architecture, Agere's APP550 and Xelerated's X11.
What is most interesting about the Cell is the underlying programming model that its inventors assume will become the norm once network bandwidths are high enough to justify distributing even real-time operations over a large number of computing nodes. And one of the things they think will be the first to go is the Java Virtual Machine and its underlying assumptions.
With few exceptions, most of the online discussions of the technical aspects of the Cell architecture pay little attention to this proposed model underlying the hardware. But in the half dozen or so patents applied for and granted, the inventors were explicit in their analysis of the current JVM methods, where they were lacking and what would replace them.
According to the inventors, the Cell processor architecture represents a fundamental shift to a new architectural paradigm that reflects the new connected computing environment. Their assessment of the RISC processors and controllers in current use is that they were all conceived in the era before the Internet and World Wide Web became a mainstream phenomenon and are designed principally for stand-alone computing.
Thus the sharing of data and application programs over a computer network was not a principal design goal of these CPUs. And while they all have a common RISC heritage, the processor environment on the Internet is heterogeneous. The sharing of data and applications among this assortment of computers and computing devices presents substantial problems.
Java is not enough

According to the inventors, the Java Virtual Machine "write once, run everywhere" model - which uses a platform-independent virtual machine running code in interpretive form, rather than compiled to make maximum use of each target processor's resources - is a partial and increasingly unsuccessful attempt to solve this problem.
They point out that it will become even more inadequate as real-time, multimedia network applications become more pervasive. Such net-centric applications will require many thousands of megabits of data per second, and the Java programming model makes reaching such processing speeds extremely difficult.
Therefore, they believe a new network-optimized computer architecture and programming model are required to overcome the problems of sharing data and applications among the various members of a network without imposing added computational burdens. These should overcome the security problems inherent in sharing applications and data among the members of a network.
"Software cells" turn Java upside down
At the core of the Cell's connected computing architecture is a new "software cell"-based programming model for transmitting data and applications over a network and among the network's members. It turns the Java VM model on its head, sending application and data bundled together rather than separately. While it can operate in the Java mode - downloading a platform-independent application to run on a node - it is better described as a "write once, reside anywhere and participate everywhere" programming model.
Specifically, the Cell model combines application and data in the same deliverable "software cell," or apulet, designed for transmission over the network for processing by any like CPU on the network.
The code for the applications is preferably based on the same common instruction set architecture (ISA). Each software cell preferably contains a global identification (global ID) and information describing the amount of computing resources required for the apulet's processing. These uniform software cells contain both data and applications and are structured for processing by any of the processors on the network.
If the application being sent or requested requires more processing than is available locally, additional compute resources elsewhere on the network can, depending on timing constraints, be made available. Since all computing resources have the same basic structure and employ the same ISA, the particular resource performing the processing can be located anywhere on the network and can be dynamically assigned.
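The description above can be sketched as a data structure. The field names and the acceptance check below are my own invention - the patents describe the concept, not a concrete layout - but the sketch captures the key idea: code and data travel together, tagged with a global ID and a declaration of the resources the cell needs, so any node on the network can decide whether it can run it.

```python
# Hypothetical sketch of a "software cell" (apulet). Field names and the
# acceptance test are illustrative assumptions, not the patents' actual layout.

from dataclasses import dataclass
import uuid

@dataclass
class SoftwareCell:
    global_id: str        # unique identification across the whole network
    code: bytes           # program binary in the common ISA
    data: bytes           # the operands that code will process
    required_memory: int  # declared resource requirements, so any node
    required_cycles: int  # can decide whether to accept the cell

def make_cell(code: bytes, data: bytes, mem: int, cycles: int) -> SoftwareCell:
    """Bundle code and data into one deliverable cell with a fresh global ID."""
    return SoftwareCell(str(uuid.uuid4()), code, data, mem, cycles)

def can_run(cell: SoftwareCell, free_memory: int, free_cycles: int) -> bool:
    """A node accepts a cell only if it can cover the cell's declared needs -
    otherwise the cell is forwarded to some other node on the network."""
    return (free_memory >= cell.required_memory and
            free_cycles >= cell.required_cycles)
```

Because every node implements the same ISA, the `can_run` decision is purely about capacity and timing, never about compatibility - which is what makes the dynamic assignment the inventors describe possible.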
The patents I have read are tantalizing in their brief discussion of this programming model, using it as the conceptual framework upon which the inventors base their hardware architecture. But it appears to me that this upside down view of connected computing is essentially independent of architecture.
While it would be difficult to replicate on existing RISC-based architectures, any of a number of dataflow-oriented NPUs could do the job as well. It also seems to me to be independent of programming language and could be implemented in C, C++ or even Java.
What do you think? Is the traditional application VM approach a solution whose time has passed? Can it be adapted to the netcentric computing environment? Is the apulet approach a viable one?
Bernard Cole is site editor for Embedded.com, site leader on iApplianceweb, as well as an independent editorial services consultant working with high technology companies. He welcomes your feedback. Call him at 602-288-7257 or send an email to firstname.lastname@example.org.