As a very high-level programming language, Java offers programmer and software-maintenance productivity benefits that range from two- to ten-fold over C and C++. By carefully applying Java technologies to embedded real-time systems, software engineers can deliver higher software quality, increased functionality, and greater architectural flexibility in software systems.
However, because of its heavy reliance on automatic garbage collection, traditional Java programming is problematic in hard real-time and safety-critical embedded system designs. Years of research and experimentation have resulted in recommended alternatives to traditional garbage collection that are more appropriate for this class of applications.
One of the costs of automatic garbage collection is the overhead of implementing sharing protocols between application threads. Application threads continually modify the way objects relate to each other within memory, while garbage collection threads continually try to identify objects that are no longer reachable from any thread in the system.
This coordination overhead is one of the main reasons that compiled Java programs run at 1/3 to 1/2 the speed of optimized C code. The complexity of the garbage-collection process, and of any software that depends on garbage collection for reliable execution, is beyond the reach of cost-effective static analysis to guarantee compliance with all hard real-time constraints. Thus, we do not recommend the use of automatic garbage collection for software that has hard real-time constraints.
For this reason the Open Group's Real-Time and Embedded Systems Forum is developing a set of guidelines based on making effective use of traditional standard-edition Java in combination with appropriate profiles of the Real-Time Specification for Java (RTSJ), defined by JSR-1 under the Java Community Process.
The Real-Time Specification for Java provides a very general framework for tackling a wide variety of real-time programming challenges. A profile is recommended for hard real-time and safety-critical systems to improve portability and efficiency, and to reduce complexity, as is required in order to achieve safety certification objectives.
These approaches are scalable in the sense that independently developed software components can be reliably and effortlessly combined into larger software systems, and code written for subset profiles (e.g., the safety-critical profile) can be repurposed for use on the larger profiles (e.g., the hard and soft real-time profiles).
Hard real-time systems are those in which an action performed at the wrong time has zero or possibly negative value. The connotation of “hard real time” is that compliance with all timing constraints is proven using theoretical static analysis techniques prior to deployment.
Soft real-time systems are those in which an action performed at the wrong time (either too early or too late) has some positive value, even though it would have had greater value if performed at the proper time.
To allow developers to maintain many of the productivity advantages of standard Java in most embedded applications, special real-time virtual machines have been implemented to support preemptible and incremental operation of the garbage collector.
With these virtual machines, the interference by garbage collection on application code can be statistically bounded, making this approach suitable for soft real-time systems with timing constraints measured in the hundreds of microseconds.
The difference between hard real-time and soft real-time does not depend on the time ranges specified for deadlines or periodic tasks. A soft real-time system might have a deadline of 100 µs, while a hard real-time system may have a deadline of 3 seconds. Rather, the crucial distinction is the certainty of meeting the specified timing constraints. Hard real-time systems are provably deterministic and can be analytically guaranteed, whereas the assurance of meeting the constraints of soft real-time systems is based on empirical measurements and heuristic techniques.
Safety-critical Java code is software that must be certified according to DO-178B or equivalent guidelines. Certification guidelines impose strict requirements on software practices, including peer review, traceability analysis, and software testing.
The Thread Stack Memory Model
The need to support temporary memory allocation within real-time programs is well motivated. At the same time, there is general agreement that developers of hard real-time components do not require the full generality and flexibility offered by automatic garbage collection.
To address hard real-time programming constraints, the RTSJ uses the notion of scoped memory as an alternative to automatic garbage collection. The LTMemory data type represents a memory scope within which objects may be allocated. The RTSJ run-time environment maintains a reference count to record how many components are currently interested in each scope. When the reference count reaches zero, all of the objects allocated within the LTMemory scope are reclaimed.
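The reference-counted reclamation just described can be sketched in plain Java. This is a toy model of the semantics only: the Scope class below is hypothetical and is not the real javax.realtime.LTMemory API, which cannot run outside an RTSJ virtual machine.

```java
import java.util.ArrayList;
import java.util.List;

// Toy model of RTSJ-style scope semantics: objects allocated while the
// scope is entered are reclaimed together once the scope's reference
// count drops to zero. Hypothetical sketch, not the javax.realtime API.
public class ScopeModel {
    static class Scope {
        private int refCount = 0;
        private final List<Object> allocated = new ArrayList<>();

        void enter(Runnable logic) {
            refCount++;                 // one more component using the scope
            try {
                logic.run();            // allocations inside run() land here
            } finally {
                if (--refCount == 0) {
                    allocated.clear();  // last user left: reclaim everything
                }
            }
        }

        Object allocate() {             // stand-in for 'new' inside the scope
            Object o = new Object();
            allocated.add(o);
            return o;
        }

        int liveObjects() { return allocated.size(); }
    }

    public static void main(String[] args) {
        Scope scope = new Scope();
        scope.enter(() -> {
            scope.allocate();
            scope.allocate();
            System.out.println("inside scope: " + scope.liveObjects());
        });
        // After the last enter() returns, the scope's objects are gone.
        System.out.println("after scope exit: " + scope.liveObjects());
    }
}
```

Note that reclamation here is a single bulk release when the count reaches zero, which is what makes scope exit cheap and predictable compared with tracing garbage collection.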
This service makes it possible to allocate new memory within a dynamic scope in time proportional to the size of the allocation request. However, a developer of hard real-time software faces several significant difficulties with the use of this abstraction:
1. Knowing how big to make an LTMemory region in order to reliably support the execution of a particular real-time component is quite difficult and error prone. Furthermore, the appropriate size is highly non-portable between different compliant RTSJ implementations.
2. Instantiation of an LTMemory region is not a hard real-time operation. There is no bound on how much time it will take, and there is, in fact, no guarantee that a request to instantiate an LTMemory region will succeed even if there is sufficient available memory in the system at the time of the request. This is because memory may become fragmented during the course of a program's execution.
Many RTSJ programmers overlook these difficulties with use of the LTMemory abstraction. They routinely allocate and discard LTMemory objects, and successful execution of test programs may instill false confidence that the code will work reliably in the field.
This is a very dangerous practice, because it is not generally possible to test all of the different ways that the allocation pool might become fragmented. Further, the program may not behave the same way if it is moved to a different vendor's compliant RTSJ implementation, or even if the same vendor provides a new maintenance release of the same RTSJ implementation.
RTSJ programmers who understand and appreciate the risks of memory fragmentation find that the only way to use LTMemory abstractions reliably and safely in their hard real-time code is to allocate all of the LTMemory objects that their application might need during initialization of the application. This adds significantly to the difficulty of implementing and maintaining the software, and adds considerably to the amount of memory required for reliable execution, since many of the LTMemory instances allocated during startup sit idle throughout most of the program's execution.
Although the safety-critical Java profile also supports the notion of scoped memory, it hides the RTSJ APIs that manipulate memory scopes. Instead of relying on these APIs, safety-critical Java programmers describe their intentions with respect to the use of scoped memory by using annotations, which can be statically analyzed and enforced at compile time.
Like the RTSJ, the safety-critical Java profile also supports the notion of immortal memory, which represents the outermost memory scope. Objects allocated within the immortal memory region will not be reclaimed. As with inner-nested memory scopes, safety-critical Java programmers use annotations to describe their use of immortal memory as well.
The hard real-time Java profile addresses scoped memory reliability and maintenance issues by providing static analysis tools to determine the amount of memory required to execute particular program components, and by requiring all creation and destruction of scopes to follow a strict LIFO (stack) ordering.
LTMemory objects at work
At startup, all of the temporary memory available to support execution of the program is set aside as the run-time stack for the main hard real-time Java program. If the application needs to support more than a single thread, the main program carves memory from its run-time stack to represent the run-time stacks for each of the threads it spawns.
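The budget arithmetic behind this carving can be sketched as follows. The sizes are invented for illustration; in a real system, as described below, the per-thread stack sizes come from static analysis rather than hand-picked constants.

```java
// Sketch of carving per-thread stacks out of the main thread's region of
// temporary memory. All sizes here are hypothetical illustration values.
public class StackBudget {
    public static void main(String[] args) {
        long mainStack = 64 * 1024;                       // total temporary memory
        long[] childStacks = { 8 * 1024, 12 * 1024, 4 * 1024 }; // three spawned threads

        long reservedForChildren = 0;
        for (long size : childStacks) {
            reservedForChildren += size;                  // carve each child stack
        }
        long remainingForMain = mainStack - reservedForChildren;

        System.out.println("reserved for children: " + reservedForChildren);
        System.out.println("remaining for main: " + remainingForMain);
    }
}
```

Whatever is not reserved for the children remains available for the main thread to continue populating its own stack.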
Figure 1: Main thread stack after spawning three threads
Figure 1 above illustrates the organization of the main thread's run-time stack immediately after it has spawned three new threads. This illustration assumes that all three threads were spawned from the same context within the main thread. Note that space has been reserved within the main thread's stack to allow the main thread to continue to populate its run-time stack.
Note also that it is essential at this point in the program's execution to know the amount of memory that must be reserved to represent each of the spawned threads' run-time stacks. These stacks need not be the same size. In a typical application, the size of each stack is custom-tailored to the needs of the given thread. The hard real-time Java platform automatically determines the stack sizes for components that are declared using the @StaticAnalyzable annotation.
As execution proceeds, each of the three spawned threads and the main thread will continue to populate their respective stacks. Assume the stack memory is organized as shown in Figure 2, below, at a subsequent execution point.
Figure 2: Stack organization after each thread has populated its stack
The scoped memory usage guidelines, as defined in the RTSJ, allow inner-nested objects to refer to objects residing in more outer-nested scopes, but forbid references that go in the opposite direction. Figure 3 illustrates a number of allowed object reference relationships, and Figure 4 illustrates several disallowed object reference relationships.
Figure 3: Allowed references between nested scope-allocated objects
Note that these scope-nesting restrictions guarantee that there will never exist a dangling pointer from an outer-nested object to an inner-nested memory location that no longer exists because its inner-nested scope has been reclaimed.
As shown in Figure 4, below, note that besides disallowing references from high-memory addresses to low-memory addresses, these rules also prohibit references from spawned threads to that portion of an ancestor thread's stack that was populated after the point at which the child was spawned.
Figure 4: Disallowed references between scope-allocated objects
This usage protocol requires that any data structures that need to be shared between multiple threads must reside either in immortal memory (the outermost scope, which is never reclaimed) or within the parent or some other ancestor thread's stack above the point at which the descendant thread was spawned.
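Ignoring the thread-spawning refinement for a moment, the basic direction rule can be captured by comparing nesting depths. The following is a simplified sketch; the depth numbering (0 for the immortal, outermost scope, larger numbers for more inner-nested scopes) is an assumption made for illustration, not part of the RTSJ API.

```java
// Toy check of the scope-reference direction rule: an object may refer to
// objects in its own scope or in a more outer-nested scope (smaller depth),
// but never to a more inner-nested scope, which could be reclaimed first.
// Depth 0 models immortal memory; this numbering is an illustration only.
public class ScopeRefCheck {
    // true if an object at nesting depth 'from' may legally hold a
    // reference to an object at nesting depth 'to'
    static boolean isLegalReference(int from, int to) {
        return to <= from;   // outward (or same-scope) references only
    }

    public static void main(String[] args) {
        System.out.println(isLegalReference(3, 1)); // inner -> outer: allowed
        System.out.println(isLegalReference(1, 3)); // outer -> inner: forbidden
        System.out.println(isLegalReference(2, 2)); // same scope: allowed
    }
}
```

The forbidden case is exactly the dangling-pointer hazard noted above: the inner scope may be reclaimed while the outer object, and its reference, live on.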
Shared objects do not necessarily need to exist at the time the subthreads are spawned, but the memory allocation context within which the shared object will eventually be allocated must be set aside within the parent thread's stack before the point at which the child thread is spawned.
Once the embedded system programmer has an understanding of this scoped-memory alternative to garbage collection, developing code appropriate to the application is relatively straightforward.
Within the hard real-time and safety-critical development environments, the use of scope-allocated memory is facilitated by consistency checking performed by a special byte-code verifier, based on programming annotations supplied in the standard libraries. This is discussed further in subsequent articles of this series.
Next in Part 2: Guidelines for soft real-time Java development
Kelvin Nilsen, Ph.D. is chief technology officer at Aonix North America.
Guidelines for Scalable Java Development of Real-Time Systems, available at http://research.aonix.com/jsc/rtjava.guidelines.3-28-06.pdf