Implementing dual OS signal processing using Linux and the DSP/BIOS RTOS

The classical trade-off between system performance and ease of programming is one of the primary differentiators between general-purpose and real-time operating systems.
GPOSes tend to provide a higher degree of resource abstraction. This improves application portability and ease of development, and increases system robustness through software modularity and isolation of resources.
This makes a GPOS ideal for addressing general purpose system
components such as networking, user interface and display management.
However, this abstraction sacrifices the fine-grained control of system resources required to meet the performance goals of computationally intensive algorithms such as signal processing code. For this level of control, developers typically turn to a real-time operating system (RTOS).
From an embedded signal processing standpoint, there are essentially two types of OSes to consider: Linux, a general-purpose operating system, and DSP/BIOS, a real-time operating system. Linux offers a higher level of abstraction, while DSP/BIOS provides finer control.
To leverage the strengths of both alternatives, developers can use a system virtual machine, which allows them to run Linux and DSP/BIOS concurrently on the same DSP processor.
(Editor's note: Unlike process virtual machine environments specific to particular programming languages, such as the Java VM, system virtual machines correspond to actual hardware and can execute complete operating systems in isolation from other similar instantiations in the same computing environment.)
An important question to ask however, is why not simply use a CPU+DSP combo running Linux and DSP/BIOS separately? CPUs are, after all, more efficient at running control code for user interfaces, etc. And separate cores avoid the overhead associated with virtualization. However, putting all functionality onto one chip is attractive for several reasons.
For one, today's high performance DSPs are much more powerful than
previous generation DSPs. This frees up more cycles for control
processing. In addition, most high-performance DSPs are more
general-purpose than they used to be, allowing for more efficient
control code processing.
If all functionality can fit on a DSP, the benefits are compelling. One less chip translates to lower cost and board area, as well as lower energy consumption, because power-hungry interprocessor data transfers are eliminated.
One of the most beneficial and commonly used aspects of any operating system is the ability to execute multiple tasks or threads concurrently. The operating system employs a scheduler to manage the processing core, serially ordering tasks for execution.
A historical concern of embedded programmers when using Linux was the lack of real-time performance. However, recent improvements to the Linux kernel have greatly improved its responsiveness to system events, making it suitable for a broad class of enterprise, consumer and embedded products.
Linux provides both time-slicing and priority-based scheduling of threads. Time slicing shares processing cycles among all threads so that none are locked out. This is often useful for user interface functions: if the system becomes overloaded, responsiveness may slow, but no user function is lost entirely.
Priority-based thread scheduling, on the other hand, guarantees that the highest priority ready thread in the system executes until it relinquishes control, at which time the next highest priority ready thread begins executing.
The Linux kernel re-evaluates the priorities of ready threads upon each transition from kernel to user mode. This means that any new kernel-evaluated event, such as data becoming ready on a driver, can trigger an immediate transition into a new thread (within the latency response of the scheduler). Due to the determinism of priority-based threads, they are often useful for signal processing applications where real-time requirements must be met.
Prior to version 2.6 of the Linux kernel, the main drawback to real-time performance was the fact that the kernel would disable interrupts, in some cases for hundreds of milliseconds. Disabling interrupts allows a more efficient kernel implementation, because code sections need not be made reentrant while interrupts are off, but it adds latency to interrupt response.
With version 2.6, a build option is available that re-enables interrupts much more frequently throughout the kernel code. This feature is often referred to in the Linux community as the preempt kernel, and while it degrades kernel throughput slightly, it greatly improves real-time responsiveness. For many system tasks, the preemptive Linux 2.6 kernel used with real-time threads provides sufficient performance to meet real-time needs.
The Texas Instruments DSP/BIOS, by contrast, supports only priority-based scheduling, in the form of Software Interrupts and Tasks. As with the Linux scheduler, these Software Interrupts and Tasks are preemptive. However, DSP/BIOS also provides application programmers with direct access to hardware interrupts, a resource available only in kernel mode under Linux.
Direct access to hardware interrupts allows application programmers to achieve the theoretical minimum latency response supported by the underlying hardware. For applications such as control loops, where the absolute minimum latency is required, this fine-grained control over hardware interrupts is frequently a valuable feature.
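As a rough illustration of this direct access, and assuming the DSP/BIOS 5.x HWI module API (the vector number, ISR body, and exact attribute handling are assumptions here; exact names vary by BIOS release, and this requires the TI toolchain to build), plugging an interrupt service routine might look like:

```c
/* Sketch only: DSP/BIOS 5.x HWI module, TI code generation tools assumed. */
#include <std.h>
#include <hwi.h>

static Void myIsr(Void)
{
    /* Do minimal work at interrupt level: capture the sample, then
     * post a Software Interrupt or Task for the heavy processing. */
}

Void plugInterrupt(Void)
{
    HWI_Attrs attrs = HWI_ATTRS;  /* default dispatcher attributes */

    /* Route hardware interrupt vector 5 (an illustrative choice) through
     * the BIOS dispatcher to myIsr; -1 means no DMA channel is used for
     * the register context save. */
    HWI_dispatchPlug(5, (Fxn)myIsr, -1, &attrs);
    HWI_enable();  /* globally enable hardware interrupts */
}
```

The key point is that this plumbing lives entirely in application code, with no kernel-mode boundary between the hardware event and the handler.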