This “Product How-To” article focuses on how to use a certain product in an embedded system and is written by a company representative.
Today, Linux is broadly used for development of embedded appliances such as DTVs, set-top boxes, DVR players, xDSL/cable/PON modems, home routers and gateways. It is especially well-suited for digital home and home networking applications, with advanced networking capabilities, wide availability of device drivers and no run-time royalty costs.
Beyond embedded appliances, Linux also powers enterprise-class appliances like servers and routers. Linux is the #1 embedded operating system in China, Taiwan, South Korea, and the rest of Asia.
Linux is also gaining traction in mobile devices. At the 2009 Consumer Electronics Show (CES), we saw several Android-based netbooks. According to market research firm In-Stat, mobile Linux will grab significant market share in China. The firm says that by 2012, total shipments of mobile Linux-based smartphones in China will account for about 25.4% of total shipments of smartphones in China.
Linux has proliferated because, compared to a typical proprietary commercial OS, it is open source, inexpensive, fast and secure. It also scales better and has a smaller footprint than many other operating systems.
With Linux, engineers have access to and control over the source code, along with ongoing support from the open source community—a large community comprised of engineers familiar with Linux.
Continuing Growth and Proliferation of Linux
As Linux continues to gain market share, there are still challenges for certain embedded applications, including those with small footprints or high levels of real-time determinism or security.
Improvements in toolchains, new debug tools and features, and evolving standardization efforts will continue to increase the value of Linux for these and all embedded applications. Ongoing improvements to the Linux kernel are of the utmost importance.
The Linux kernel is the interface between the standard Linux API (Application Programming Interface) used by application software, and the underlying hardware structure of the processor system that the application software is running on.
The kernel is a complex combination of internal components and externally loadable modules that provides a complete and stable processing environment for the application programs to execute, and provides the ability to safely trap software bugs, and to some extent hardware failures.
During the boot cycle, the kernel must recognize, then properly initialize the core system processor(s), system memory, hard drives, video cards, USB ports, network cards and audio processors in a timely manner and provide adequate indication of success or failure during this boot cycle.
Maintaining such a complex collection of code is obviously a difficult challenge. The kernel source code is separated into a standard 'tree' structure such that subsystems can be better isolated from one another, allowing the effort to be distributed among several key kernel maintainers.
This division of labor minimizes the effect of major changes in one part of the kernel on other parts of the kernel. Changes to each subsequent subsystem get rolled up to the key maintainers and then eventually to the top maintainer of the Linux kernel. These collections of changes are known as 'patches' and are created and applied in a standard format.
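Kernel patches follow the unified-diff conventions that the maintainers expect, with a subject line identifying the subsystem and a Signed-off-by line recording the submitter. The following is a hypothetical, illustrative patch (the file path and change are invented for the example, not a real submission):

```
From: Jane Developer <jane@example.com>
Subject: [PATCH] MIPS: Fix example register initialization

Signed-off-by: Jane Developer <jane@example.com>
---
 arch/mips/kernel/example.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

--- a/arch/mips/kernel/example.c
+++ b/arch/mips/kernel/example.c
@@ -10,7 +10,7 @@
-	init_reg(0);
+	init_reg(1);
```

Because every change travels in this standard form, subsystem maintainers can review, apply and forward patches mechanically as they roll up the tree.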
Improvements to the kernel are made as vendors and developers give back their knowledge to the open source community. The open, decentralized nature of Linux, backed by strong developer communities, makes Linux-based operating systems a good choice for cultivating innovation.
Because vendors and developers continually share what they have learned, chances are that when a developer needs a component, it will already be available somewhere in the Linux community ecosystem, and can be adapted.
Much of the work on making Linux a more viable and appealing operating system for embedded applications comes from commercial Linux vendors.
Commercial versus Open Source
Embedded Linux developers have two main options when selecting a Linux OS: “free” versions that are available as downloadable object/source code and covered by open source licenses, and commercial distributions that are maintained and supported by companies as commercial products.
Most developers today use free distributions. A 2007 survey by market research firm VDC asked current Linux users which Linux OS they would use in their next project, and a whopping 71% of the embedded system engineers who responded said they would use a free Linux distribution (Figure 1, below).
|Figure 1. Linux Operating System planned for next project (Source: VDC, 2007)|
As embedded developers choose the Linux distribution for their next design project, it is important that they recognize the limitations of free distributions.
These limitations include the lack of availability of rich Linux tools, a larger footprint than competing real-time operating systems, the challenge of optimizing Linux to address real-time requirements, and the fact that support and development tools for free distributions of Linux are limited at best. With all of these considerations, “free” may mean free to use, but it is not necessarily free of cost.
“Free” to Use, Not Free of Cost: The Debug Challenge
Beyond the decision of whether to choose a commercial or free distribution of Linux, developers must understand the capabilities and limitations of their debug/development tools.
Any meaningful design in the embedded SoC domain requires well-integrated software development tools targeted specifically at the embedded space. Open source tools—from compilers to applications—are expected to be production quality. And developers expect that tools will work together seamlessly with a small learning curve.
Leveraging free distributions of embedded Linux has become a widely accepted practice in the consumer product space. In these markets, product run rates are high, enhancements to the code are frequent, and cost of goods sold is critical; all of which make the open source model attractive.
But while the general expectation is for open source tools to be close to production-quality, “free” and “commercially available” are not synonymous. Tools that can integrate open source/freeware and also provide a seamless debug environment for a processor core require a deep understanding of the core and SoC component interaction. Expert knowledge is required to make today's open source tools work.
There are several “freeware” debug solutions available, and designers need to fully understand their limitations. For example, probably the most popular freeware debugger for the Linux kernel is KGDB.
The major drawback to KGDB is the requirement to recompile the kernel. This is not always possible for applications where the product is already deployed to the field. Patching the kernel can also introduce code changes that can affect system performance.
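Concretely, KGDB support must be built into the kernel image itself. The options below are a sketch of a typical kernel `.config` fragment; exact option names vary by kernel version and architecture:

```
# Enable KGDB, with debugging over a serial console
CONFIG_KGDB=y
CONFIG_KGDB_SERIAL_CONSOLE=y
# Debug symbols are required for meaningful source-level debug
CONFIG_DEBUG_INFO=y
```

It is this rebuild-and-redeploy requirement that rules KGDB out for many fielded products.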
GDBServer is another popular freeware debugger for application debug. It too is severely limited. A major issue is the lack of support for simultaneously debugging a large number of threads/processes.
As the number of threads/processes being debugged goes up, the performance of GDBServer deteriorates rapidly, to the point that response times are so slow they can cause the target system to fail. Additional concerns include the inability to debug drivers and applications over the same target connection, and difficulties debugging device drivers and shared libraries.
The bottom line is that it's extremely important to understand the debug demands for your particular application, and choose your tools accordingly.
Care and Feeding of Linux on MIPS processors
As developers determine which OS they will choose, selecting one that is fully supported for their specific processor is important for reducing overall costs and time-to-market.
This means that much of the kernel maintenance is already being handled, so if they have issues or challenges, the processor vendor and its ecosystem can help solve them. Plus, by using a kernel release that has been officially tested by the vendor, a customer is reducing—if not eliminating—the risk of problems as the design continues.
The processor-specific Linux ecosystem can enable quick and accurate isolation of differences in customer applications against a set of standard configurations and testbenches.
From the processor vendor's point of view, it's critically important to be actively involved in the maintenance of the Linux kernel that is relevant to its cores.
These vendors are in the best position to implement additional new features, and also to increase the stability of legacy core features, since they know all the intimate implementation details of the core design.
As a processor IP company, MIPS Technologies must ensure that its new processor cores are properly integrated into the Linux source code tree, and that these changes are properly validated through regression testing on legacy cores and platforms.
MIPS has its own key kernel maintainer who must 'sign off' on (approve) every suggested 'patch'. These patches are submitted by MIPS and our customers, and they address not only new core features and enhancements, but also improvements to existing core support and the occasional bug fix.
While MIPS IP cores are proprietary, it is in the best interest of both MIPS and the Linux community to ensure that all performance and power management features available in MIPS cores are fully implemented in the Linux kernel.
This provides the best user experience for customers using Linux as the core OS of the design, and also allows for peer review of the kernel enhancements by hundreds of Linux kernel programmers. This will only add to the stability and robustness of core or architecture-specific patches.
Support for new core designs must not break or degrade the existing core support structure, and must also allow existing customers to quickly migrate to a new core technology with minimal internal effort.
The configurable nature of MIPS cores makes the maintenance of the Linux kernel code base even more challenging, since many combinations of core configurations must be tested to ensure the newly added features are functional across all combinations.
Kernel Optimization: Multi-core Support
One area where we spend a lot of effort on Linux kernel optimizations is for multi-core support. Today, with the aim of achieving the best computing power per area (MIPS/square millimeter) and computing power per unit power (MIPS/mW), many processors leverage multi-core technology to distribute the processing load across many cores running at a lower clock frequency.
The applications can be distributed in a symmetric fashion known as Symmetrical Multiprocessing (SMP), in which a task is more or less equally shared among the cores; or Asymmetrical Multiprocessing (AMP), in which specific tasks are assigned to a specific core. In either case, proper support must be available in the Linux kernel to allow these types of programming models to be implemented, while being as transparent to the application developer as possible.
MIPS Technologies' multi-threaded 34K core and multi-threaded/multiprocessing 1004K core require slightly different approaches to multi-core management within the kernel, since the 34K core provides the facility of multiple virtual cores or VPEs (Virtual Processing Elements) on one physical instance of a single core, and the 1004K core provides a coherent implementation of a multi-core device.
For each of these cores, the Linux kernel multi-core support and optimizations we implement must be able to correctly identify the core in use, and properly initialize and implement the specific multi-core features seamlessly.
The implementation model of task sharing in a 34K-based device must account for the fact that one physical core appears as more than one virtual core, and that these virtual cores are not automatically kept coherent.
This type of multi-core environment in some cases lends itself better to an AMP environment with a separate operating system running on each VPE. The true coherent multi-core design of the 1004K core makes traditional SMP models more attractive where a single operating system has full control of both cores.
Kernel Optimization: Power Management
Another example of the importance of Linux kernel optimizations is in power management. In today's green computing environment, power management is increasingly important, not only for portable devices requiring maximum battery life, but also for minimizing wasted energy and heat in A.C.-powered systems.
A typical modern cell phone must manage in excess of 20 different power planes, not including the power islands within the application processor SoC and the core itself.
Current Linux kernel power management support concentrates mainly on standard PCs through ACPI (the Advanced Configuration and Power Interface). ACPI is, however, not suitable for advanced multi-core SoCs, which must extend a coherent power management scheme across cores, internal SoC peripherals and, finally, external system peripherals such as RF power amplifiers.
At MIPS, we implemented an advanced power management IP block called the Cluster Power Controller, or CPC (Figure 2, below), which allows for individual control of every core within a specific 1004K implementation. Cores can be brought into and out of coherent operation and, if required, powered down completely.
This power management model can be further extended to bring core voltage and frequency modulation under control of the operating system itself. The functionality of this CPC block must also be extended into the Linux kernel.
We are currently architecting the foundation of this power management structure to implement a comprehensive API to both the Linux kernel itself and to applications running under the standard Linux application space.
|Figure 2. The Cluster Power Controller allows for individual control of every core within a specific multi-core implementation.|
Development Tools for Linux
When looking at processor support for an OS, development tools are critical. The latest generation of Linux development tools takes advantage of On-Chip Instrumentation to “hardware assist” the debugger.
These tools are architecture-specific, and not all processors support this unique approach to debugging. For example, there are Linux tools available that can profile the Linux kernel and loadable modules.
These tools rely on the processor being able to transparently sample the PC register at extremely high rates and pass this information to the debugger.
Coupled with symbolic information from loadable modules (the typical form of a Linux device driver), a developer can quickly profile the Linux kernel and determine what demands the device drivers may be placing on the kernel. Optimizing the performance of the Linux kernel can have a huge impact on system performance.
Complementing kernel profiling tools are Linux event analyzers that are able to profile the entire system. Typically these tools capture user-selected Linux events occurring on the target, and then graphically display the events over time. Captures can sometimes collect up to 20 seconds of Linux system activity.
Regardless of the application, developers should make sure that the processor architecture they choose includes a seamless development environment that includes compilers, debuggers and performance and profiling tools.
Tools of this nature are mandatory to meet time to market requirements and to extract maximum performance from an embedded system design. Investing in fully integrated and tested vendor-provided tools and environments—including complete documentation, support forums, call centers, undocumented insight, integration, standards, and a connection to an entire ecosystem—can reduce time to market for current and future designs.
All for One and One for All: The Linux Community
The Linux kernel has evolved over the years to become one of the most scalable and reliable operating systems, powering embedded appliances from low-end, single-core devices to high-end, multi-core systems.
The availability of a stable, highly portable Linux kernel, hundreds of supporting royalty-free middleware components, thousands of Linux developers, and a growing number of commercial Linux software and service providers means that Linux is an effective operating system, both in terms of time to market and cost of development.
We encourage all developers to consider Linux for their next RTOS, and to look for a processor vendor that provides the dedicated Linux support, vast ecosystem, and debug/development tools that are needed to bring quality products to market quickly.
And when developers choose to leverage Linux, we encourage them, whether they are using a “free” or commercial distribution, to give back to the community. It is only through continued maintenance, care and feeding of the kernel that Linux will continue to grow and evolve as the RTOS of choice for the next generation of embedded applications.
Rick Leatherman is a Vice President for Development Tools at MIPS Technologies. He was the founder of First Silicon Solutions (FS2) which was acquired by MIPS in 2005. He has over 20 years' experience in development tools at Intel, Microtek and Microcosm.
Yakov Levy is a Strategic Marketing Manager for MIPS Technologies. He has 25 years of experience in the semiconductor and silicon intellectual property (SIP) industries. Prior to joining MIPS in August 2006, Mr. Levy held various positions with Adimos Inc., Transmeta Corporation and National Semiconductor.
Bob Martin is an Applications Engineer for MIPS Technologies, having joined the company in 2007. He was previously Principal Systems Engineer at Portal Player for two years. Mr. Martin has also held engineering positions for companies including National Semiconductor and Scientific Instrumentation Ltd.