This series on how to get started using Embedded Linux is on my Open Mike blog. This installment discusses development models.
There are two different models of Embedded Linux development: cross-platform and self-hosted.
The traditional model for Embedded Linux (and all embedded system development) is cross-platform development. In this development style, you create your software on a powerful host system (like your desktop computer) and then transfer the binary image to a much smaller target system. Your host computer has many features which make it an excellent development environment: a fast processor, lots of memory and disk space, a big display, and all of the tools you need. You have documentation and access to the Internet for articles, support, or software.
Your target system is likely designed for a specific application, such as a router, media player, file server, or controller. The processor is generally limited in power and speed, selected to meet the requirements of the application. Often, the processor architecture for the target system is different from your host system's, selected for low price or integrated peripherals. The target's system memory, both RAM and persistent, is limited to just the amount required to run the application programs. Connections from the target system to the “outside world” are only those needed to run the application. For example, a file server would have a network connection and connections for hard disks, but no display or keyboard connections.
Figure 1. Typical cross-development setup for working with an embedded board, in this case a Linksys WRT54G router.
You use cross-compilers on the host system to compile your kernel and application programs into a binary image that you transfer to the target system. The cross-compiler, as well as a cross-debugger, may be different versions than the host system compiler and debugger, designed to support the target processor. For the most part, they work exactly the same as the corresponding tools on the host system. Other than the cross-development tools, you use the host tools for everything else, from editing files, to building the kernel or applications. The versions of the kernel or application programs used for the target system may be different from those on the development system. Your development system may have a standard distribution, like Ubuntu or RHEL, while your target system has a newer kernel which is still under development and may not be completely stable.
There are some added complexities with the cross-platform development model. When you compile your program for the target, it is important that the system header files for the target are used, and not the system headers for the host. You can imagine the problems which might occur if one of your compiles incorrectly uses a host system header which defines a long to be 64 bits when the target uses 32-bit longs. Most of the time, the cross-compiler takes care of ensuring that the headers for the target system are used when compiling programs for the target.
You will need to build a complete file system, including the kernel, application programs, and all of the directories needed to populate a Linux file system. There are several tools which will help do this, such as buildroot, OpenEmbedded, or Yocto. We'll discuss these in a future article.
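Whatever tool you use, the result is just a populated directory tree. A minimal sketch of the skeleton such a tool fills in (directory names follow the usual Linux layout; the `rootfs` name is arbitrary):

```shell
# Create the bare directory skeleton of a Linux root file system.
# A build tool like buildroot would populate these with busybox,
# libraries, init scripts, and device nodes.
mkdir -p rootfs/bin rootfs/sbin rootfs/dev rootfs/etc rootfs/lib \
         rootfs/proc rootfs/sys rootfs/tmp \
         rootfs/usr/bin rootfs/usr/lib rootfs/var
ls rootfs
```

`proc` and `sys` stay empty in the image; the kernel mounts virtual file systems on them at boot.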
Another complexity is that the file system on the target may be different from the host. Many targets use flash memory for storage, which might use the Journaling Flash File System (JFFS) instead of the EXT file system used on hard drives on the host. You will need to convert the file system on the host into the binary format needed to transfer to the target and write to the flash memory.
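The conversion step is typically a single command from the `mtd-utils` package. A sketch, assuming a `rootfs/` directory tree holding the target file system and a flash part with 128 KB erase blocks (both assumptions; the right erase-block size comes from your board's flash datasheet):

```shell
# Pack a directory tree into a JFFS2 image ready to write to flash.
# --eraseblock must match the flash chip's erase-block size;
# --pad fills the image out to an erase-block boundary.
mkfs.jffs2 --root=rootfs --output=rootfs.jffs2 --eraseblock=0x20000 --pad
```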
Depending on how the target boots, you might need to configure a TFTP server and DHCP server on your host. That usually means that you will use a different network connection on the host system to connect to the target and not the one you use to connect to the Internet. (Corporate IT departments are understandably unhappy when they discover rogue DHCP servers responding to requests on the corporate network.)
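One common way to provide both services is `dnsmasq`, which can act as a DHCP and TFTP server on a single interface. A sketch of a configuration bound to a second NIC (the interface name, address range, and TFTP root are example values, not anything specific to this article's setup); on a real host this would go in `/etc/dnsmasq.conf` rather than a local sample file:

```shell
# Sample dnsmasq configuration: DHCP plus TFTP on a dedicated
# second interface, kept off the corporate network.
cat > dnsmasq.conf.sample <<'EOF'
interface=eth1
dhcp-range=192.168.2.100,192.168.2.150,12h
enable-tftp
tftp-root=/srv/tftp
EOF
```

Binding to one interface (`interface=eth1`) is what keeps the rogue-DHCP problem described above off your office network.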
Finally, there's the problem of communication between the host system and the target system. There are many ways that you might create a connection between the host system and the target system. These include JTAG or another low-level connection to the processor, which might or might not allow you to transfer binary images. There's the venerable serial port, usually used as a system console, which remains pretty common on target hardware, even if they seem to be scarce on development systems. (You can use a USB to serial adapter, as shown in Figure 1.) Many target systems have network ports which allow you to connect them to a network. These targets might support booting from a binary image on the host using TFTP, rather than having to write the image to a flash file system, and you may be able to run with the target root file system on an NFS server.
We'll discuss all of these additional complexities in future articles.
Self-hosted development
With this style of development, you are developing programs on the target itself. This is a reasonable approach when the target system has a powerful processor and adequate memory to build the kernel and applications. The source files for the kernel and applications, as well as the build directories, might be on local hard drives or mounted over the network using NFS.
Figure 2. Self-hosted development using Raspberry Pi.
In the past, this development model was only used for very large and expensive embedded systems, such as the switch for a telephone central office. Now there are a number of embedded systems which package a fast processor, lots of RAM, flash memory for a file system, a video controller, and a wide range of peripherals on a small, inexpensive board. Popular examples include the BeagleBone and the Raspberry Pi. These boards have fast ARM processors (600 MHz to 1 GHz), ample RAM (256 MB to 512 MB), removable SD cards which serve as a file system of 4 GB or more, USB ports to connect a keyboard or mouse, and video output to a monitor. (Compare this with the hardware used in the first Linksys WRT54G router: a 125 MHz processor, 16 MB of RAM, and 4 MB of flash memory.) In many respects, using one of these single-board computers is like working with a laptop computer on a board.
There are a number of advantages to this model, especially with the high-powered single-board computers. The first is that pre-packaged Linux distributions are available. In some cases, you have your choice of several distributions, such as Ubuntu, which are also available for desktop systems. Development tools (compiler, assembler, and debugger) for the target processor are contained in the distribution. Building a kernel or applications for the target is very much the same as building for a desktop or server Linux system. There's no way to accidentally mix host and target header files, since the compiler on the target only references the target header files.
There are disadvantages, as well. This model can only be used when the target system is one of these powerful systems. Building an entire root file system, with many programs and libraries, can take many hours even on a fast desktop system. On a slower target, like the Raspberry Pi, it might take days. These target systems may have much more hardware than the planned application actually requires. It may be difficult to pare down the distribution to only those components needed by the application, since the system has to support both development and the application environment.
We'll explore this self-hosted development in future articles as well.
Hybrid development models
There are ways to combine the benefits of both of these development models and at the same time avoid some of the disadvantages. You might build the root file system on a host using the cross-development model and serve it to the target using NFS, while doing driver or application development on the target using the self-hosted development model.
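The NFS half of this hybrid is a one-line export on the host plus a kernel command-line option on the target. A sketch with assumed paths and addresses (written to a local sample file here; on a real host the export line goes in `/etc/exports`):

```shell
# Export a cross-built root file system tree over NFS.
# /srv/nfs/rootfs and the 192.168.2.0/24 subnet are example values.
cat > exports.sample <<'EOF'
/srv/nfs/rootfs 192.168.2.0/24(rw,no_root_squash,no_subtree_check)
EOF
# The target's kernel command line would then include something like:
#   root=/dev/nfs nfsroot=192.168.2.1:/srv/nfs/rootfs ip=dhcp
```

With this in place, edits made on the host appear on the target immediately, with no flashing step in between.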
Another alternative is to emulate the target hardware on the host system using QEMU. QEMU is a processor and system emulator which has support for many different architectures and system designs. You run QEMU on your host development system and connect to it using a virtual network. This has the advantage of higher performance on the more powerful host system, while using the target environment. Additionally, you can start QEMU and have it wait for GDB to connect, so that you can trace code from the very first instruction, something which may be difficult or impossible with a self-hosted target.
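A sketch of that QEMU-plus-GDB workflow, assuming `qemu-system-arm` and `gdb-multiarch` are installed and you have a cross-built kernel (the machine type and file names are examples only):

```shell
# Boot a cross-built ARM kernel under QEMU, frozen and waiting for GDB.
# -s  opens a GDB server on tcp::1234
# -S  halts the CPU at startup, before the first instruction runs
qemu-system-arm -M versatilepb -kernel zImage \
    -append "console=ttyAMA0" -nographic -s -S &

# In another terminal, attach the cross-debugger to the stopped machine.
gdb-multiarch vmlinux -ex 'target remote :1234'
```

Because `-S` stops the virtual CPU before it executes anything, you can set breakpoints in early boot code that would already have run by the time you could attach on real hardware.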
Michael Eager is principal consultant at Eager Consulting in Palo Alto, Calif. He has over four decades of experience developing compilers, debuggers, and simulators for a wide range of processor architectures used in embedded systems. His current and former clients include major semiconductor companies and systems developers. Michael has been a member of the ISO C++ Standard Committee and ABI Committees for several processor architectures. He is chair of the Debugging Standards Committee for DWARF, a widely used debug data format. He is active in the open-source and Linux communities.
- Learning Linux for embedded systems: Installing Linux
- Getting started with Embedded Linux–Part Two: How Linux works in embedded environment
- Getting started with Embedded Linux–Part Three: Program development for Linux and Embedded Linux
- Getting started with Embedded Linux–Part Four: Libraries & Free Applications
- Getting started with Embedded Linux–Part Five: Linux kernel
- Getting started with Embedded Linux–Part Six: Linux Kernel Modules (LKMs)
- Getting started with Embedded Linux–Part Seven: Character Device Driver
- Getting started with Embedded Linux–Part Eight: Development Models