Adding shades of color and the illusion of illumination can bring a displayed object to life. Here's how to achieve the desired effect.
I've recently been using the Sharp LH79520. It contains an ARM7 core and a built-in LCD controller. For the work discussed below I used a Sharp evaluation board with an LCD display.
The Sharp LH79520 provides up to 16 bits of color and up to 1024×768 pixels of display area. As with many controllers you can trade off resolution against color depth. If you drive the display at 1024×768 then you can only use eight bits per pixel (BPP). These eight bits are used to look up a color map that allows the selection of a 16-bit value. This means that we can simultaneously display 256 colors from a possible set of 65,536. This compromise is necessary because the number of bytes required to represent 1024×768 pixels at 16 BPP could not be transferred by direct memory access (DMA) from RAM quickly enough to render the screen in the time available. At lower resolutions, we can afford to store 16 BPP and avoid the restriction of the color map.
If we want to save RAM, we can also use two, four, or eight BPP at any of the resolutions available, but these modes require the use of a color map. The modes that use 16 BPP and no color map are known as direct modes.
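The relationship between a color-mapped pixel and the value that reaches the LCD can be sketched in a few lines. The names here are illustrative stand-ins, not the LH79520 register interface: the frame buffer holds an 8-bit index, and the controller's 256-entry color map turns it into the 16-bit value driven onto the panel.

```c
#include <stdint.h>

/* Illustrative sketch of an 8-BPP color-mapped mode: the frame buffer
   stores an 8-bit index, and a 256-entry palette maps each index to one
   of the 65,536 possible 16-bit colors. In hardware the lookup happens
   per pixel as the display is refreshed. */
static uint16_t palette[256];       /* the color map: 256 of 65,536 colors */

static uint16_t resolve_pixel(uint8_t index)
{
    return palette[index];          /* what the controller does per pixel */
}
```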
In some cases, you might want to consider using fewer bits per pixel for faster redraws. For example, filling 1,000 pixels at four BPP requires writing 500 bytes; filling the same area at 16 BPP requires writing 2,000 bytes. This is significant on controllers such as the LH79520, where there is no hardware acceleration for features such as line drawing or box filling.
Splitting the spectrum
Typically, the value for a pixel is divided into red, green, and blue components. We'll look at one of the modes available on the LH79520 to see how the colors are controlled. Most LCD displays of VGA or greater resolution have an 18-bit interface that is divided into six bits of red, six of green, and six of blue. If we try to store 18 BPP in RAM, we'll encounter all sorts of boundary issues as we move from one pixel to the next. The extra programming complexity would be minor, but the masking involved would slow down operations that we want to happen quickly.
The scoop on VGA
The term VGA (video graphics array) is used somewhat indiscriminately when describing graphics components. It originally referred to a particular type of controller, used mainly in IBM PCs, that offered a resolution of 640×480. The term VGA is now often misused to describe any controller or screen with that resolution; the programming model of the controller doesn't necessarily bear any relationship to the original VGA specification from IBM.
While all controllers used on PCs still support the original VGA modes, controllers in the embedded market have no such legacy requirements. If a controller is described as having VGA resolution, it simply means that it can display 640×480 pixels. Losing the legacy programming model is no great loss: programming a VGA controller in the original modes is a difficult and confusing business, best avoided if possible.
Similarly, XGA has become generic for a screen with 1024×768 resolution. The term Quarter VGA (QVGA) refers to 320×240 dots, a common resolution in portable devices. While I consider these definitions to be inappropriate based on their origins, this is the way they are commonly used by display vendors.
The greatest color depth available on the LH79520 is 16 BPP, which uses five bits for each of the primary colors. Those five bits are tied to the most significant five of the six lines for each primary color, leaving the least significant line for each color unused, since that is the line with the least impact on the appearance.
So far we have accounted for 15 bits. The one remaining bit can be tied to the least significant line of all three primary colors at once. This doubles the number of colors from 2^15 (32,768) to 2^16 (65,536). Since this last bit contributes to all three color components, it needs to be managed separately when we calculate colors. It's known as the intensity bit.
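Packing the three 5-bit primaries and the shared intensity bit into one 16-bit pixel might look like the following. The bit layout shown (intensity in the top bit, then red, green, blue) is an assumption for illustration; the controller's data sheet defines the real format.

```c
#include <stdint.h>

/* Pack 5-bit red, green, and blue values plus the shared intensity bit
   into a single 16-bit pixel. The I:R:G:B layout used here is assumed
   for illustration; consult the LH79520 data sheet for the actual
   arrangement of the bits. */
static uint16_t pack_rgbi(uint8_t r5, uint8_t g5, uint8_t b5, uint8_t i)
{
    return (uint16_t)(((i  & 0x01u) << 15) |   /* shared intensity bit  */
                      ((r5 & 0x1Fu) << 10) |   /* 5 bits of red         */
                      ((g5 & 0x1Fu) <<  5) |   /* 5 bits of green       */
                       (b5 & 0x1Fu));          /* 5 bits of blue        */
}
```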
The LH79520 offers an alternative mode, in which the extra bit is dedicated to one of the primary colors. This may make sense if, for example, you know that shades of green are more important than the other primary colors in your application. Either of these direct modes requires 16 BPP, which translates into approximately 600KB of RAM for the frame buffer, assuming VGA resolution.
In an embedded application, you often want to convert bitmaps generated on a PC application into a form that's usable on your target. Color representation on the PC may differ from that of the target, so you will need to sacrifice some color depth to get a representation that works for the target.
For example, a 24 BPP bitmap in BMP format would have to be reduced to 15 BPP on the LH79520. The three least significant bits of each of the primary colors would be dropped. Each of those three-bit values could then be averaged; the most significant bit of the result would be the intensity bit. See my July 1999 article for a more detailed description of how to convert the Windows .BMP format into a C array suitable for compiling into an embedded system.[1]
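That reduction can be expressed in a few lines of C. This is a sketch of the scheme just described, with an assumed I:R:G:B bit layout for the packed result: the top five bits of each 8-bit primary survive, and the most significant bit of the averaged low three bits becomes the intensity bit.

```c
#include <stdint.h>

/* Reduce a 24-BPP pixel (8 bits per primary) to 5:5:5 plus the shared
   intensity bit. The dropped low three bits of each primary are
   averaged, and the top bit of that average (a 3-bit value, so bit 2)
   supplies the intensity bit. The packed layout is illustrative. */
static uint16_t reduce_24_to_16(uint8_t r8, uint8_t g8, uint8_t b8)
{
    uint8_t avg = (uint8_t)(((r8 & 0x07u) + (g8 & 0x07u) + (b8 & 0x07u)) / 3);
    uint8_t i   = (avg >> 2) & 0x01u;       /* MSB of the 3-bit average */

    return (uint16_t)(((i & 0x01u)         << 15) |
                      (((r8 >> 3) & 0x1Fu) << 10) |
                      (((g8 >> 3) & 0x1Fu) <<  5) |
                       ((b8 >> 3) & 0x1Fu));
}
```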
The choice of color depth on the target may already be made if you're using a third party library. The PEG library, WinCE, and Linux have all been ported to the LH79520, but they may limit the set of modes that can be used. If you are choosing a graphics controller and plan to use a particular mode, make sure it's supported by your driver. If you plan to write the low-level graphics routines yourself, then try to write the routines in a style that makes it easy to change the mode later.
I learned an interesting lesson about color depth and type definitions when I started working with this chip. I was porting a set of routines that had previously been used on a target with four BPP. Anywhere that I needed to store a color, or pass a color as a parameter, I used an unsigned char. At the time, I had no intention of porting the code, and if I did I could still move up to eight BPP with few changes. Now that I wanted to pass around 16 bits of color information, I had to trawl through a large body of software changing the unsigned char types to a new type, which I called ColorType. I then used a typedef to define ColorType as an unsigned short, which is 16 bits on this platform.
The lesson is to plan on changing your color type from project to project. To my shame, I made a similar oversight in the graphics chapter of my book, Front Panel, where the color was stored as an int, which could be a range of sizes depending on the compiler and processor in use.
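One way to apply that lesson is to hide the representation behind a typedef and pass the typedef everywhere a color travels; widening from an 8-BPP target to a 16-BPP one then touches a single line. The `set_pixel` routine here is a hypothetical placeholder for your low-level drawing code.

```c
#include <stdint.h>

/* Keep the color representation easy to change between projects by
   hiding it behind a typedef. On the 4-BPP target this was a uint8_t;
   widening it for 16-BPP color means changing only this line. */
typedef uint16_t ColorType;

/* Hypothetical low-level routine; everything that handles a color
   takes a ColorType rather than a raw integer type. */
static void set_pixel(int x, int y, ColorType c)
{
    (void)x; (void)y; (void)c;   /* frame-buffer write depends on mode */
}
```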
Shading and curving
Now that we have 65,536 colors, what use can we make of them? In my January column, I mentioned antialiasing as one use of color depth. This month, we'll discuss shading.
Figure 1: A beveled button
I looked at some of my touch buttons and wanted to find a look that was different from those I had previously used. The beveled edges and shadows shown in Figure 1 were a style I used on some previous GUIs. The beveled edges emphasized the three-dimensional nature of the buttons, but the number of sharp angles was not pleasing to the eye. They also used a lot of space, and text within the button could not overlap with the bevel. I wanted something that suggested that the device was rugged and tough, and I thought a metallic look might be good. The metallic look also allows us to have a mostly gray background, which is not as jarring on the eye as stronger background colors. I wanted the buttons to have a rounded look, and some of the narrow spaces to be curved, almost like pipes. A metallic blue would then highlight selections or other emphasized items. Rows of small circles that look like rivets would complete the metallic motif.
Figure 2: Flat, curved, and curved and shaded buttons
The key to creating the impression of curves on the screen is having enough colors to transition smoothly from the color of the front of the button to the bright color on the side of the button where it is catching more light. Figure 2 shows a flat version and a curved version of a button. The curved effect is more appealing to the eye. The third button in Figure 2 is shaded in a different color, which could be used for highlighting selections.
Unfortunately, a gradual change in color does not correspond to a linear numerical change in the value that represents it. The number representing the color has red, green, and blue components, and each needs to be altered individually across the range; the amount of change in each component is distinct. A fraction of that change is applied to each line of the box as it is drawn, and the resulting red, green, and blue components are then recombined to form the final color. I've placed some C code to do precisely this into the code archive at ftp://ftp.embedded.com/pub/2003/03Murphy.
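The heart of that calculation can be sketched as a per-line blend between two 5:5:5 colors. This is a simplified illustration of the idea, not a copy of the archived code: each primary is unpacked, interpolated separately, and repacked, and the intensity bit is left clear for brevity.

```c
#include <stdint.h>

/* Linearly blend between two 5:5:5 colors, one step per line of the
   box being filled. Each 5-bit primary is unpacked, interpolated on
   its own, and the three results are repacked. The intensity bit is
   ignored here to keep the sketch short. */
static uint16_t blend_555(uint16_t from, uint16_t to, int line, int lines)
{
    int fr = (from >> 10) & 0x1F, tr = (to >> 10) & 0x1F;
    int fg = (from >>  5) & 0x1F, tg = (to >>  5) & 0x1F;
    int fb =  from        & 0x1F, tb =  to        & 0x1F;

    int r = fr + ((tr - fr) * line) / lines;   /* fraction of the total */
    int g = fg + ((tg - fg) * line) / lines;   /* change applied at     */
    int b = fb + ((tb - fb) * line) / lines;   /* this line             */

    return (uint16_t)((r << 10) | (g << 5) | b);
}
```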
If our system used a color map, these calculations would not be possible, since the number representing the color would be an index into a table. The order within the table is arbitrary. When I use a restricted palette, I try to establish which color changes I will need and place a range of colors next to each other on the color map. The gradual change is then achieved by using these entries in the color map in sequence. If you need to generate a set of colors at run time, you could calculate the colors needed, add them to the map, and then use them. This technique is limited by the number of entries in your map, so the number of shades that you can display simultaneously has a maximum.
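Loading a run of adjacent color-map entries with precomputed shades might look like this. The palette array and its size are illustrative stand-ins for the controller's real color map; the drawing code would then step through the indices `first` to `first + count - 1` in sequence to produce the gradient.

```c
#include <stdint.h>

/* Fill a run of consecutive palette entries with a gradient between
   two 5:5:5 colors, so shaded areas can be drawn by stepping through
   the indices in order. The palette array stands in for the real
   color map; the intensity bit is ignored for brevity. */
enum { PALETTE_SIZE = 256 };
static uint16_t palette[PALETTE_SIZE];

static void load_gradient(int first, int count, uint16_t from, uint16_t to)
{
    int steps = (count > 1) ? count - 1 : 1;

    for (int n = 0; n < count; n++) {
        /* interpolate each 5-bit primary separately */
        int r = ((from >> 10) & 0x1F)
              + ((((to >> 10) & 0x1F) - ((from >> 10) & 0x1F)) * n) / steps;
        int g = ((from >> 5) & 0x1F)
              + ((((to >> 5) & 0x1F) - ((from >> 5) & 0x1F)) * n) / steps;
        int b = (from & 0x1F)
              + (((to & 0x1F) - (from & 0x1F)) * n) / steps;

        palette[first + n] = (uint16_t)((r << 10) | (g << 5) | b);
    }
}
```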
The effect used here is fairly crude. The assumption is that the light source is directly above the button. A full lighting algorithm is far more complex. If you're keen to keep the embedded software simple, one option is to produce the background of the button, or other graphical object, as a bitmap on a PC. On the PC, you can use any of the effects available in the editing package. Then convert the bitmap to a form that your embedded target can display. At run time, on the target, you simply copy the bitmap to the display and draw the text over it. The approach you take will depend on the amount of run-time flexibility you need.
Many of the more subtle details of the graphic design of an interface are best handled by a professional graphic designer. When the software engineer comes to implement those designs, it takes some care to ensure that the right colors are available, or can be calculated, at run time. These subtleties often make the difference between a GUI that looks good and one that looks great.
Niall Murphy has been writing software for user interfaces and medical systems for ten years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Murphy's training and consulting business is based in Galway, Ireland. He welcomes feedback and can be reached at .