
GUI Development: Embedding Graphics, Part II



Last month in part 1 of Niall Murphy's two-part look at GUI development, the author discussed the use of fonts and bitmaps. This month he continues by showing you how to integrate simple shapes and objects into your user interface.

Last month we looked at the work involved in copying the bits of a bitmap, or a character of a font, to the display. But sometimes deciding when to draw the bits is as difficult as the drawing itself. How do you know what color the piece of text is? How do you know if the item should be currently visible on the screen? These questions apply to simple shapes, such as lines and boxes, as well as the bitmaps and strings examined last month. In this issue we'll examine the structures that describe objects which are capable of drawing themselves.

Software levels
Coding an entire GUI is an intimidating task. Fast graphics require special knowledge of the exact hardware configuration. Complex interactive graphics demand a set of graphical objects that can be reused in many dialogs. (A dialog is the term I use for a display configuration or layout, the term screen being too ambiguous.) On the desktop, a dialog would normally be a single window, but embedded systems rarely have desktop-style windows with overlapping and scrolling.

A complete application can be divided into a number of levels. Device drivers handle the lowest levels of putting pixels on the screen. Drawing libraries provide the functionality to draw lines, curves, bitmaps, and text. Higher-level object-oriented libraries supply controls such as buttons, menus, sliders, and tick-boxes, and support screen real-estate management with windows. The code to control these facilities is sometimes automatically generated by a GUI builder, which allows the developer to drag and drop the graphics and controls into a window.

The higher-level libraries that support objects manage the events and refreshing of the display. I'll outline the features that can be implemented at this level. It will be up to you to decide if you need any or all of this functionality; then you can investigate whether you can buy the library from a third-party vendor. If only some part of the functionality is required, you may be able to implement the library yourself.

Choosing the set of primitives
Libraries of drawing primitives supply a variety of features. The functions themselves should be written with direct access to the hardware because any wasted CPU cycles at this level will be magnified many times when you start to render complete screens.

If you have to write a low-level library yourself, the one consolation is that you'll be able to use the library again if you use the same hardware on another product. The graphics code at this level rarely has any dependencies on the specific application.

One thing to be aware of if you're considering buying into VGA technology is that the people who sell the chips tend to sell vast amounts to few customers. Those customers, as you would imagine, build PC compatibles. So all of the programmers who use these computers can use a single BIOS call to initialize the adapter. The embedded programmer doesn't have this luxury and must initialize each register individually. This seems straightforward, but the problem is that so few people have to write that code that the initial state of the registers for each mode is rarely documented properly. You have been warned!

What do you want to draw?

What primitive drawing functions might you want? The typical set includes drawPixel(), drawBox(), drawLine(), drawArc(), drawText(), and drawBitmap(). A set of variations of these functions might exist to allow for arrowed lines, numerous formats for the bitmaps, and specific fonts for rendered text. Some functions could be written in terms of the others. For example, the box could be drawn as a collection of horizontal lines. However, the hardware often supports faster ways of producing filled rectangles, in which case you'll want to bypass the use of drawLine(). The code for drawText() and drawBitmap() might rely on fontDrawMono() and imageRender(), which we implemented last month.
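As a concrete illustration of the point above, here is a minimal sketch of a filled box drawn as a stack of horizontal lines. The byte-per-pixel framebuffer layout and the names drawHLine() and drawFilledBox() are illustrative assumptions, not part of any library described in this article:

```c
#define SCREEN_WIDTH  64
#define SCREEN_HEIGHT 32

/* One byte per pixel; a real driver would match the hardware format. */
static unsigned char frameBuffer[SCREEN_WIDTH * SCREEN_HEIGHT];

static void drawHLine(int x1, int x2, int y, unsigned char color)
{
    int x;

    for (x = x1; x <= x2; x++)
        frameBuffer[y * SCREEN_WIDTH + x] = color;
}

/* The box is just a loop of horizontal lines; hardware fills,
 * where available, would replace this loop entirely. */
void drawFilledBox(int left, int top, int right, int bottom,
                   unsigned char color)
{
    int y;

    for (y = top; y <= bottom; y++)
        drawHLine(left, right, y, color);
}
```

In practice you would keep drawFilledBox() as the public entry point and swap its body for the hardware-accelerated fill when one exists.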

The algorithms for drawing lines and arcs are covered in Foley and van Dam's seminal work on computer graphics.1 If you're using a VGA-compatible display, Michael Abrash's Zen of Graphics Programming covers all the low-level bit twiddling and optimizations you might ever want to do with that ubiquitous graphics adapter.2 You may find that you only require horizontal and vertical lines for some applications, which are each simple to code as a single loop.

Many attributes are required to draw something as simple as a box. What is the line thickness? Is it filled? What color is it? Are the corners rounded? You could map this list of questions into a large number of parameters to the drawBox() routine. To avoid long lists of parameters which would consume CPU cycles, as well as require more work on the part of the programmer, most libraries allow a pointer to a graphics context to be passed to each drawing function. The context defines many of the parameters I've described. If you need to draw many similar boxes, the context need not be changed between calls. If one attribute–such as color–changes, the context can be altered for that single attribute before the next call is made. A context can be shared between different drawing primitives. Some of the attributes will not always apply, such as the filled attribute when drawing a line. In those cases the redundant attribute is simply ignored.

The context is implemented as a structure containing the attributes we want to store. The application programmer could change the fields of the structure directly, but the structure is usually protected by a set of functions that manipulate the values and ensure they're legal.
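A minimal sketch of such a context follows. The particular attribute set, the names GraphicsContext, gcInit(), and gcSetLineThickness(), and the limit value are all assumptions chosen for illustration:

```c
#define MAX_LINE_THICKNESS 8

typedef struct
{
    int color;
    int fillColor;
    int lineThickness;
    int filled;   /* ignored by primitives where it doesn't apply */
} GraphicsContext;

void gcInit(GraphicsContext *gc)
{
    gc->color         = 0;
    gc->fillColor     = 0;
    gc->lineThickness = 1;
    gc->filled        = 0;
}

/* An accessor can reject illegal values instead of letting the
 * application poke the structure directly. Returns 1 on success. */
int gcSetLineThickness(GraphicsContext *gc, int thickness)
{
    if (thickness < 1 || thickness > MAX_LINE_THICKNESS)
        return 0;
    gc->lineThickness = thickness;
    return 1;
}
```

A drawing call would then take a GraphicsContext pointer as its first argument, and a run of similar boxes needs only one context set-up.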

Support for flexible drawing
For each of the functions I've listed, the arguments would typically describe the location where the image will appear. There might be a number of alternative drawing areas, however, on which you could put the image. Some video controllers allow for a number of virtual screens, only one of which is actually visible at a time. You may also wish to have the ability to draw to a bitmap somewhere in memory, which would be later copied to the screen.

Implementing separate coordinate systems is also possible. An origin and a scale for the x and y coordinates could be set, which would allow drawing to be performed in the units relevant to the application, rather than in pixels. This setup is particularly useful for graphs. If on a bar graph each pixel represents 100 revolutions per minute (RPM), and each motor speed is spaced by 20 pixels, then the x scale would be set to 20 and the y scale to 0.01. The origin would be set to the origin of the graph on the screen. Now the line representing the speed of the third motor at 4,000RPM could be drawn with drawLine(3, 0, 3, 4000), which will draw a line from the x-axis to the height representing 4,000RPM, which is 4000/100 = 40 pixels.
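The transform behind the RPM example can be sketched as follows. The structure and function names are assumptions; the test values match the graph described above (x scale 20, y scale 0.01):

```c
typedef struct
{
    double xScale;
    double yScale;
    int    xOrigin;   /* pixel position of the graph's (0,0) */
    int    yOrigin;
} CoordSystem;

/* Map application units to absolute pixel coordinates. */
int toPixelX(const CoordSystem *cs, double x)
{
    return cs->xOrigin + (int)(x * cs->xScale);
}

int toPixelY(const CoordSystem *cs, double y)
{
    /* Screen y usually grows downward, so subtract. */
    return cs->yOrigin - (int)(y * cs->yScale);
}
```

Every drawing primitive would pass its coordinates through functions like these before touching the framebuffer, so the application never deals in pixels.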

Another important use of scale is when the same diagram is redrawn at different sizes at different times. The application can change the scale and then draw the diagram with the same coordinates as it did previously.

Moving the origin is also useful if a number of objects are to be drawn inside one window or container. Moving the origin allows the entire group of graphics to be drawn in a new location without having to calculate a new location for each one. Each graphic is simply redrawn with the same arguments as before, and the new origin causes them to appear in a new location.

You may also want to utilize the clipping feature. Graphics are clipped when their appearance on the display is limited to a particular area—usually a rectangle—as seen in Figure 1.


Clipping is useful in a number of circumstances. The graphic may be inside a container or window, and you may want to limit the user's view to that container; the rest of the display may be designated for other information. Sometimes the program will want to refresh one area of the screen without affecting any other part; the rendering algorithm can then restrict itself to the objects that overlap the clip rectangle.

Three types of clipping are available. At the highest level, a single shape such as a line can be checked to see if it is completely outside of the area, and removed from the list of objects to be redrawn. The second level is applied when the primitive is called to render the shape. A new shape can be calculated to remove the portion that falls outside of the clipping area. For example, a shorter line than the original may be calculated, removing the portion of the line outside of the clipping area. This shorter line is then drawn. Another example would be a circle that is truncated to form an arc. This is known as pre-clipping. Post-clipping is implemented by calculating all of the pixels in the shape and checking that each pixel is inside the clipping area just before rendering it. Post-clipping is so called because the clipping occurs after all of the rendering calculations have been performed. Post-clipping is far less efficient, but is sometimes suitable if it's implemented in hardware.
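Both flavors can be sketched in a few lines. Here pixelInClip() is the per-pixel test used by post-clipping, and clipHSpan() pre-clips a horizontal span once, before any pixel is drawn. The ClipRect structure and both function names are assumptions:

```c
typedef struct { int left, top, right, bottom; } ClipRect;

/* Post-clipping: called for every pixel just before it is written. */
int pixelInClip(const ClipRect *clip, int x, int y)
{
    return x >= clip->left && x <= clip->right &&
           y >= clip->top  && y <= clip->bottom;
}

/* Pre-clipping a horizontal span: compute the shorter line once.
 * Returns 0 if nothing of the span survives clipping. */
int clipHSpan(const ClipRect *clip, int *x1, int *x2, int y)
{
    if (y < clip->top || y > clip->bottom)
        return 0;
    if (*x1 < clip->left)
        *x1 = clip->left;
    if (*x2 > clip->right)
        *x2 = clip->right;
    return *x1 <= *x2;
}
```

The efficiency difference is plain: clipHSpan() does its comparisons once per line, while pixelInClip() repeats them for every pixel rendered.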

The next level: do you need objects?
The interface to the primitives I've described is functionally oriented. Only a minor amount of state information, such as the current drawing color, is stored in the context between calls. This information is shared across all calls, so it isn't stored as a per-object state. You could write a routine to paint a scene on the display with a series of calls to these primitives. When a different screen is needed, the display is blanked and a different routine could contain the sequence of primitive calls to paint a new masterpiece. This is the same structure that is used in programs that conduct a text dialog using printf() calls. Any new information is simply output, and the old information is overwritten or scrolls out of the way.

The process stops being so simple once you want to change a part of the display that has already been rendered. Why not erase and redraw everything? Speed is one reason. Plus, the flicker it would cause could lead to blindness or insanity. More importantly, the information required to construct the whole scene may not be available from one place. The information may have to be gathered from many parts of the program, leading to maintenance problems. A change in a data structure in one area would lead to changes in the code to draw a scene in many other places.

So what's the alternative? You can build a model by designing structures that describe each box, line, button, or container on the display. By maintaining these structures, previously drawn graphics can be redrawn each time an attribute is altered. Such an object-oriented graphics library may be purchased from a third party, or if your needs are simple, you could write one yourself. The example I'll provide later implements a simple object-oriented graphics library that provides for a couple of simple shapes, text, and containers.

A certain amount of overhead is involved in implementing a general scheme that manages a structure for each entity on the screen. On a simple embedded system this overhead may or may not be justified. If the layout of the display doesn't change much and little movement occurs, this extra level of functionality may not be necessary. If the display is used for output only and isn't interactive, you could probably get by without an object-oriented layer. But if the user interacts with individual controls on the display, you're going to want to implement an object-oriented model to control the events. If separate parts of the display are dedicated to separate functions that behave independently, you'll want containers to define such areas and to allow them to be displayed and hidden at different times. Hopefully, by the time you've finished this section, you'll fully understand how to apply an object-oriented structure to such graphics, and the considerable advantages it offers.

Figure 2 shows the levels of software that have been described in the last few sections. The top level is the application code, which varies from program to program. That level creates the objects and manipulates them. Whenever the refresh algorithm is applied, the data stored in the objects is used to construct the calls to display the view of the objects. Refreshing all of the objects at the same time isn't necessary, as we will see. The significant difference between what happens at the object level and at the primitive level is that calls to the object level always record the parameters in some way. Calls at the primitive level render the shape, but don't store any data.


The next few sections develop objects that can refresh themselves and are managed by containers. Functionality that would allow objects to overlap each other, or to clip at the borders of containers, is not implemented. In many applications the programmer has enough control over the exact positions of objects that these issues aren't a concern. Building a more powerful set of objects would lead to a more processor-hungry implementation, making it less applicable to small embedded systems.

Structures to define graphic objects
We want to represent lines, boxes, text strings, and circles. Each one requires its own structure to store data unique to that particular graphic. The box may be filled or not. The text object must store the characters that it's going to display. The line object must store start and end points.

A number of attributes, such as location and color, are common. If we extract them into another structure, we can include that structure in each of the structures above. By creating this Drawable structure, we can write functions that will use the area of the graphic without having to worry about which particular type of graphic is being manipulated.

Having a single structure for the Area is another useful abstraction, so we'll store the top-left point and the bottom-right point. While storing the width and height may seem more intuitive, a number of calculations, like checking for overlap, are simpler using the bottom-right point representation.
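The overlap check mentioned above is where the bottom-right representation pays off: it reduces to four comparisons with no width/height arithmetic. A sketch, with areasOverlap() as an assumed name (the Area fields match the structure defined below):

```c
typedef struct { int left, top, right, bottom; } Area;

/* Two areas overlap unless one lies entirely to one side of the
 * other. With a width/height representation, each comparison
 * would first need a left + width or top + height addition. */
int areasOverlap(const Area *a, const Area *b)
{
    return a->left <= b->right  && b->left <= a->right &&
           a->top  <= b->bottom && b->top  <= a->bottom;
}
```

A test like this runs constantly during refresh, when deciding which objects intersect a dirty area, so keeping it cheap matters.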

The following structures define the Area, the Drawable, and some of the graphical shapes we require:

typedef enum {CONTAINER, BOX, CIRCLE, LINE, TEXT} DrawableType;

typedef struct
{
  int left;
  int top;
  int right;
  int bottom;
} Area;

struct drawableStruct
{
  Area area;
  int color;
  DrawableType type;
};

struct boxStruct
{
  Drawable drawable;
  Boolean filled;
  int fillColor;
};

struct circleStruct
{
  Drawable drawable;
};

struct lineStruct
{
  Drawable drawable;
  int x1;
  int y1;
  int x2;
  int y2;
};

struct textStruct
{
  Drawable drawable;
  char *string;
};

The structure names here are typedefed according to the following definitions. This avoids having to use the keyword struct each time one of these structures is referenced:

typedef struct drawableStruct Drawable;
typedef struct boxStruct      Box;
typedef struct circleStruct   Circle;
typedef struct lineStruct     Line;
typedef struct textStruct     Text;

The Circle structure adds no fields of its own because the radius and the center can be derived from the Area structure stored in the Drawable. On the other hand, the Line structure contains some redundant information because the area itself cannot unambiguously identify the line. Once the rectangle containing the line has been defined, it's still necessary to distinguish if the line is from the top-left to the bottom-right, or from the top-right to the bottom-left. I dislike such redundant information because the bugs caused when the two forms become inconsistent can be difficult to track down, but in this case it's unavoidable.
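Deriving the circle's geometry from its Area is a one-liner per value. The function names here are assumptions, and the integer arithmetic assumes the Area describes a square bounding box:

```c
typedef struct { int left, top, right, bottom; } Area;

/* Radius and center recovered from the bounding box; no extra
 * fields need to be stored in the Circle structure. */
int circleRadius(const Area *aPtr)
{
    return (aPtr->right - aPtr->left) / 2;
}

int circleCenterX(const Area *aPtr)
{
    return (aPtr->left + aPtr->right) / 2;
}

int circleCenterY(const Area *aPtr)
{
    return (aPtr->top + aPtr->bottom) / 2;
}
```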

The Drawable structure is included as the first field of each of the graphic structures. This allows us to access the Drawable using a pointer to one of the other graphics by simply casting it to a pointer to a Drawable. We'll hide this cast inside a macro which we can apply to any of the shapes defined above:

#define GET_DRAWABLE(d) ((Drawable *)(d))

This prevents the application programmer from having to be aware of the cast.

The typedefs for the structures are separate from the structure definitions, so we can make the types visible in a header file while keeping the structures in a .c file. This technique implements opaque types and allows the caller to hold pointers to the structures without having access to the members of the structures themselves. We can ensure that any changes made to the data stored in the structures is via the functions we've provided. By extracting common data into a single data structure, we've implemented a simple form of polymorphism.
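The opaque-type split can be sketched as follows. In the real library the two halves live in separate files; they are shown together here so the example is self-contained, and all names beyond Drawable are assumptions:

```c
/* ---- graphics.h: all the application ever sees ---- */
typedef struct drawableStruct Drawable;   /* members hidden */

Drawable *drawableGet(void);
void drawableSetColor(Drawable *dPtr, int color);
int  drawableGetColor(const Drawable *dPtr);

/* ---- graphics.c: the only place the members are known ---- */
struct drawableStruct
{
    int color;
};

static struct drawableStruct theDrawable;  /* stand-in instance */

Drawable *drawableGet(void)
{
    return &theDrawable;
}

void drawableSetColor(Drawable *dPtr, int color)
{
    dPtr->color = color;
}

int drawableGetColor(const Drawable *dPtr)
{
    return dPtr->color;
}
```

Code that includes only the header can hold and pass Drawable pointers, but any attempt to dereference one fails to compile, so every change to the structure must go through the accessor functions.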

Memory management and initialization
Declaring these structures statically or on the stack isn't particularly suitable for graphics applications. If the structures are declared on the stack, they'll cease to exist when the function exits. This means that the object will have a short life. For a desktop application, many programmers would simply allocate these objects on the heap and think no more about it. In embedded systems, which may have to run continuously for many months, the heap can be the source of some problems. In C, malloc() and free() allow blocks of bytes to be allocated from the heap and returned to it. If these functions are called often, and for blocks of varying size, heap fragmentation will eventually render the heap unusable for large allocations and the program will fail. A heap is fragmented when the chunks of memory allocated are scattered throughout the heap's memory space. The remaining space is broken into so many small pieces that allocating a large block is impossible, even though a reasonable percentage of the memory is actually free.

For these reasons, many embedded programmers eschew even the most cautious use of malloc() and free(). This approach is not unreasonable. You simply have to decide on all of the structures and buffers that your program may need and provide for them up front. They could be declared statically, and the compiler will set aside space for them.

Understand that by not using the heap, your memory requirements will be greater than an equivalent program that uses the heap. Consider a program that needs 10 settings structures to reflect settings that the user can change, and related information such as limits, resolution of change, and data specific to the type of input device used to change the setting. If the largest number of settings in use at a time is three, then the memory consumption is three times the size of the settings structure. If all 10 are allocated statically, the memory consumption is 10 times the size of the setting structure. So by allocating all structures statically, we're stuck with the worst-case memory consumption, but we're guaranteed to have no leaks. This is often acceptable in embedded systems because the number of elements on a display is often limited by the physical control panel. But the situation can change dramatically when you start to use graphics.

If we don't want to use malloc() and free(), declaring the structures statically may not be the best alternative. Two problems can arise with this method. The first is that each structure must be given a unique name in the global scope, and if there are many objects, you may find it difficult to find meaningful names for them all. The second is that the structures will exist in an uninitialized state until the program has enough information to set initial values. The danger is that the program may use one of these structures before it is initialized (with unpredictable results).

We can get some of the convenience of heap allocations, and none of the dangers, with the following approach. Set a piece of memory aside by allocating a static array of unsigned chars. The salloc() routine allows memory to be allocated from this block, but never freed. If the block is used up, an error handler prevents the program from running. The intention is that the salloc() routine is only used during start-up, so that any problems would always be found as soon as the system is run. This approach would lead to any problems being discovered in test, and not in the field after release. This goal could be ensured by adding a function that disabled salloc() after the start-up is complete:

#define SALLOC_BUFFER_SIZE 5000

unsigned char GS_sallocBuffer[SALLOC_BUFFER_SIZE];
int GS_sallocFree = 0;

void *salloc(int size)
{
  void *nextBlock;

  if(GS_sallocFree + size > SALLOC_BUFFER_SIZE)
  {
    errorHandler();
  }
  nextBlock = &GS_sallocBuffer[GS_sallocFree];
  GS_sallocFree += size;
  return nextBlock;
}

Now that we have a way of allocating the memory for the shapes, we want to be able to initialize them at creation time to avoid the possibility of using an uninitialized object. We achieve this goal through a number of functions that create and initialize an instance of each structure, as seen in Listing 1. Since each of the structures contains a Drawable, calling drawableInit() on that part of the structure is necessary. I'll show the creation functions for Box and Text. The others follow much the same form. The most important feature of Listing 1 is that the application that calls boxCreate() doesn't have to call salloc(). The allocation is done just before the values in the structure are initialized, which protects the application from having access to an uninitialized Box. The use of uninitialized storage is always a plentiful source of bugs.
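A sketch of what such a creation function might look like follows, with a pared-down salloc() inlined so the example is self-contained. Since Listing 1 isn't reproduced here, the field names, argument order, and defaults are assumptions, not the listing's actual code:

```c
typedef struct { int left, top, right, bottom; } Area;
typedef struct { Area area; int color; } Drawable;
typedef struct { Drawable drawable; int filled; int fillColor; } Box;

/* Stripped-down start-up allocator; the real salloc() also calls
 * an error handler when the buffer is exhausted. */
static unsigned char sallocBuffer[1024];
static int sallocFree = 0;

static void *salloc(int size)
{
    void *p = &sallocBuffer[sallocFree];

    sallocFree += size;
    return p;
}

static void drawableInit(Drawable *dPtr, int left, int top,
                         int right, int bottom, int color)
{
    dPtr->area.left   = left;
    dPtr->area.top    = top;
    dPtr->area.right  = right;
    dPtr->area.bottom = bottom;
    dPtr->color       = color;
}

/* Allocation and initialization happen together, so the caller
 * can never see an uninitialized Box. */
Box *boxCreate(int left, int top, int right, int bottom, int color)
{
    Box *bPtr = (Box *) salloc(sizeof(Box));

    drawableInit(&bPtr->drawable, left, top, right, bottom, color);
    bPtr->filled    = 0;
    bPtr->fillColor = color;
    return bPtr;
}
```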


Container hierarchies
The ability to put the shapes into containers that can then carry them around is very useful. Compound objects can be moved as a single unit, or inserted and deleted by a single function call. Containers also provide their own coordinate space. The location of each object within a container is relative to the container's origin, not to the display origin. To draw a container you must first draw the background color, and then draw each of the elements within the container. Since containers can hold other containers, this drawing algorithm can become recursive.
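The recursive drawing algorithm can be shown in miniature. The structure here is pared down to just the fields the recursion needs, drawCalls stands in for real rendering, and all names are assumptions:

```c
typedef struct node
{
    struct node *childListPtr;  /* first child, or NULL   */
    struct node *nextPtr;       /* next sibling, or NULL  */
    int isContainer;
} Node;

static int drawCalls = 0;       /* stands in for real rendering */

/* Draw this object, then recurse into any contained objects.
 * For a container, the first "draw" is its background color. */
void nodeDraw(const Node *nPtr)
{
    const Node *child;

    drawCalls++;
    if (!nPtr->isContainer)
        return;
    for (child = nPtr->childListPtr; child; child = child->nextPtr)
        nodeDraw(child);
}
```

Because containers can hold containers, the recursion depth equals the depth of the container hierarchy, which on a small embedded display is usually only two or three levels.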

By breaking the screen into a number of regions, each of which is occupied by a container, the refreshing of the display becomes more efficient. If the container in one of the regions is replaced by a new container, only that region will be redrawn. The containers in other areas remain unchanged.

One root container occupies the whole display. The subcontainers of the root and all of their descendants will be displayed, while any objects not connected to the root are invisible. Such invisible containers can be useful places to build up screens, which can be attached to the root at a later time. A visible flag in the container indicates whether it is attached to the root. This avoids having to follow parent pointers to the top of the tree each time a decision has to be made to draw an object. Containers are a simple form of window (with no frame) that allow us to control different areas of the screen independently.

In the example code, the coordinates of any object are the coordinates within the parent container. When the object is actually being drawn, we must calculate the absolute coordinates to render the object on the display. For efficiency, the containers maintain their absolute position as well as their position within their parent (remember, containers are contained in other containers). The absolute position is meaningless unless the object is connected to the root container and has a position on the display.
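The relative-to-absolute calculation itself is a pair of additions against the parent's cached absolute position. A sketch, with assumed names:

```c
typedef struct
{
    int absoluteLeft;   /* cached; valid only when connected */
    int absoluteTop;    /* to the root container             */
} ParentPos;

/* An object's stored coordinates are relative to its parent;
 * drawing needs display coordinates. */
void toAbsolute(const ParentPos *parent, int relX, int relY,
                int *absX, int *absY)
{
    *absX = parent->absoluteLeft + relX;
    *absY = parent->absoluteTop  + relY;
}
```

Caching the parent's absolute position avoids walking up the hierarchy and summing offsets every time a child is drawn.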

The Container structure is:

struct containerStruct
{
  Drawable drawable;
  Drawable *containedListPtr;
  /* The absolute location is maintained to optimize
     drawing the contained objects. */
  int absoluteLeft;
  int absoluteTop;
  Boolean visible;
};

typedef struct containerStruct Container;

The container uses Drawable to control its area, just like the other shapes. The containedListPtr points to the Drawable part of the first child of this container. Each child is then linked to the next with a null pointer as a terminator.
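Linking a child into a container might look like the sketch below. Note that the drawableStruct shown earlier has no next pointer, so this sketch assumes one has been added to carry the sibling list; containerAddTo() is also an assumed name. New children are simply pushed onto the front of the list:

```c
typedef struct drawableStruct
{
    struct drawableStruct *nextPtr;   /* assumed sibling link */
} Drawable;

typedef struct
{
    Drawable drawable;
    Drawable *containedListPtr;       /* first child, or NULL */
} Container;

/* Push the new child onto the front of the container's list;
 * the old head becomes the new child's sibling. */
void containerAddTo(Container *cPtr, Drawable *dPtr)
{
    dPtr->nextPtr = cPtr->containedListPtr;
    cPtr->containedListPtr = dPtr;
}
```

Front-insertion keeps the operation O(1); if drawing order must match insertion order, the list would instead be appended at the tail.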

The containers implemented in the example code don't clip the graphics, so the contents may extend outside of the boundaries of the parent container. It's up to the application to ensure that this doesn't happen. If some shape does extend outside the boundaries of its parent container, some of that shape might not be erased from its old position when the container is moved. This point will become clearer when we examine how drawableErase() works.

Because a container has color stored in the drawable structure, each container can have a different background color, making the containers' boundaries obvious to the user. If this isn't the desired effect, make the color the same as its parent container, rendering the container invisible.

Since containers can hold other containers, a hierarchy of containers exists. This hierarchy changes at run time, as shapes are added and removed from the containers. Figure 3 shows the container hierarchy for a simple display that has a title and a subcontainer with a house. If the location of the subcontainer is changed, all parts of the house move automatically because their locations are relative to the origin of the subcontainer in which they're held.


With the creation functions described in the last section, shapes can be created and added to containers without having to set aside a unique name for them. This tactic is useful for some of the objects which, once added to the Container, do not require any further manipulation. For example, once the following function returns, there isn't a unique name reserved for the Text structure, though the Text continues to exist and will be visible whenever the Container is visible:

void addText(Container *cPtr)
{
  Text *t1 = textCreate(20, 40, "Hello World!");

  containerAddTo(cPtr, t1);
}

Refreshing the display
Now that we have objects we can draw, and containers to hold them, we must decide how and when to refresh them. If we tell the outermost container to redraw, the entire screen will be redrawn. This would work, but would be extremely inefficient if such a redraw were performed for every change. We must identify the specific areas that need to be refreshed.

Figure 4 shows an object being moved. The request at the application level was simply to move the man to a new position. To refresh the display, two actions must take place. First, the man's image must be removed from his old position; second, he must be drawn in the new position. Although this seems obvious, the important point is that at the primitive level the man cannot be moved; he must be completely redrawn. The other images on the display must not be corrupted by this change. Ideally, the only areas of the display that are refreshed are the old position of the man and the new position of the man. These areas are known as dirty areas.
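The move operation can be sketched as follows: record the old area as dirty, update the coordinates, then record the new area as dirty. The dirty-area array here stands in for whatever bookkeeping the refresh algorithm uses, and the names are assumptions:

```c
typedef struct { int left, top, right, bottom; } Area;

#define MAX_DIRTY 16

static Area dirtyAreas[MAX_DIRTY];
static int dirtyCount = 0;

static void markDirty(const Area *aPtr)
{
    if (dirtyCount < MAX_DIRTY)
        dirtyAreas[dirtyCount++] = *aPtr;
}

/* At the primitive level nothing "moves": the old position must
 * be erased and the new position drawn, so both become dirty. */
void areaMove(Area *aPtr, int dx, int dy)
{
    markDirty(aPtr);            /* old position: needs erasing  */
    aPtr->left   += dx;
    aPtr->right  += dx;
    aPtr->top    += dy;
    aPtr->bottom += dy;
    markDirty(aPtr);            /* new position: needs drawing  */
}
```

The refresh pass then visits each dirty area and redraws only the objects overlapping it, leaving the rest of the display untouched.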


Refreshing the dirty areas isn't as trivial as it might first appear. Consider erasing the man shown in Figure 4. He must be redrawn in the background color of the parent container to make him disappear. If we want to allow overlapping objects, we have to reproduce any objects that have been uncovered. I shall describe two approaches here: refresh by dirty objects and refresh by dirty area. The first is simple and fast, and the second is more complex but allows far more flexibility.

Refreshing by dirty object
