More Prototyping Tips

Niall delves deeper into the benefits of prototyping a user interface on a PC. He shares pointers on graphics, gotchas, and event scheduling.

In my last column (“User Interface Prototypes,” October 2002, p. 33) I showed you how to use Borland's C++ Builder (CPB) to develop a virtual user interface on a PC before committing to a hardware implementation. This month, we're going to look at some more situations in which this approach is useful. We'll also examine how to prototype graphical user interfaces.

Custom LCDs

When you order a custom LCD (the sort you get in a digital watch or calculator, with a few digits and some application-specific icons), you have to get it right the first time. With lead times of several months to get tooled up, the product might miss its market window completely if you get the LCD wrong. You need to know whether the icons work in practice long before you have the actual part.

On one such project, I mocked up the LCD in CPB. Each segment was simply an image, an object that allows you to import a bitmap. For the digits, I made each of the seven segments of the digit into an image object. That way, the visibility of each segment can be controlled independently. In my project, the microcontroller had a built-in LCD controller that dedicated one bit to each segment of the display. We had 64 segments to control. The microcontroller performed some multiplexing, and some demultiplexing took place in the display, to ensure that the number of I/O lines required was less than the total number of segments. The multiplexing and demultiplexing were transparent to the software, so, in the code, we mapped one bit to one segment.

In the real target, once the correct bits were set, the hardware would look after the appearance of the appropriate segments. In the CPB environment, I defined the block of bits to be regular RAM:

#define NUM_SEG_BYTES 8

#ifdef USING_CPB
BYTE G_segments[NUM_SEG_BYTES];
#else
// Set pointer to LCD controller
BYTE *G_segments = (BYTE *) 0x40;
#endif /* USING_CPB */

I always define USING_CPB when building in the CPB environment, but not when building for the target. The #else of this piece of code causes G_segments to point into the registers that the microcontroller has dedicated to the LCD controller. Location 0x40 on the target is the start of the block of eight bytes.

In the CPB code, I set a timer to regularly read the contents of G_segments and turn each image on or off according to the corresponding bit. The CPB program effectively imitates the LCD controller that is built into the final hardware. As long as the user interface software toggles the correct bits within that 8-byte block, the correct icons turn on, regardless of whether the software is running on a PC or on the target.
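The timer's job boils down to a simple bit-to-segment lookup. Here is a minimal sketch of that mapping; the function name `segmentIsOn` is hypothetical, but `G_segments` and the one-bit-per-segment layout come from the article:

```c
#include <stdbool.h>

#define NUM_SEG_BYTES 8

/* In the CPB build, this block of RAM stands in for the LCD
   controller's registers: one bit per segment, 64 segments total. */
unsigned char G_segments[NUM_SEG_BYTES];

/* True if segment n (0..63) should be visible. The CPB timer
   handler would call this for each segment and set the matching
   image's Visible property to the result. (Name is illustrative.) */
bool segmentIsOn(int n)
{
    return (G_segments[n / 8] >> (n % 8)) & 1;
}
```

Because the user interface code only ever touches `G_segments`, the same toggling logic drives the real LCD controller on the target and the bitmap images on the PC.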

This allowed us to develop most of the user interface software on a PC. When we wanted to try a different icon, we edited the bitmap associated with a particular bit. Often, no code changes were required. At the same time, we were experimenting with the user interactions and the key sequences that would cause certain icons or digits to appear and disappear.

When the design of the LCD was finalized, the prototype remained a sufficient development environment to use for a few months while waiting for target hardware. Figure 1 shows this prototype running as a PC executable. Since the bits that control the LCD were so crucial to correct operation, I displayed them on the screen, for debugging purposes.

Figure 1: A prototyping example

Another debugging advantage of the CPB environment is that you can add controls for arbitrary variables. The slider for adjusting the global variable G_batteryLevel controls a value that would be set by reading an analog to digital converter (ADC) in the target environment. Controls such as these make it easy to exercise the user interface software to test the reaction to a decaying battery. These prototypes often have dozens of controls that simulate real world events. Sometimes you want to simulate those events to test that the user interface software is working correctly and other times you want to simulate them in order to get feedback on the usability of the interface.
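The slider trick works because the control only writes a variable that the portable code reads. A sketch of the idea, with a hypothetical low-battery threshold and function name (only `G_batteryLevel` comes from the article):

```c
/* On the target, G_batteryLevel is written by the ADC driver;
   in the CPB prototype, the slider's event handler writes it
   directly. The UI code below doesn't care which. */
int G_batteryLevel = 100;   /* percent, 0..100 */

/* Hypothetical UI rule: show the low-battery icon below 20%. */
int lowBatteryIconOn(void)
{
    return G_batteryLevel < 20;
}
```

Dragging the slider down then exercises the same icon logic that a genuinely decaying battery would on the target.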

Event handling

The examples I've used so far demonstrate how to simulate the output of the user interface. We also have to consider how to handle user input events, such as key presses. On the target, you might be scanning input lines that are connected to a keypad. This might happen in the mainline code or in an interrupt handler. In either case, you'll eventually get a key value, which requires application-level decisions to determine what actions result from that key. So there will be a function called handleKey(KeyId key) that maps a key to some resulting action. Any hardware issues, such as debouncing, should be resolved before this function is called. This key handler will be the common entry point for the target code and the CPB code.
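A skeleton of such a key handler might look like this. The `handleKey(KeyId key)` entry point is from the article; the enum values and the `G_setpoint` state it manipulates are illustrative assumptions:

```c
typedef enum { KEY_UP, KEY_DOWN, KEY_ENTER } KeyId;

int G_setpoint = 20;   /* hypothetical application state */

/* Common entry point for both builds: the target's debounced
   keypad-scanning code and the CPB mouse-event handlers both
   funnel into this one function. */
void handleKey(KeyId key)
{
    switch (key)
    {
    case KEY_UP:
        G_setpoint++;
        break;
    case KEY_DOWN:
        G_setpoint--;
        break;
    case KEY_ENTER:
        /* commit the value and update the display */
        break;
    }
}
```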

In CPB, all visible objects are associated with events. In the IDE, you can click on an object and then select its list of events. You can then name the function to call when an event occurs, such as, for example, when the mouse is dragged over that object. In the event handler for a mouse-down event, we can call the handleKey() function to activate the appropriate action in the user interface. The object that you use to attach this event handler will most often be a button or an image object, but mouse clicks can be detected on any of the other visible objects if needed.


Most user interfaces feature events that happen on a timed basis. For example, a light may flash regularly. You may wish to read a sensor regularly and update the display accordingly. In the simulation, you may not be reading a real sensor, but you might generate random data, read data from a file, or generate data based on some mathematical model.

To drive these timed events, we need to create one of CPB's timers. The timer has an associated function that is called regularly. This regular event allows us to update a counter to measure out any time periods we require. This is analogous to a timed interrupt on the target system.

Let's assume for the moment that the target system will not have an RTOS, and that we have a timed interrupt that increments a counter every 10 milliseconds. We would possibly have a simple super-loop architecture that might look like this:

int main(void)
{
    initialize();
    while (1)
    {
        if (counter % INTERVAL_1 == 0)
        {
            doWork_1();
        }
        if (counter % INTERVAL_2 == 0)
        {
            doWork_2();
        }
    }
}

where counter is a global variable that will be incremented in our interrupt. We can use any multiple of this single counter to detect the times when we should do other work. Numerous scheduling algorithms could be applied here, but those variations are beyond the scope of this column. The doWork() functions could include scanning the keyboard and reacting to those keys.

There is one important difference between the way the target code and CPB manage timed work. The CPB environment does not support continuous loops such as the one shown in the preceding code. After an event is handled, control must return to the Windows operating system to allow the application window to be updated. So the loop in my code is not practical.

Instead, we need to call the scheduleTick() routine shown below on a regular timed basis. We do this using the CPB timer objects:

void scheduleTick(void)
{
    static int counter = 0;

    if (counter % INTERVAL_1 == 0)
    {
        doWork_1();
    }
    if (counter % INTERVAL_2 == 0)
    {
        doWork_2();
    }
    counter++;
}

We now have two ways to handle a key press in CPB. The first is an event handler that directly calls the handleKey() function in our user interface code. The second is to drive the polling with a timer object, which will then check the state of each clickable object to see if it has been activated. The latter might be a better simulation of what will happen on the target, but the first method is easier to implement.

Most of the challenging scheduling issues are not of great concern in the user interface, since any deadlines on the user interface are almost always soft. If a flashing light is a little early one time and a little late another, it doesn't affect the behavior of the rest of the system. This is just as well, since the timing behavior of the user interface running on a PC will always be different from the target (the Windows environment makes it difficult to get accurate timing measurements at the millisecond level). So, while CPB is a good environment for simulating the user interface, it's not suitable for simulating the hard real-time aspects of your system.

High fidelity

If the prototype is going to be shown widely, a close resemblance to the real thing will help users understand the final product. One of the best ways to achieve this is to take a picture of the casing and use that as the background of the window being presented as the interface. To do this in CPB, you create an image object and import a bitmap of the interface into that image. The image might come from your CAD package or from a scanned picture of the plans for the device.

For each area on the picture that should respond to mouse clicks, place an empty image object over the area, and associate mouse events with that image. Since the clickable image is empty and therefore transparent, the user will see the background image, though events will go to the clickable image. In this way, a whole set of buttons can be implemented. Each button-event handler will then call a function such as:

void handleButton(ButtonId key);

The code inside this function is common code that runs on the PC or on the target. It doesn't care whether the event came from the CPB environment or from a keypad-scanning routine on the target.

Figure 2: The five-button interface simulation

Figure 2, which was also used in last month's column, shows a CPB-simulated, five-button user interface. The buttons you can see are the ones in the single large background image, but the clicks are captured by empty images that are placed in front of the background image.


The more complex the user interface, the greater the benefit of doing the initial development on a PC. Since graphical user interfaces tend to be more complex than nongraphical ones, the debugging environment of a PC is a big advantage.

Third-party graphics toolkits generally provide a library for a PC platform, allowing much of the development to be done there before porting to the target hardware. I usually program the graphics from the ground up, without a third-party library. If you're working with a small display and a low-power CPU, you end up doing the same thing: the third-party toolkits are generally only ported to 32-bit CPUs and to graphics controllers with 640×480 or greater resolution.

So I find myself wanting to test my graphics code in the CPB environment. Fortunately, the image object allows us to set individual pixels. Each image has a canvas, and we can draw directly to it. We create an image with width and height equal to the dimensions of the real screen. We can then access the canvas as a property of the image, and set individual pixels. For example, we can set the pixel at coordinates (20,30) to black with:

FrontPanelWindow->GraphicsDisplay
  ->Canvas->Pixels[20][30] = 0;  // black

where GraphicsDisplay is the name of the image that we created for displaying graphics. We can set other colors by assigning an RGB value, with eight bits for each component, or a total of 24 bits per pixel. For example, for purple, the red component is 0x80, the green component is 0, and the blue component is 0x80, so the 24-bit value to assign to the pixel is 0x800080.
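That packing arithmetic is worth capturing in one place. A minimal helper, following the article's layout with the red component in the high byte (the function name is an assumption):

```c
typedef unsigned long Pixel24;

/* Pack 8-bit red, green, and blue components into the 24-bit
   value assigned to a pixel, red in the most significant byte. */
Pixel24 makeColor(unsigned char r, unsigned char g, unsigned char b)
{
    return ((Pixel24) r << 16) | ((Pixel24) g << 8) | (Pixel24) b;
}
```

With this, the purple example is simply `makeColor(0x80, 0, 0x80)`, which yields 0x800080.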

The CPB environment typically supports more colors than the physical screen you are imitating, which may have fewer than 24 bits per pixel. In practice, you'll define constants for the colors commonly used in your application, with separate definitions for the CPB environment and the target system. For example:

#ifdef USING_CPB
// CPB with 24 bits per pixel
#define BLACK  0x000000
#define WHITE  0xFFFFFF
#define PURPLE 0x800080
#else
// Target with 6 bits per pixel
#define BLACK  0
#define WHITE  0xff
#define PURPLE 0x22
#endif

If you write a macro or function to perform plotPixel(x, y, color), the line-drawing, bitmap, and other routines can often be written in a platform-independent fashion. The exception is when the graphics hardware supports functions such as line drawing or bitmap copying; to take advantage of those features, your code will have to be target specific.
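To illustrate the layering, here is a sketch in which plotPixel writes to a RAM framebuffer; in the CPB build it would write to the image's canvas instead, and on the target to the display controller. The dimensions, the framebuffer, and `drawHLine` are all illustrative assumptions:

```c
#define SCREEN_W 128
#define SCREEN_H 64

/* A RAM framebuffer stands in for the display hardware here;
   only plotPixel would change between the CPB and target builds. */
unsigned char frameBuf[SCREEN_H][SCREEN_W];

void plotPixel(int x, int y, unsigned char color)
{
    /* Clip to the screen so callers can't corrupt memory. */
    if (x >= 0 && x < SCREEN_W && y >= 0 && y < SCREEN_H)
        frameBuf[y][x] = color;
}

/* A horizontal line built purely on plotPixel: everything from
   this layer up is platform independent. */
void drawHLine(int x0, int x1, int y, unsigned char color)
{
    int x;
    for (x = x0; x <= x1; x++)
        plotPixel(x, y, color);
}
```

Line, rectangle, and bitmap routines written this way port unchanged; only the single pixel-setting primitive is swapped per platform.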

Once you move up a level and you are writing code to render a screen layout, the code will always be platform independent. In most projects, I find that this portion of the code is the largest and changes most frequently. Once your line drawing works, you are unlikely to change it, but the layout of a screen might go through many iterations before you find a balance between fitting all the required information and making it pleasing to the eye.

CPB gotchas

The CPB environment's default settings do not always suit the uses I've described, so here are a few of the options that I set for any project I create. These properties generally only cause difficulty when using the application on a PC other than the one on which the application was originally developed. I often send an executable file to other parties, and I want them to be able to run that executable without having to install the CPB environment or any of its libraries.

In the Project Options dialog, there is a Linker tab. On this tab I turn off "Use Dynamic RTL." There is also a Packages tab, under which I turn off "Build with runtime packages." These two options enlarge the executable file, but eliminate dependencies on other libraries at run time.

On each form, a property called Scaled defaults to true. I always set it to false. This property allows the layout to change depending on whether the available fonts are the same size as the fonts used in the initial design; I find that, if set, it sometimes upsets the layout of the background picture of the interface.

Waiting for a download

Hopefully these last two columns have given you a feel for implementing prototypes on a PC before building hardware. Even if C++ Builder is not your chosen tool, most of the principles still apply.

As I write this, I am waiting for a three hour download of a software update to a target where I no longer have any debugging tools available. This is exactly the sort of environment where I like to try my code changes in a prototype first, to give me the best possible odds that the code changes will work the first time. I am keeping my fingers crossed!

Niall Murphy has been writing software for user interfaces and medical systems for ten years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Murphy's training and consulting business is based in Galway, Ireland. He welcomes feedback.
