Usability for Graphical User Interfaces

Adding a graphical display to your product may allow you to add more features in a smaller space, but it also raises usability issues.

Last December I discussed usability principles that applied to any user interface, but concentrated on those with knobs, buttons, and switches (“Principles of User Interface Design,” p. 55). Once you add a graphics display to the user interface, a host of new issues arise and the embedded programmer has to address many of these issues in a very different manner from PC application programmers.

With a non-graphical display, one layout of buttons and displays has to be designed and evaluated. With a graphical user interface (GUI) there is no limit to the number of possible layouts. Making each one user-friendly, while remaining consistent with the others, is a big challenge. If you do it well, the payoff is an interface that may look simple to the novice, but contains enough sophistication to satisfy the expert user. The interface should always display just the required information, keeping any superfluous information hidden unless specifically requested.

Disadvantages of a graphical interface

While a GUI has many advantages, it is important to note a couple of the disadvantages. Though a GUI allows a number of different controls on the screen, they all have the same tactile feel when making an input. If the input is via a touchscreen then they all feel flat. If the input is via a trackball, then the same roll-and-click motions are used to manipulate any of the controls. With custom controls, a throttle controlling the speed of an aircraft will be physically larger and have a heavier feel than the volume dial for the radio. This communicates the significance of the action to the user. Imagine trying to drive your car with a mouse and screen as the only controls and you will get an idea of how the feeling of control can be lost.

Custom controls can be laid out in positions that fit with the function they perform. If a VCR has an eject button, intuition says to place it beside the slot through which the tape will emerge. If a GUI is the only means of controlling the device, all controls must appear on that display, which means being further from the related hardware. Another disadvantage of the GUI is that space does not generally permit the important controls to be permanently visible. This may not be acceptable if the device is used in a situation where the user may need emergency access to certain controls, or where some monitored information must always be visible.

A related problem is that if only a GUI is used, it will not be possible to have all of the controls visible at all times. This means that the user may have to explore the interface to find some of the functions. Users may not choose to explore the interface, unless they have reason to believe that functions are present that they have not yet discovered. With all of the controls visible they are more likely to ask themselves “What is that for?” or even better, “That dial must control the time delay,” if the dial's purpose has been made obvious by its location and labeling.

Many embedded products get the best of both worlds by adding a graphics screen to support peripheral information, while the most important user dialog still takes place using custom controls. This is an attractive option. It allows little-used modes, such as configuration modes, to be implemented using the GUI alone, while normal running utilizes both the GUI and the custom controls. While the user is manipulating the custom controls, information related to the changes may be displayed on the graphics screen. For example, as the flow of water in a pipe is adjusted on a dial, a diagram depicting the tank could show the water level rise and fall as the user turns the dial up and down. Such graphics are particularly useful for novice users who are building up a conceptual model of how the system works.

Getting the most out of a GUI

Now that you have the power to make your interface graphical, you still have the task of designing interactions that are intuitive. Cooper discusses many of the design issues that concern the graphical designer in the desktop domain.[1] Most of the advice given there applies to a greater or lesser extent to embedded systems. Be warned that designing screen layouts is a task better suited to an industrial engineer or a graphic designer, though many programmers are obliged to perform it. Most organizations do not see usability and interface design as concerns separate from software engineering. On a small project, this may be an opportunity for a programmer to learn a new skill.

Once you have made the commitment to place a graphical display on your product, you have to decide to what extent it will be used. Some displays will be output only. The display only provides information, possibly related to the changes being made on the custom controls. An output-only display is not really a GUI since it only provides one-half of an interface, which ought to be a two-way dialog. A number of input devices are possible. The simplest is to have a line of buttons at one or more of the edges of the screen and to print labels on the screen next to those buttons. This technique is commonly employed on automatic teller machines. In that situation the thickness of the glass used to protect the screen can make it difficult to judge which label matches which button. This offset is known as parallax. Figure 1 shows two views of an automatic teller machine that demonstrate this problem. The further a display is from the plane of the front panel, the more exaggerated this problem becomes.

To allow any area of the screen to be selected, some form of pointing device is required. Mice are ubiquitous on the desktop but they have a number of disadvantages in embedded systems. The mouse's tail, which is the cable connecting the mouse to the device, may get caught up in any nearby moving parts. If the mouse is wireless, perhaps employing infrared technology, then it will be prone to getting lost. The mouse needs a flat resting area where it can be manipulated. If such an area is not available, it could be mounted on the device, or the top of the device itself could be used. One of the problems here is that the flat surface will then accumulate manuals, charts, and abandoned cups of coffee. Laptops offer some inspiration for mouse alternatives that would still allow a pointer to be used on the screen. Trackballs and pens are two promising alternatives, though the pen shares the mouse's disadvantages: it either has a tail or is prone to getting lost. Touchscreens are quite a popular choice, and are covered in their own section. Some simpler forms of input are also possible, such as arrow keys, or less conventional controls such as voice or foot pedals.

Having decided on the mechanical form of input, the designer must now decide the level of the interaction. It could be simply a text interface, with text labels for the mechanical buttons, or, if a pointer is used, the sensitive areas of the screen could simply contain text. Text menus could be implemented on such a platform. The next level of control is where the pointer is used to manipulate some graphical controls such as sliders, pull-down menus, and 3D buttons. Usually a standard set of controls is available that can be reused many times. Once a user has learned a particular control, he will recognize it and immediately know how to manipulate it. For example, a check-box, as seen in Figure 2, provides a way of turning some attribute on or off. The number of controls used in any application should be kept low to minimize the learning curve for new users. When choosing the set of graphical controls, be careful of control mechanisms that work on the desktop but may not transfer to your embedded system. Double-clicking is hard to learn, especially if your target users have never used a desktop computer. Dragging may be difficult with some input mechanisms, such as touchscreens.
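To return to the check-box example: packaged as a reusable control, it needs little more than a rectangle, a flag, and a hit test. The sketch below is only illustrative; the names and coordinate conventions are invented here, and drawing is left to whatever graphics library the platform provides.

typedef struct {
    int         x, y, size;   /* screen position and side length   */
    const char *label;
    int         checked;      /* non-zero when the attribute is on */
} CheckBox;

/* Returns non-zero if a touch or click at (px, py) toggled the box;
   the caller then redraws the control. */
int checkbox_touch(CheckBox *cb, int px, int py)
{
    if (px >= cb->x && px < cb->x + cb->size &&
        py >= cb->y && py < cb->y + cb->size) {
        cb->checked = !cb->checked;
        return 1;
    }
    return 0;
}

Because every check-box in the interface goes through the same two routines, they all look and behave identically, which is what lets the user transfer what he has learned from one dialog to the next.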

Direct manipulation

The next level of GUI is to provide interactions where the input and output is graphical by nature, not just a set of controls that could have been implemented with mechanical switches, dials, and sliders. Instead of outputting a numerical value, the value could be graphed over time giving the user a better sense of the changes within the control process. A robotic arm could be controlled by moving a graphical representation of the arm on the display. This type of control is known as direct manipulation. When done well it is far more effective than the controls previously described, but it usually takes more programming effort to implement.

Direct manipulation is not necessarily a manipulation of the image of the physical item being controlled. It is often a manipulation of a more abstract representation. Figure 3 shows a flashing light being controlled, though this interaction could apply to any digital signal. If the user was required to enter the duty cycle as a percentage and the period in seconds, then the user would actually have to know what duty cycle and period mean. With a graphical control, the diagram makes it obvious what the result of a change will be.
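As a rough sketch of what sits behind such a diagram, suppose the waveform is drawn at a fixed horizontal scale and the user drags markers for the rising edge, the falling edge, and the next rising edge. All of the names here are illustrative assumptions, not code from any particular product.

typedef struct {
    double period_s;    /* signal period in seconds          */
    double duty_pct;    /* on-time as a percentage of period */
} SignalTiming;

/* Convert marker positions (in pixels) into the values the control
   code actually needs. pixels_per_second is the diagram's scale. */
SignalTiming timing_from_markers(int rise_px, int fall_px,
                                 int next_rise_px,
                                 double pixels_per_second)
{
    SignalTiming t;
    double on_px     = (double)(fall_px - rise_px);
    double period_px = (double)(next_rise_px - rise_px);

    t.period_s = period_px / pixels_per_second;
    t.duty_pct = 100.0 * on_px / period_px;
    return t;
}

The user never sees the words duty cycle or period; the software derives them from the picture.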

While graphically representing the entities to be manipulated is a good thing, it is not always clear whether you should imitate the controls that would have been on the device had the GUI not been present. For example, if previous generations of a device had analog needle indicators to show temperature, should you graphically display a needle that sweeps an arc, as an imitation of the old control panel? This will look more familiar to the users of the older device. While this familiar appearance will help with a good first impression, be careful not to sacrifice the overall quality of the interface for a feature that scores good points in appearance only. If the concepts being presented to the user are the same as a previous device, users will adapt quickly to a different appearance.

Loss of real estate is another issue. A line graph could show a number of temperatures in the space occupied by a needle as seen in Figure 4. Showing more information may not be desired when dealing with very novice users. In those cases, using imitations of the controls that the user is accustomed to may be reasonable, but a more powerful interface will be possible once the users are comfortable with the new technology.

Color

When I was in college, one of the standing jokes whenever a group of computer scientists gathered to ponder a new project was for someone to ask “Well, what color should it be?” Of course, these were software projects, so color was irrelevant. Since then graphics has become such a large part of the software world that most of us, at some point, have actually had to decide what colors we will use. Again, this is an area where some professional graphic design advice might be useful; the number of horrific home pages put up by software engineers shows that some of us should be banned for life from ever holding a paintbrush. Do be careful not to use every color under the sun. Video games make much use of color, but their intent is to be visually striking, or even shocking; for control applications, you probably want more subdued colors that hint rather than scream.

Color is best used as a redundant cue. If there is some other visual indication of the groupings that you are defining, then you will not have excluded the roughly one in 12 males who have some degree of color blindness. It also allows for black-and-white documentation, which may be an issue in cost-sensitive applications.

Menus

Many embedded systems display a simple menu, or maybe nested menus. Menus make it easier for the user to make a choice from a limited set. In many cases, once a menu option has been chosen the user is then presented with a number of questions, at least if the interface is small and text oriented. Alternatively, the user could be presented with a form to fill in, though a form-filling interface generally requires a greater number of keys, and these may not be available on your interface.

In nested menus, knowing where you are in the hierarchy is an interesting challenge. With small displays there will be cases where the user can see the current option, but cannot see the name of the parent menu. If the option facing the user is “Halt” then it will be important for the user to remember if the parent menu is called “Printer” or “Download.”

The ideal solution is to display the current position, along with the names of the parent, grandparent, and other ancestors. This is done effectively in the left-hand pane of Microsoft Windows Explorer. An interface of limited size may not have the luxury of displaying the information so explicitly. It may, however, be possible to show the name of the parent menu at the top of the menu area at all times.
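If the menu structure is held as nodes with parent pointers, showing that ancestry is cheap. The following is a minimal sketch using hypothetical names; the menu representation is an assumption, not a prescribed one.

#include <string.h>

typedef struct MenuItem {
    const char      *name;
    struct MenuItem *parent;    /* NULL at the top-level menu */
} MenuItem;

/* Build a "Printer > Setup > Halt" style string into buf, which the
   caller has set to an empty string before the call. */
void menu_path(const MenuItem *item, char *buf, size_t len)
{
    if (item->parent != NULL) {
        menu_path(item->parent, buf, len);
        strncat(buf, " > ", len - strlen(buf) - 1);
    }
    strncat(buf, item->name, len - strlen(buf) - 1);
}

On a narrow display the resulting string can be clipped from the left, so that the immediate parent is the last name to disappear.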

When designing a menu, there is a trade-off between how wide and how deep it is. If each level offers many options, the menu need only be a couple of levels deep; if each menu is kept short, you end up with very deep menus that can be a challenge to navigate. Having a long list when the items differ in value, but not in kind, such as printer types, is quite acceptable. On the other hand, if the menu is itemizing different concepts, such as different commands that may be sent to the printer, longer lists are more intimidating.

The magic number of seven is often quoted in cognitive psychology texts as the number of chunks of information that can be retained in short-term memory.[2] That would suggest that menus with more than seven choices will be harder to use than shorter ones, because by the time the user is looking at option eight, option one has been pushed into a less active part of his memory. If keeping to this rule leads to deeper menus, however, it is better to break it: despite what the psychology texts would suggest, experiments have shown that a few levels with many alternatives work better than many levels with few alternatives.

In many small-screen situations it will not be possible to make all of the menu visible at once. Scrollbars are useful in this situation. On a desktop interface the user can usually manipulate the scrollbar with a mouse. The scrollbar is acting as an input mechanism as well as an output mechanism. However on many restricted embedded devices, the scrollbar is used for output only. Figure 5 shows two scrollbars that can be drawn with a minimum number of pixels. The position of the middle portion of the scrollbar shows the user where he is relative to the top and bottom of the list. The size of the middle section shows the user what percentage of the entire menu is currently visible on the display.
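The arithmetic behind such an output-only scrollbar is slight. Here is a sketch, with illustrative names, that computes the position and height of the middle section within a track that is track_h pixels tall.

typedef struct { int y, h; } Thumb;   /* offset and height in pixels */

Thumb scrollbar_thumb(int total_items, int visible_items,
                      int first_visible, int track_h)
{
    Thumb t;

    if (total_items <= visible_items) {   /* everything fits on screen */
        t.y = 0;
        t.h = track_h;
        return t;
    }
    /* Height shows what fraction of the menu is currently visible. */
    t.h = track_h * visible_items / total_items;
    if (t.h < 3)
        t.h = 3;                          /* never let it vanish entirely */

    /* Position shows where the visible window sits in the full list. */
    t.y = (track_h - t.h) * first_visible / (total_items - visible_items);
    return t;
}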

For nested menus it is often a good idea to make menu items that lead to a sub-menu visually distinct from menu items that perform an action. One method is to add three dots to the end of an item that leads to a further menu. In Figure 5 the Service item leads to another menu while View and Edit do not.

A common navigation aid in menus is to number the items in the menu. This lets the user know how many options he has viewed, and whether the menu has wrapped around. There is a danger that the number may be misconstrued as a value. If the user sees “3 Pressure” as the third menu item in a list, there is a possibility that he may assume that the value of the pressure is 3. The user may not realize that he must go down one level in the menu to see the actual pressure value. This is only a problem if space is so restricted that the user can only see one menu item at a time, since the order of the numbering of the items is obvious if many items are visible together.

If you want to be adventurous you could try inserting earcons (an icon that you can hear) into your menus.[3] If a different sound is made in response to key clicks in the menu then this may help prompt the user. This could replace the key click. You would not want a different sound for each menu item, but the sound at a leaf could be different from the sound on reaching a node. The top level menu could have a unique sound to let you know that you cannot go any further up the hierarchy.

Windows

On the desktop, the GUI is synonymous with windows. It is assumed that any GUI has to use a mouse or other pointer device and provide the ability to have many overlapping, scrollable windows which can each run an independent application. The embedded environment does not have the same needs. Effectively, one application is running. This application may have a number of different states, each with a corresponding layout. I call these layouts dialogs. There is generally no need to allow the user to navigate arbitrarily from one dialog to another. The application can switch as the state of the system changes, or can allow a button on certain dialogs to enable the user to switch to another one. The idea of a generic mechanism to minimize one dialog to an icon while the user uses another dialog does not apply as it would on a desktop PC.
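One lightweight way to organize this, sketched below with hypothetical names, is a table of dialogs, each supplying a draw routine and an input handler that returns the next dialog to display. The table contents and the key-waiting routine are assumptions standing in for whatever the product provides.

typedef enum { DLG_MAIN, DLG_CONFIG, DLG_ALARMS, DLG_COUNT } DialogId;

typedef struct {
    void     (*draw)(void);
    DialogId (*handle_key)(int key);   /* returns the next dialog */
} Dialog;

extern const Dialog dialog_table[DLG_COUNT];  /* filled in per product   */
extern int wait_for_key(void);                /* blocks on input driver  */

void ui_task(void)
{
    DialogId current = DLG_MAIN;

    for (;;) {
        dialog_table[current].draw();
        current = dialog_table[current].handle_key(wait_for_key());
    }
}

A change in system state can also force a switch, for example by having the input routine return a pseudo-key, so the display tracks the state of the machine rather than a window manager's bookkeeping.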

With a window manager there is a learning curve as the user figures out how to resize windows, move windows, and switch between active windows. All of this configuration work is overhead that does not contribute directly to solving the problem at hand. The learning time will be short for computer-literate users, but it will be non-trivial for any users who are not used to computer systems. A lot of embedded system users consider themselves to be computer illiterate. These people withdraw money from automatic teller machines, record programs on their VCR, and cook food in a microwave. They do not realize that any of these acts involve using a computer. If they are introduced to a piece of equipment with a windowing interface they will immediately perceive that piece of equipment to be a “computer” and may resist using it.

When using a window manager, the frames for individual windows take up real estate on the display. If you are using a small LCD screen, that real estate might be a big price to pay. The space occupied allows the user to distinguish the boundaries of the window, but does not provide any useful information.

Some mechanisms can be borrowed from full-featured window managers. In a window manager, each window maintains its state when the user manipulates another window. This means that there is one thread of control per window. An embedded system could choose to allow the same power for different dialogs.

In cases where you want to use a full-scale window manager, ask yourself: will any important system state information ever be hidden because it is displayed on a window that is covered by another window? On a desktop system, most changes on the screen are in response to user input. However, on an embedded system a lot of the output may be in response to external events. You may need to use windows that pop up in response to external events to notify the user of some event in the system. If this pop-up has to be confirmed in some way, it will interfere with the normal flow of work and could be quite annoying. An alternative is to devote a small portion of the screen to status, and ensure that this area is always visible and never covered by a window. The advantage of the pop-up approach is that the system knows if the user has confirmed that an event has occurred. Some events may lead to an alarm if the user does not acknowledge them.
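A sketch of that escalation logic follows; the names, timeout value, and tick source are all illustrative assumptions. The notification is posted when the event occurs, and if it is still unacknowledged after a timeout the system raises an alarm.

#define ACK_TIMEOUT_MS  10000UL

typedef struct {
    const char   *message;
    unsigned long raised_at_ms;
    int           acknowledged;
} Notification;

extern unsigned long millis(void);          /* hypothetical tick counter */
extern void raise_alarm(const char *msg);   /* hypothetical alarm hook   */

/* Called periodically from the UI loop. */
void notification_poll(Notification *n)
{
    if (n->acknowledged)
        return;
    if (millis() - n->raised_at_ms > ACK_TIMEOUT_MS)
        raise_alarm(n->message);   /* the user never confirmed the pop-up */
}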

When discussing menus I pointed out that people prefer menus with many items at each level, but which form a shallow tree. Much the same question arises with a GUI that has a number of different dialogs. The more dialogs available the greater the navigation problem. Once navigation becomes difficult, you have to add more buttons to provide shortcuts from one screen to another, using up valuable real estate. Try to make each dialog reflect a role, or a phase of a task, so that it is less likely that the user will be changing dialogs often.

If many dialogs have to be designed, it is easy to get sloppy in the layout. One automatic teller machine that I use regularly has a screen as shown in Figure 6. The customer has just asked for the account balance. The balance is displayed in the middle of the screen, but the question “Do you want any further transactions?” and the two answers are separated by the display of the balance. Simply reading this screen out loud would tell you that something was wrong. After the user has read the question, the next thing that his eyes should find are the possible answers, not some other unrelated information.

Wizards

If you have a number of different dialogs of information to present to the user, then you may want to consider a wizard approach. Wizards are much maligned in the desktop usability community. They work well for installation of software packages because the goal is very well defined, but can become clumsy in other circumstances. They are designed to suit the novice user who wants to be led along a path, but the more sophisticated PC user finds them restrictive. Wizards are a very directed mechanism.[4]

It is worth giving wizards more consideration in an interface for an embedded device, since the level of user will often be lower (some customers will actually want to use the device as soon as they take it out of the box, believe it or not), and the number of once-off users may be higher. The wizard scheme can be used on the whole screen, with left and right arrow keys as the only means of navigation. With only those two keys the user can't get lost, and will easily recover to a known dialog should he become disoriented.

Places where there are two branches may require a whole screen dedicated to navigation, while other dialogs are output only, but give the user a sense of progress. Many simple screens will work better than a few complex ones.

Going pointerless

On the desktop a mouse can be assumed, but an embedded GUI may have to function with no pointing device. Keys may be aligned with one or more edges of the screen, allowing the key labels to be displayed on the screen itself. These keys are sometimes called soft keys to indicate that their meaning can be changed by the software. If you take this approach then it is worth noting that aligning the soft keys on the sides of the display allows for longer labels than aligning the buttons on the top or bottom edge. If the buttons have to go on the bottom, staggering the labels as shown in Figure 7 will allow for longer labels, though it does take up a lot of real estate. This may work well in a situation where the soft keys are not used often. Note also that this layout does not suffer from the parallax problem (examined in Figure 1) as badly as soft keys that are aligned at the side of the screen. There could still be a misalignment if the viewer is looking at this display from the side, but typically the parallax problem is caused by the varying heights of users.

Another approach is to have keys with generic functions, such as arrow keys, which do not have to be aligned with the screen. In this case there will usually be some sort of select or accept key to indicate that the currently selected item is the one that we wish to apply. The reason for this is that the arrow keys may traverse a number of options before reaching the desired one. As the user traverses the options, their images on the display will change in some way to show the currently selected item. The user does not want to activate the options as they are traversed, so a separate action, such as an accept key, is used when the desired control is reached.

Microsoft Windows is designed so that it can be completely controlled from the keyboard. When selecting items in a dialog box, the Tab key can be used to jump from button to button, or from selectable item to selectable item. The user presses Return once he has reached the intended item. This can be imitated on embedded systems when no pointer is available. On MS-Windows this mechanism is rarely used, because the visual indication of which item is currently targeted is subtle. It is a transparent rectangle drawn with a broken line. If this is the only mechanism that you are making available, you can afford to make the visual distinction of the target item more striking. In Figure 8 the background color of the target item is changed. As long as the number of options is limited, the “Next” key can simply wrap around to the first item after the last item has been passed. This removes the need for a second key to traverse the list in the opposite direction. An improvement on a “Next” key is to use a dial. As the user turns the dial the target item changes. The “Accept” key would work as before. Once a dial is available it can also be used for changing numerical values. It can be useful to place an LED beside such a dial to indicate the times when turning the dial is valid.
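Stripped to its essentials, the navigation amounts to a focus index that the “Next” key or dial advances and the “Accept” key acts on. The sketch below uses illustrative names and assumes each selectable item carries its own action.

typedef struct {
    const char *label;
    void      (*activate)(void);   /* action for this item */
} Selectable;

typedef struct {
    Selectable *items;
    int         count;
    int         focus;             /* index of the highlighted item */
} FocusGroup;

void focus_next(FocusGroup *g)     /* "Next" key or one dial detent */
{
    g->focus = (g->focus + 1) % g->count;   /* wraps past the last item */
}

void focus_accept(const FocusGroup *g)      /* "Accept" key */
{
    g->items[g->focus].activate();
}

The redraw code then only has to give the item at the focus index its more striking background.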

One advantage of this scheme is that the selectable items could appear anywhere on the screen. This is useful if the selectable items are placed on a diagram. If the screen displays a map, the “Next” key or the dial could step through each town on the map until the desired one is selected. This would not be feasible using soft keys aligned at the side of the screen, since the names of the towns would have to be placed beside the keys rather than at their appropriate positions within the map.

Touchscreens

Touchscreens are a popular form of input for embedded systems, especially in information kiosks where the walk-up user is likely to be a novice. Initially the user must be shown that the screen itself is touch sensitive. Users do not immediately guess this, since years of watching TV have conditioned most people to assume that a screen is an output-only device. For a walk-up application it can be useful to display a single button in the middle of the screen with the label “Touch me to begin.” This teaches the user that the screen is touch sensitive. It also illustrates the visual hinting used to indicate which parts of the screen are touch sensitive: for example, a 3D look on the buttons.

Sensitive areas of touch screens have to be large if they are to accommodate relatively stubby human fingers. A pen input allows far more precision, but connecting a pen to the device is not always convenient. We are left with the problem that while the graphics displays on embedded systems tend to be smaller than their cousins on the desktop, the amount of space a control occupies is greater if we use a touchscreen. This can lead to a shortage of real estate on some interfaces. Such considerations often make the investment in a larger screen worthwhile even if the resolution does not increase.

It is possible to make the sensitive area larger than the area visually occupied by a button or other control. An algorithm to achieve this can be found in my book.[5] This may alleviate the problem to some degree. Once the display becomes crowded, it is important to realize that vertical lists will be harder to use than horizontal lists. The reason for this is that the user's hand may slip down as it is being removed from the display. This will be most noticeable if there is no perch for the user to rest the heel of his hand. Even when there is a heel rest, this will be an issue because the knuckle will bend as the user moves his finger, possibly accidentally selecting an item lower down on the display. This is not an issue for horizontal lists since the finger is unlikely to move sideways.
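Returning to the enlarged sensitive areas: the algorithm from the book is not reproduced here, but one simple approach is to grow each control's hit rectangle by a margin when testing the touch point. The names and the margin value below are illustrative.

typedef struct { int x, y, w, h; } Rect;

#define TOUCH_MARGIN 8   /* extra pixels of sensitivity on each side */

int touch_hits(const Rect *visible, int px, int py)
{
    return px >= visible->x - TOUCH_MARGIN &&
           px <  visible->x + visible->w + TOUCH_MARGIN &&
           py >= visible->y - TOUCH_MARGIN &&
           py <  visible->y + visible->h + TOUCH_MARGIN;
}

If the enlarged areas of two controls overlap, the touch can be awarded to the control whose centre is nearest the touch point.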

Some manipulations are far more difficult on a touchscreen than with a mouse, since pixel-perfect motion of the finger is quite difficult. On one home automation project the initial version of the touchscreen interface used clock-faces to set timers.[6] The user was required to touch a hand of the clock and move it in rotation until it pointed to the appropriate time of day. Users found this type of motion difficult to control. Satisfaction with the interface rose when the clock faces were replaced with timelines, where particular times were marked with flags. The user could move the flags with a straight-line motion, which is less challenging than a rotation.

Touchscreens combined with audible feedback and speech synthesis can be made quite usable for the visually impaired. Such interfaces are particularly applicable to publicly available information kiosks and automatic teller machines. A gesture such as running a finger along the bottom of the touch screen can indicate that the user wishes to operate in a talking finger mode. Verbal feedback can be provided as the finger moves into a touch sensitive area. Alternatively each option currently available on the screen could also be made available via a speedlist. The speedlist may be selected by placing a finger on the top left of the screen, and dragging it down. As each item in the vertical list is touched, it is verbally announced. The speedlist has the advantage of not requiring the user to navigate a two-dimensional area that they may not be able to see. Once the desired selection has been reached, an off-screen confirmation button can activate the selection. These and other techniques for making touchscreen appliances accessible to the visually impaired are described in Gregg Vanderheiden's article.[7]

As you can see, the variety of problems facing the designer of a small-scale GUI is wider than those faced on the desktop. Fortunately the selection of solutions is also quite broad. No single graphics library will dictate the look and feel of all embedded GUIs because no one set of widgets will fit all applications. For this reason, there is still scope for real innovation in GUI design for small devices.

Niall Murphy has been writing software for user interfaces and medical systems for ten years. He is the author of “Murphy's Law,” a regular column in ESP, and of Front Panel: Designing Software for Embedded User Interfaces, published by R&D Books.
