Moving from a set of mechanical controls, such as dials, sliders, and buttons, to a graphical touchscreen is often a developer's dream come true. The developer is no longer restricted to a fixed family of settings limited by the number of dials on the front panel.
Menus allow infinite user-configurable options. Scrolling means that you can provide endless lists of data. The GUI provides a window into the internals of the device that was previously unthinkable.
Many designers get carried away with the new toy and foist upon users far more complexity and data than they want or need. Making some of the controls mechanical rather than graphical can have major usability advantages, which we will explore in this article.
While a GUI has many advantages over custom controls, it's important to note a couple of the disadvantages. A GUI allows a number of different controls on the screen, but they all have the same tactile feel when making an input. If the input is via a touchscreen, they all feel flat. If the input is via a trackball, the same roll-and-click motions are used to manipulate any of them. It's possible to build some tactile feedback into a touchscreen, but it's limited.
In contrast, a throttle controlling the speed of an aircraft will be physically larger and have a heavier feel than the volume dial for the aircraft radio. These differences communicate the significance of the action to the user. Imagine trying to drive your car with a mouse and screen as the only inputs and you'll get an idea of how the feeling of control can be lost. The car is quite an extreme example. In a car, it's important to be able to perform one action with each hand at the same time, such as steering and changing gears.
Operating two controls simultaneously on a single GUI is generally not possible. Even when the technical challenges are overcome with hardware capable of interpreting multiple touches, you still face the trickier problem that if two independent controls are on the screen, you're likely to cover information with your hand while performing one action, but that information needs to be visible to perform the second action.
The iPhone allows multiple touches, but they're generally connected to the same action; for example, two fingers are used to shrink a single picture. The gesture is multitouch, but it's still a single gesture.
Custom controls can be laid out in positions that fit with the function performed. If there is an eject button on a DVD, it's intuitive to place it beside the slot through which the disk will emerge. If a GUI is the only means of controlling the device, all controls must appear on that display, which means that those controls are further from the related hardware.
Another disadvantage of the GUI is that space doesn't generally permit the important controls to be permanently visible. This may not be acceptable if the device is used in a situation where the user needs emergency access to certain controls, or where some monitored information must always be visible.
Custom controls can scale the user's action or exaggerate the process being controlled. A bigger steering wheel allows a driver finer control over the angle of the front wheels. The da Vinci surgical robot enables the surgeon to move his hand several centimeters to control the robot, which only moves several millimeters, allowing the surgeon a level of control that he could not achieve if he were holding the scalpel himself (Figure 1).
Many embedded products get the best of both worlds by adding a graphics screen to support peripheral information, while the most important user dialog still takes place through custom controls. This combination is an attractive option. It allows little-used modes, such as configuration modes, to be implemented using the GUI alone, while normal running uses both the GUI and the custom controls.
While the user is manipulating the custom controls, information related to the changes may be displayed on the graphics screen. For example, as the flow of water in a pipe is adjusted on a dial, a diagram depicting the tank could show the water level rise and fall as the user turns the dial up and down. Such graphics are particularly useful for novice users who are building up a conceptual model of how the system works.
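As a rough sketch of how such dial-driven feedback might be wired up (the names and numbers here are invented for illustration, not taken from any real product), a periodic tick could combine the dial position with a simple model of the tank and redraw the level:

```c
/* Hypothetical sketch: map a flow-control dial position (0-10) to a
 * tank level shown on the graphics screen. All names are illustrative. */

#define DIAL_MAX  10
#define LEVEL_MAX 100

/* Current tank level, as a percentage, redrawn on the GUI each tick. */
static int tank_level = 50;

/* Called on every timer tick while the user turns the dial: inflow is
 * proportional to the dial position; a fixed outflow drains the tank. */
void tank_tick(int dial_pos)
{
    int inflow  = dial_pos;   /* units per tick, from the dial    */
    int outflow = 5;          /* constant drain, for illustration */

    tank_level += inflow - outflow;
    if (tank_level > LEVEL_MAX) tank_level = LEVEL_MAX;
    if (tank_level < 0)         tank_level = 0;
    /* gui_draw_tank(tank_level);  -- redraw would happen here */
}
```

Turning the dial above the drain rate makes the drawn level climb tick by tick; turning it below makes it fall, which is exactly the cause-and-effect picture a novice needs.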
Consumer items often find a good balance between graphical interfaces and custom controls. Many camcorders have a touchscreen interface (Figure 2), but they retain the record button and a spring-loaded lever to control zoom. Zooming with a touchscreen control would involve covering the picture that you're trying to film and wouldn't give the same level of physical control of the lens.
Cell phones also keep vital buttons, such as the answer button and the volume control, outside of the graphical interface. While the iPhone achieved its minimalist look by having only one visible button, the Home button, it still allows calls to be made and answered with a button attached to the earphones, and the volume controls are discreetly placed on the edge of the phone.
Two automotive designs
I recently drove two different Toyota cars. One was a Prius and the other a Highlander, both hybrid vehicles, but it is their usability rather than their fuel consumption that we will examine here.
The dashboard controls on the two models were quite different. The Prius had a graphical touchscreen that included a sat-nav system, while the Highlander had a more conventional set of controls, with physical dials and buttons dedicated to each task.
I will not try to make a judgement on which was better, since they achieved very different aims. Advanced features such as sat-nav, which are very graphical in nature, could never have been implemented on the Highlander's dash, so the comparison would be fairly meaningless. Instead I am going to focus on one feature that was very different on each; examining those differences will teach us a couple of things to watch out for in our own design.
I was driving these cars in sunny southern California, so the air conditioning was on and needed frequent adjustment. There were two settings: temperature and fan speed. The Prius allowed control through the touchscreen, while the Highlander had two dials, one for temperature and one for fan speed (Figure 3).
After a week in each car, there was no doubt in my mind that the dials made control far easier. The old-fashioned low-tech method out-performed the sophisticated GUI by a mile. Let's look at the reasons why.
First of all, the mechanical dials were always in the same spot. You could just put out your hand and find them without looking. On the Prius, there was an off-screen “Climate” button that you pressed first, and then you had to select the fan speed using an onscreen button, if that was the item you wanted to adjust. So in terms of navigating the GUI, the Prius forced you to do some work before you even got to the setting you wanted to change.
This difference is a fairly inevitable consequence of designing the controls in a GUI: you can fit far more functionality into a smaller space, but the compromise is that you have to navigate to some of those features. It does make me wonder whether the Prius should have taken a few of the most frequently used features out of the GUI and given them dedicated controls.
Of course, there will always be a cost trade-off here. Part of the motivation for designing with a GUI is to get rid of all of the other dials and buttons to save cost; just remember that the cost is not only in money, but sometimes in usability, too.
This navigation issue is more noticeable in an automotive application than on other devices, because you often want to keep your eyes on the road. In the Highlander, I found that I could reach out and turn the temperature dial with barely a glance. In the Prius, I would have to focus my eyes on the GUI and then back at the road multiple times before the setting change was complete.
Relative and absolute settings
Let's remove the navigation issue and compare the two interfaces with the assumption that the climate control settings are always available on the GUI. Even in this scenario, the interface with dials was superior.
Figure 4 shows the layout of the screen for adjusting the fan speed. There are five settings. The inverted box highlights the current setting. The user can press any one of the other boxes to select a faster or slower speed.
For some settings, the user cares about the absolute value that is set. For example, if I am entering today's date, then one number is right and all other numbers are wrong. In the case of fan speed or temperature, the user rarely cares about the numeric value. They want the fan to blow faster or slower, regardless of what the current value is.
The layout shown for the touchscreen requires the user to read the current value and touch to the left or right of it to make the fan slower or faster. This means that it's impossible to touch the screen to speed the fan up without first looking to see which box is currently inverted. The box to touch to make the fan go faster may be different every time.
With the dial, the user can put their hand on the same dial, in the same position, and turning clockwise will always make the fan faster. This is a control that can be used without looking at it. It's not particularly important whether the user actually looks at the display; what matters is that a control that can be used blindfolded is usually more intuitive than one that requires the user to read the display first.
I believe that the touchscreen control of fan speed would have been far better implemented as two buttons, one for faster and one for slower. That would have left the user with two touch targets rather than a row of five. Two buttons could then be made far larger, which is a big advantage. Remember, these users have to move a hand from the steering wheel to the touchscreen, and the further the hand travels, the harder it is to land accurately on a small target.
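The difference can be made concrete in code. In this hypothetical sketch (all names are invented), the same fan-speed setting is exposed both ways: an absolute setter that mirrors the row of five boxes, and a relative adjuster that mirrors a dial or faster/slower buttons and needs no knowledge of the current value:

```c
/* Illustrative sketch of absolute vs relative control of one setting. */

#define FAN_MIN 1
#define FAN_MAX 5

static int fan_speed = 3;

/* Absolute style: the user must read the current value to know which
 * neighbouring box gives "a bit faster". */
void fan_set(int speed)
{
    if (speed >= FAN_MIN && speed <= FAN_MAX)
        fan_speed = speed;
}

/* Relative style: "faster" (+1) or "slower" (-1) works blind; the code
 * clamps at the limits so repeated presses are always safe. */
void fan_adjust(int delta)
{
    int s = fan_speed + delta;
    if (s > FAN_MAX) s = FAN_MAX;
    if (s < FAN_MIN) s = FAN_MIN;
    fan_speed = s;
}
```

With the relative form, the same physical gesture always means the same thing, which is precisely the property the dial had and the five-box layout lost.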
The first reason for the difference in the quality of the user experience on these two Toyotas was the difference between mechanical controls and GUI controls. The second was the treatment of individual settings: whether a setting is expressed as an absolute value or as a relative adjustment that increments or decrements the current value. There is an opportunity here to reduce the user's workload a little, and every little bit helps.
Hard versus soft option
Once a design reaches the point where it has been agreed that the product will include a GUI, there is usually a debate as to how much or how little goes in the GUI and how many controls should be dedicated mechanical controls outside of the GUI.
It's very tempting to put all control inside the GUI. The advantages to the design team are huge. It allows the hardware to be constructed before the final family of settings has been chosen, since software can add or remove controls later on.
Since there are no off-screen controls, there is no need to put permanent labels on them. This allows the manufacturer to build units without concern for which country will eventually receive them. Once a single permanent text label is added to the device, that text has to be translated, and inventory must then be managed to ensure that each market receives units labeled in the correct language. This is a headache that the manufacturing facility would love to avoid.
Manufacturing cost per unit is lower if all controls are left in the GUI since the sliders, dials, and buttons increase the bill-of-materials for the product. They also increase the number of moving parts in the system, which usually means a shorter product life.
There are many advantages to the design team of putting all of the controls in a GUI, but you have to remember that the goal is to make the user happy, even if it occasionally causes a little extra work for the engineers. In any debate about which controls go in the GUI (soft controls) and which are mechanical (hard controls), be sure to factor in usability as a concern, as it's sometimes traded off against cost and manufacturability.
Hard and soft controls cooperate
When the mechanical team are working on the hard controls and the software team are programming the GUI controls, it's important to keep an overall view of the combination in mind. Some devices do a beautiful job of marrying the two types of input. On my digital camera, there is a dial that allows the user to choose the mode. The dial carries small icons. When you turn the dial, the display shows the name and description of the new mode. This vanishes after a few seconds, so the extra information does not clutter the picture being framed. By drawing a rounded outline around the icon being selected, the on-screen image looks like an extension of the physical dial. In other words, rotating the off-screen dial also rotates a disk segment displayed on the GUI, as shown in Figure 5. While this is tricky to describe, the brief video available at www.panelsoft.com/clip.htm makes it a lot clearer.
While many other cameras show the icon for the selected mode on the GUI, Sony makes this feature far better in two ways. First, the extra text means that the user can learn the meaning of the icons without resorting to the user manual. Second, the smooth integration of the on-screen and off-screen controls makes the device feel like a single user interface, rather than two distinct interfaces to two different parts of the device.
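One way such a coupling might be implemented, sketched here with invented mode names and functions rather than Sony's actual code: each detent of the physical dial steps a mode index, and the GUI briefly overlays the new mode's name beside the on-screen disk segment.

```c
/* Sketch of dial-to-GUI coupling; all names are hypothetical. */

#define NUM_MODES 4

static const char *mode_names[NUM_MODES] = {
    "Auto", "Portrait", "Landscape", "Night"
};

static int current_mode = 0;

/* Called once per detent of the physical dial; steps is +1 or -1
 * (the arithmetic also handles larger jumps and wraps around). */
const char *dial_turned(int steps)
{
    current_mode = (current_mode + steps % NUM_MODES + NUM_MODES) % NUM_MODES;
    /* gui_show_mode_overlay(mode_names[current_mode]);
     * -- overlay fades after a few seconds so it never clutters
     *    the picture being framed */
    return mode_names[current_mode];
}
```

The key design point is that one event source, the dial, drives both the hardware position and the on-screen segment, so the two can never disagree.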
If you can't achieve a smooth integration of hard and soft controls, then at least try to ensure that they do not interfere with each other. Buttons above the screen cause the hand to obscure the GUI while they are being used. Usability reviews should check that icons and text used on and off screen are consistent. If different developers are doing the hardware and software, it's not unusual for terminology and symbols to differ unintentionally.
References on the GUI to hard controls should be carefully checked for any causes of confusion. Figure 6 illustrates an on-screen message where the placement of ACCEPT and CLEAR is the reverse of the off-screen positions. While the instructions are not incorrect, they do increase the chance that the user will press the left-hand button thinking that it is the ACCEPT button.
Staying in control
Using a GUI opens up boundless possibilities for the designer, but hopefully some of the advice above will help you to carefully consider the tradeoffs when moving some or all of your functionality from mechanical controls onto a graphical interface.
Niall Murphy has been designing user interfaces for over 14 years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Murphy teaches and consults on building better user interfaces. He welcomes feedback and can be reached at . His web site is .