Designing user interfaces means striking a tricky balance between being helpful, being too helpful, and simply getting in the way. Here are some examples to guide you through these treacherous waters.
Driving home from work one day, I enjoyed the feeling of total control over the driving experience. On the small back roads cornering was smooth. Speed and direction responded crisply to slight changes in the steering wheel and pedals. A slight oversteer in one direction was easily and instinctively corrected by turning the wheel a small amount in the opposite direction, all without my even being conscious of the effort. On the freeway, traveling at speed, I was aware that an error of one or two feet to the left or right could be catastrophic for me and my neighboring drivers, yet keeping the car between the lines just wasn't that hard, and somehow all the other drivers around me avoided colliding as well. The accuracy of positioning the car over the course of 10 miles was impressive. Although I'm a good driver, I can't claim all the credit for avoiding a collision, since all the other drivers around me were having just as much success.
Arriving home, I reversed the car into a tight parking spot. A combination of low-tech mirrors and high-tech proximity detectors that give a helpful beep as the rear bumper approaches an obstacle enabled me to park within a few inches of the wall. As a percentage of the size of the car, this was an impressive piece of control if I do say so myself.
Apart from the achievement of getting from Point A to Point B, many people find driving a pleasure for its own sake. Travelers often prefer driving to relaxing in the passenger seat, not because driving is easy but because it gives them a sense of control. Having your hands on the steering wheel must appeal to some of our deeper instincts.
Entering my home, I realized the dinner I'd left in the oven that morning wasn't cooked. I had set the timer but apparently forgotten to press the button with the little clock icon one final time to tell the oven to transition to timed mode. Bummer. Settling for a bowl of cereal, I sat down to watch some TV. Surrounded now by three remote controls, I used one to change the channel, one to set the volume, and a third to select the input source: broadcast TV rather than the DVD player. The “universal” remote control I bought was still sitting idle–I'll have to read the user manual for the fourth time before I can work out how to program it.
There was nothing on TV so I went for my evening run. I'd like to check if my times are getting faster. My digital watch claims that it can record the last 60 timed events. The watch has more computing power than the Apollo Guidance Computer, so recording 60 numbers should be trivial–yet after 30 minutes poring over the user manual I was still baffled. Instead I used the blank back page of the manual to write down today's time and gave up on finding out about my previous runs. Something tells me that the designers of this watch do very little running themselves.
The feeling of control that I'd enjoyed while driving was now long gone. I was struggling with devices that conspired to make me feel less in control, and I hadn't even turned on my PC yet. Some user interfaces make the user feel like they're in the driver's seat and totally in control. Others make them feel like they're watching a car crash.
A recent Dutch study found that 50% of returned consumer goods are in perfect working order. In most cases the device had not failed; rather, the user had failed to master the device. Why does this sort of widespread customer befuddlement afflict the electronics business but not other highly technical industries? This doesn't happen in the automotive industry, for example.
What have the car manufacturers gotten right that the consumer- and industrial-electronics designers fail to emulate? Any high-tech device should make the user feel like they're in control. In this article we'll look at properties of the interface that make the user feel more in control and draw a few analogies with the motor car to see why these techniques work. Along the way we'll learn a bit about two important user-interaction concepts: the gulf of evaluation and the gulf of execution.
Some interfaces try to guess the users' next moves and perform the actions for them. Deciding to forecast users' intentions can be a tricky design choice when there's a possibility of guessing wrong. For example, some devices automatically change scale. Many digital voltmeters and oscilloscopes automatically switch from displaying millivolts to volts as the amplitude of the signal increases. This feature means users won't need to think about selecting the correct voltage scale. The problem is that the scale may change without the user noticing it. If the user doesn't spot the change from V to mV when the voltage drops, he might think he's looking at 6.5V when he's actually looking at just 6.5mV. The units indicator is often much smaller than the digits themselves, making it more likely to misread the units than the value. If users had to manually switch scales, they would be in control and know the current scale because they'd chosen it.
The same logic applies to graphs and charts in other applications. Many of the medical monitors I've worked on can display graphs that indicate the pressure and flow characteristics of a patient's breath. It's possible for the software to always select a scale that allows the image of one breath to fill a large portion of the display. The problem with this is that as a patient changes from taking small breaths to large breaths the screen changes scale to compensate. Once the smaller breaths have scrolled off the left hand side of the display, the large breaths now appear about the same as the small breaths did a few moments earlier. The user (a doctor, in this case) has to notice that the scale has changed in order to realize that the patient's breathing characteristics have changed. A significant change in patient condition may therefore go unnoticed by the physician.
Some designers like the approach of constantly adjusting the scale to match the incoming data because it saves the user the work of adjusting the scale manually. My view is that the best solution may be to make the scale-changing mechanism easier to use instead of trying to automate it. Alternatively, an “AutoScale” button can instruct the device to switch to whatever scale suits the incoming data. That way the user decides when the action happens without having to work out the ideal scale for the current conditions.
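To make the one-shot idea concrete, here's a rough sketch of an autoscale routine. The scale table, units, and function name are invented for illustration; no real instrument's firmware is implied:

```python
# Full-scale ranges in millivolts; table invented for illustration.
SCALES_MV = [1, 10, 100, 1_000, 10_000]

def autoscale(reading_mv, scales=SCALES_MV):
    """Pick the smallest range that contains the reading.

    Runs only when the user presses the AutoScale button, so the
    displayed scale never changes behind the user's back.
    """
    for full_scale in scales:
        if abs(reading_mv) <= full_scale:
            return full_scale
    return scales[-1]  # reading exceeds every range: clip to the largest
```

Because the routine runs only on an explicit button press, the user always knows which scale is current: it's the one he just asked for.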
Some designers believe that the more they do for the user, the happier that user will be. This is not always the case. Would you prefer to have someone tie your shoelaces for you or would you prefer to be able to tie them yourself? Most people like the feeling of control and self-reliance. This is why many enthusiastic drivers choose a stick shift over an automatic transmission–they want to be in control of exactly when those gear changes occur.
In some medical ventilator designs the user changes settings in a two-step process. First, the user adjusts a patient flow setting, which alters the pressure delivered to the patient. Second, the user adjusts the pressure alarm level to suit the new situation. If the user forgets to change the alarm level the alarm activates because the new flow setting indirectly changes the pressure. The user then realizes that the alarm level needs to be adjusted.
In one design we eliminated the user-settable alarm level by allowing the machine to calculate appropriate pressure alarm levels based on the patient flow setting. This change dramatically reduced the number of keystrokes required to make a change and reduced the cognitive load on the users since they only had to think about one setting instead of a setting and an alarm level.
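As a sketch of the derived-alarm approach, the routine below computes the alarm limit from the flow setting. The linear pressure model and all the constants are invented for illustration and have nothing to do with any real ventilator:

```python
def derived_pressure_alarm(flow_lpm):
    """Compute a pressure alarm limit from the patient flow setting.

    Illustrative only: a made-up linear pressure model (cmH2O) plus a
    30% margin before the alarm trips. Not a real ventilator's formula.
    """
    expected_pressure = 5.0 + 0.4 * flow_lpm  # invented model
    return round(expected_pressure * 1.3, 1)  # 30% headroom
```

The user touches one control, and the alarm limit tracks it automatically.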
But our new design didn't receive the widespread approval we'd expected. Many users complained that they preferred setting the alarm level manually. Setting two independent values reduced the chances of a mistake, they felt. If they'd accidentally set a flow of 50L/min instead of 5.0L/min, for example, they'd notice their mistake when setting the alarm level. Setting two values also eliminated the hazard of a child visitor mischievously pressing some buttons on the device. If this mis-user made a change to the flow setting, the pressure alarm would sound, attracting staff attention and scaring away the meddlesome kid.
In many cases reducing keystrokes and automating parts of the user's job are useful, but bear in mind that you may have also reduced the user's perceived level of control.
Similar issues have been raised in aircraft cockpit design. As technology allows designers to automate more of the pilot's tasks, the pilot can feel relegated to mere second-in-command. This in turn reduces the pilot's “situational awareness,” which in turn reduces his ability to respond to an emergency situation. When an emergency occurs the pilot might not be fully in tune with the aircraft's current behavior because he wasn't controlling it when the emergency started. Some cockpit designs now seek to balance the work between the pilot and the automated system to ensure that the pilot is kept in the loop. The more control the pilot has during normal flight the more capable he'll be when he has to maintain control in an emergency. It also improves the chances that the pilot will spot trouble coming before it turns into an emergency.
PC applications are prone to give users too many configuration options. For example, it seems like I can reorganize the menus on Microsoft Word in a million ways. The vendor might try to convince me that these options allow me to personalize the product, but really it just means that if someone else sits at my computer they can't use my copy of Word. The programmers would have served their audience better by designing a good interaction experience and then not allowing the user to change it. These armies of user options aren't common in consumer products–with the exception of cell phones, which are by nature very personal items. Still, I regularly see medical and industrial equipment that provides far more configuration options than the user really needs.
To draw on the earlier driving example, would you buy a car that allowed you to swap the brake and accelerator pedals? Technically, this wouldn't be difficult to implement, and the salesman might even convince you that changing pedals would prevent fatigue in your right leg! Keep this example in mind the next time you're tempted to add gratuitous options for configuring the user interface. Most users want to get on with using the product, not spend time tweaking the interface that the programmers should have gotten right the first time.
One exception to this rule is sound volume. Beeps and sound effects coming from your device may disturb other people in the vicinity, so it's important that the user can control the volume or silence the device completely to avoid becoming a source of nuisance noise for others.
The gulf of evaluation
When deciding on a course of action, a person collects the required data and uses it to make the decision. No matter the source of the data (electronic device, mechanical tool), the person goes through a process, often in ordered steps, to arrive at the answer. Even when a device produces the data, the user may still have to travel a mental distance between the data presented on the device's interface and the information actually needed for the decision. The user must make some mental effort to derive useful information from the raw data on the display. The “length” of this mental distance is the concept known as the gulf of evaluation.
If a GPS locator can tell me my longitude and latitude, I might be able to use that data to establish where I am on a map. Adding a street name might help me decide whether to turn left or right. A map display further reduces the amount of work for me, the user. The gulf between the data (latitude and longitude) and the answer (turn left or right) has been significantly reduced. We can take this a step further. If the GPS system knows my destination, it can simply tell me when to turn. The whole map-reading step has been passed from the user to the device. Again, the amount of evaluation performed by the user has been reduced.
In this example, reducing the gulf might be very expensive if the device didn't already have a display and a mapping database. In other cases the resources are already available and all that's needed is to think about the problem from the user's point of view.
Another example from the desktop PC is the evaluation that takes place when you read file names. If I'm looking at a directory full of photographs I may be trying to guess from the filenames who and what is in the picture. Most file managers now reduce that evaluation step by displaying a thumbnail of the picture alongside its name. The user's evaluation is now much quicker and more accurate than any possible evaluation based on file names alone.
Similar mappings exist on consumer products. Older TV sets would only show you the channel number you were watching. You had to remember that the comedy station was on channel 5. (The problem is much more complex in Europe than in the U.S. because European stations are not assigned fixed channels, so “Channel 5” on one TV wasn't necessarily the same station as on another TV.) Newer televisions allowed the user to enter the name of each broadcaster when the TV was being tuned so that Comedy Central would be displayed when the user selected that channel. Again, the processing performed by the user has been reduced. Digital program guides have reduced the workload even further since channel names are now transmitted to the TV set (or set-top box) along with information about today's program timetable. All of this progress reduces the number of things the user must remember, or the amount of information that the user must find from some other source.
In some cases you have to decide exactly how much information to reveal to the user to support his evaluation of the system. A device that controls temperature will detect variations as the control loop allows the temperature to rise a little above and then a little below the target. By displaying the temperature to one decimal place and tracking it in real time these variations might be visible–and confusing–to the user. A better alternative is to display the temperature only to the nearest whole degree and filter it over time. Averaging the value will disguise the minor rises and falls of the control loop. Which option you choose depends on the user you're serving. If the user has a technical background and is likely to care about the minute variations in temperature, by all means show the real-time, accurate data. For more novice users, there's no reason to confuse them with too much detail. Their brain cycles may be better spent on some other part of the system. In some cases the instantaneous and filtered value can both be shown, but this should only be done where you've established that users will actually use both–otherwise it forces them to decide which measurement they will use, increasing the user's mental load rather than reducing it.
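A minimal sketch of the novice-friendly option, assuming a simple exponential moving average; the smoothing factor and class name are illustrative choices, not recommendations:

```python
class DisplayTemperature:
    """Smooth a noisy temperature reading for a novice-facing display.

    Exponential moving average plus whole-degree rounding; the smoothing
    factor is an illustrative choice, not a recommendation.
    """
    def __init__(self, alpha=0.1):
        self.alpha = alpha      # 0 < alpha <= 1; smaller = smoother
        self.filtered = None

    def update(self, raw_celsius):
        if self.filtered is None:
            self.filtered = raw_celsius          # seed with first sample
        else:
            self.filtered += self.alpha * (raw_celsius - self.filtered)
        return round(self.filtered)              # whole degrees only
```

A control loop rippling between 69.6 and 70.4 degrees displays a steady 70, hiding the oscillation that would only distract a novice.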
This “level-of-detail” decision is akin to tuning the suspension for a car. You can soften the springs and shock absorbers to disguise bumps on the road, and that's what a lot of users want. On the other hand the “power user” might want to feel all those bumps to allow better control, especially if that driver is in a high-performance car on a race track. Giving your user greater control, but a bumpier ride, is a decision that should be made after careful analysis of your audience.
Gulf of execution
Now that we've looked at the gulf of evaluation, let's examine its counterpart, the gulf of execution. When the user decides on a course of action, he executes the action by following a series of steps, like a mapped route, from the starting point of the action to the desired end result. If a driver decides to turn left, the physical action will be to pull his left hand lower and to raise his right hand in order to rotate the steering wheel, which then makes the car turn to the left. Spelling it out in this detail makes it sound like there is a lot to think about when turning left. In practice, the gulf of execution is so small that most drivers internalize this mapping the first time they sit in the driver's seat.
As the mappings from action to result become more complex in your design, the gulf of execution will increase.
Reducing the gulf of execution is often a natural extension of reducing the gulf of evaluation. In our TV example we mapped the channel number to the station name so that users would know what they were watching while flipping through the channels. Evaluation is about knowing what you've got.
Let's say that our user has made the decision to watch Comedy Central. He needs to press 5 on his remote, but to make the decision to press button 5, he has to perform a mental mapping from the station name to the channel number. This is the same mapping that we used earlier but in the opposite direction. It can be trickier to make the user aware of this mapping because multiple stations must be displayed at once to allow the user to choose. In practice this mapping is taken one step further in digital on-screen program guides. Before the user maps station name to channel number, he'll usually have decided what specific program he wants to watch, so the complete mapping is from program name to station name to channel number. The program guide can show the programs currently available and allow the user to select the program without having to think about the station name or the number.
The closer the user interaction is to the initial decision the user makes, the smaller the gulf of execution. The gulf between deciding to watch The Simpsons and picking that title from a list is small. The gulf is larger if the user has to establish that the program is on Fox and then remember that Fox is on channel 24 of his set.
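The two designs can be sketched as lookup chains. The channel lineup here is invented for illustration; the point is that the guide collapses two mental lookups into one direct selection:

```python
# Channel lineup invented for illustration.
PROGRAM_TO_STATION = {"The Simpsons": "Fox", "The Daily Show": "Comedy Central"}
STATION_TO_CHANNEL = {"Fox": 24, "Comedy Central": 5}

def channel_via_mappings(program):
    """The long route: program -> station -> channel. These are the two
    lookups the user would otherwise perform in his head."""
    return STATION_TO_CHANNEL[PROGRAM_TO_STATION[program]]

# The program guide collapses the chain: the user picks a title and the
# set does the rest.
GUIDE = {title: STATION_TO_CHANNEL[station]
         for title, station in PROGRAM_TO_STATION.items()}
```

With the guide, the mapping still exists, but it lives in the device instead of in the user's memory.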
Time delays can contribute to the gulf of execution. If you press a button and there's a delay before some action takes place, you may be left wondering if the keystroke and the resulting action were even associated with one another. In some cases the delay is part of the system requirements. Many home alarm systems don't arm at the time that the command is given because the user needs time to exit the building. The time delay between issuing the command and its execution is a time of uncertainty for users. They're likely to wonder just how long they have left or if the command was definitely accepted and will be executed. If the device displays a prompt that says, “Arming . . .”, the ellipsis (dots) might suggest that something is to follow, but the user doesn't know whether that will be seconds or minutes later. This uncertainty reduces the user's feeling of control. Users will feel much more in control if the prompt includes a countdown timer to let them know exactly how much time is left. This can be reinforced by sounding a sequence of beeps that increase in frequency as the deadline approaches. This is especially useful since the user will most likely not be able to read the display as they turn to exit the building.
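One way to sketch such a beep schedule is to shrink the interval linearly as the deadline approaches. All the timings and the function name here are illustrative, not taken from any real alarm panel:

```python
def arming_schedule(exit_delay_s, start_interval_s=1.0, final_interval_s=0.25):
    """Return (beep_time, seconds_remaining) pairs for an arming countdown.

    The interval between beeps shrinks linearly as the deadline nears,
    so the user can hear the urgency grow. All timings are illustrative.
    """
    events, t = [], 0.0
    while t < exit_delay_s:
        remaining = exit_delay_s - t
        events.append((t, remaining))
        fraction_left = remaining / exit_delay_s  # 1.0 at start -> 0.0 at deadline
        t += final_interval_s + fraction_left * (start_interval_s - final_interval_s)
    return events
```

Even a user walking away from the display can judge the time remaining from the accelerating beeps alone.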
This last example is so simple that it barely seems worth presenting here, but an alarming number of products settle for the simple “Arming” prompt and assume the user has read the appropriate section of the owner's manual that says the unit arms after 15 seconds. Using a product should be an exercise in controlling the surroundings, not a test of how carefully the user analyzed the documentation.
Multiplexing and modes
Imagine you could press a button that converted your car's pedals into radio tuning and volume controls, and that releasing the button returned the pedals to their normal function. As bizarre as this sounds, many devices reuse the same buttons for many features in their misguided quest to reduce the button count.
Don't be afraid to add lots of controls. The driver of a car has a lot of controls yet few drivers claim to be baffled by them. That's because each control has its own purpose. When you're adjusting the radio, you pay no attention to the air conditioning. In this way the majority of the controls can be ignored at any given time. There's no particular need to reduce the number of those controls and doing so wouldn't benefit the user. BMW famously tried this with the iDrive system that collapsed many of the entertainment, climate-control, and navigation controls into one mouse-style device. It was universally hated by car reviewers who described the radio and climate-control functions as almost impossible to use.
The iDrive fiasco was an exception in the automotive world. Cars do not, in general, suffer from the problem of cramming too much control into too few buttons. Many embedded devices do suffer from this problem. Cell phones shoehorn a text keyboard into a numeric keypad. While this keeps our cell phones tiny, it's a usability disaster. Its use has become so widespread that familiarity can lead a designer into thinking that it is a reasonable general-purpose input mechanism. I recently reviewed a GUI design where the user had to enter text occasionally via touch screen. Instead of displaying the alphabet with one button per key, the designer displayed a numeric keypad with three letters below each number. This allowed the user to enter text on the on-screen numeric keypad. Although displaying a full keypad needs more screen real estate (or smaller buttons) it's always going to be a more usable solution than the numeric-keypad approach.
Some cell phones include a fold-out keyboard that does provide one key per letter. This allows users to spend their time thinking about the message they want to send rather than counting the number of times they've pressed the 3 key in order to get the letter “f”. The gulf of execution between the user's decision to insert a letter and the action taken to make that letter appear has been reduced in two ways. The physical activity, in terms of the number of keystrokes, is smaller. And the cognitive load has been reduced because the user doesn't have to figure out a complex sequence.
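The keystroke saving is easy to quantify. The sketch below counts multi-tap presses using the standard letter layout; for simplicity it ignores inter-letter pauses and non-letter characters:

```python
# Standard multi-tap letter layout on a phone keypad.
KEYPAD = {"2": "abc", "3": "def", "4": "ghi", "5": "jkl",
          "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz"}

def multitap_presses(text):
    """Count key presses needed to type `text` with multi-tap entry.

    Ignores pauses and non-letter characters; a one-key-per-letter
    keyboard would need exactly len(text) presses.
    """
    presses = {letter: position + 1
               for letters in KEYPAD.values()
               for position, letter in enumerate(letters)}
    return sum(presses[c] for c in text.lower() if c in presses)
```

Typing “hello” costs 13 multi-tap presses versus five on a one-key-per-letter keyboard, before you even count the presses wasted correcting overshoots.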
While mechanical and industrial designers will try to reduce key count to improve the look of the device and maybe reduce cost, the result is often a reduction in usability.
Don't go cheapskate on the hardware. Physical control surfaces are expensive but worth it, unless you've tried driving a car with a joystick and want to argue otherwise. When considering dials versus up and down arrow keys there is a cost versus usability tradeoff. On a membrane panel the up and down keys will be cheaper, and they have other attractive properties such as resistance to dust and liquid spills. But a dial will almost always be preferred by the user. As the user turns a dial the size of the change applied to the numeric value is proportional to the physical rotation performed. This provides a very natural mapping which reduces the gulf of execution.
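The proportional mapping takes only a couple of lines to sketch; the step size and limits here are illustrative assumptions:

```python
def dial_to_value(current, detents_turned, step=1, lo=0, hi=100):
    """Map rotary-dial detents to a value change proportional to the
    rotation, clamped to the legal range. Step and limits illustrative."""
    return max(lo, min(hi, current + detents_turned * step))
```

A quarter turn always means the same change, so the user's hand learns the mapping without the eyes having to supervise every increment.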
Buttons: do what I say
The gulf of execution can be reduced by limiting the number of widgets the user must manipulate to execute an action. Consider a GUI with a search feature. Next to the Search button there are two radio buttons that indicate Forward or Backward. The user can select one of these and then press the Search button. Of course, if the radio buttons are already in their desired state, the user doesn't need to select either one of them and can go straight to the Search button.
Having two decisions to make is not a direct way for the user to express his intentions. We could replace this with an interface where the gulf of execution is smaller. If we provide a Search Forward button and a separate Search Backward button, then the user only has to press one button. Because the button says exactly what the user wants to do, the feeling of control is greater. The user doesn't have to work out what the button will do based on the state of a second widget. This interface will probably take about the same amount of real estate on the GUI. Previously we had three clickable items and now we're down to two, which means that the clickable items will be bigger and therefore easier to select.
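The two designs can be sketched side by side, with a toy search function standing in for the real one (all names here are invented for illustration):

```python
def search(doc, term, pos, forward=True):
    """Toy stand-in for the real search: index of the next or previous
    match of `term` in `doc` relative to `pos`, or -1 if none."""
    return doc.find(term, pos + 1) if forward else doc.rfind(term, 0, pos)

# State-plus-action design: the button's effect depends on a second widget.
def on_search_clicked(direction_state, doc, term, pos):
    return search(doc, term, pos, forward=(direction_state == "forward"))

# Direct design: each button says exactly what it will do.
def on_search_forward(doc, term, pos):
    return search(doc, term, pos, forward=True)

def on_search_backward(doc, term, pos):
    return search(doc, term, pos, forward=False)
```

In the direct design there is no hidden state: the handler's name is the user's intention.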
PC interfaces regularly make the mistake of not writing the action on the button. Figure 1 is a classic case of enlarging the gulf of execution by forcing the user to read a question and then perform a mapping from his answer into a choice between the Yes and No buttons. This popup would be far easier to use if the buttons were larger and contained the text “Save and Exit” and “Exit without Saving.” Arguably the third button could contain the text “Do not Save. Do not Exit,” but “Cancel” would probably be understood in this context.
Time and cost limits may prevent you from producing a device that makes users feel like they're driving a Porsche. But small improvements in the interface can be the difference between a user who feels he's in control and one who feels he's being controlled–or out of control.
Niall Murphy has been designing user interfaces for over 12 years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Murphy teaches and consults on building better user interfaces. He welcomes feedback and can be reached at . Reader feedback to his articles can be found at .
Good article. But some remarks: the software business is young compared to the automotive industry, with which you compared the user interface experience, so the automotive industry has had much longer to learn from its mistakes. More important, automakers have always taken the line that “my car is operated the same as the competitors', so there is no problem switching to this brand,” while the software business wants you to get hooked on its weird user interface so that it becomes difficult to switch to another platform. The IT branch is also full of know-betters who try to convince you that their idea of user interaction is superior to everyone else's.
But I must say, the automotive industry is quickly adapting; the only things really similar in all cars are the steering wheel and pedals. Switching the lights on can be a button on the dashboard, a rotary knob on the dashboard, a switch or rotary on the steering column, or simply always on. The ignition key can be on the steering column, near the handbrake (Saab), or a smartcard with a starter button. And the really smart systems like BMW's iDrive are complex, but that's IT again.
– John Janssen
AM Software Engineering
Voice mail! I think a standard would be nice. Systems use the '4' or '7' or … key to delete a message. Your phone may have access to 2, 3 or more voice mail systems, all with different key mappings.
If all systems used one mapping, that would be great. Or a choice of mappings, so you could get them all the same.
Or best, one voice mail system.
– Tim Flynn
During a recent product upgrade, I was told that customers didn't like the dial on our existing product to change values. It is speed-sensitive, to increase by either 10 or 100 for one revolution, and has a good-sized aluminum knob with a finger indent so it spins quite well. Some values range from 1 to 10,000 and require an input resolution of 1. If a customer wants to increase a value by 200, they usually have to twiddle the dial back and forth to get the exact value. The new product has up/down/left/right/select buttons. Left-right selects the digit, up/down increments and decrements (with rollover to next higher digit).
My concern with a numeric keypad involves max-min limits and the resolution of some values. Some input values have resolutions of 2, 5, or 25 (ignoring decimal places). For example, only 0, 5, 10, 15, … may be legal values. If someone enters “7”, what is the proper reaction–round up, or round down, or change to a warning screen and force the user to re-enter a value? A similar issue arises with max-min limits, probably the best solution being a warning screen showing the limit and allowing the user to accept the limit value or change the value and enter.
With the left/right/up/down numerical entry, values can change by 1, 2, 5, or 25 with a single push of a button, a value of 7 cannot be entered, and values stop at limits. When the increment is 25, two digits are highlighted, so the change to two digits with one push is more obvious.
Any non-numerical values allow the user to select from a text list, e.g. Fast/Medium/Slow using the up/down buttons.
– Ed Barney
Sr. Elect. Eng.
North Springfield, VT
“This doesn't happen in the automotive industry, for example.” You mean didn't. Try out a new car with a new integrated navigation/audio system. Electronics is already bringing down the intuitiveness of the auto.
– Grant Beattie
I admit that the basic controls on cars are really nice, once you are used to them. However, driving is a bad example of usability and control. Just consider how long you spent learning to drive and how many people had to coach you in the process. All of that effort went into controlling just two settings, i.e. direction and speed.
For most any embedded device a similar learning curve would be completely unacceptable. Granted, much of the complexity in learning to drive is in recognizing situations and learning what response is appropriate rather than learning how to make the car execute that response. However, that just reinforces that this is a bad example.
– Virgil Smith