
It’s Worse on a Browser


User interfaces are always less usable when they're served up via HTTP. But there are ways of improving the user's experience.

Alan Cooper uses the slogan “It's worse on a computer” to convince developers to make computer applications more productive than their manual predecessors. One example is an application intended to replace the paper diary. Once you move it to a computer, you lose the ability to turn down the corner of a page, or insert a bookmark. You also lose the ability to carry it around in your pocket, assuming the application is running on a conventional desktop PC.

The developer can compensate for these losses by providing other facilities that were not possible in the original, like being able to search for a name. When the developer applies improvements, it is important to remember that the starting point of the computer version is worse than the mechanical version; that many of the extra features available on the computer are merely compensating for the fact that you have already made the user's life more difficult. A computer diary weighs 40 pounds and takes five minutes to boot up.

I mention this because user interface transformations often have to take a step backward before they move forward. In the current enthusiasm for web-enabling the embedded world, we should acknowledge that moving the interface to a browser makes the usability of the interface worse than a more conventional GUI, or the set of mechanical controls that might be available on the body of the device. A browser may be available from more locations, but that is a functional improvement that does not necessarily balance the loss of usability. In many applications, the design is changing from mechanical to graphical and from graphical to browser-based in one product generation, and it is important to recognize that those are two distinct evolutions. If you end up with a bad interface, you'll need to figure out whether it's worse because it's become graphical or because it's on a browser.

In this column, I'm going to draw comparisons between browser-based interfaces and custom applications written for Windows, Macintosh, or other desktop environments. These comparisons are relevant to embedded designers because a desktop application can provide a better front end to your Internet-enabled embedded device than a browser. You may have to do a little more work, and you add an initial user investment of installing that front end, but, in terms of pure usability, the custom application will always be a winner.

If writing a custom application as a front end is not an option for you, and you have to provide a web-only interface on your product, then read on, because some of these comparisons may help you avoid common pitfalls of browser-based interfaces.

How much worse?

To give an example of how much worse a browser makes the interactive experience, consider the difference between reading e-mail with Outlook, Free Agent, or some other desktop package, and reading mail with a web-based service such as Hotmail or Yahoo. While the web-based interface makes the messages more accessible on the road, it simply cannot compete on usability. Both interfaces have the same amount of screen real estate available, and both are driven by a mouse and keyboard, but there is no doubt that the user experience is worse on the browser. This is not the fault of the innovative folk who brought us web-based mail services (for which I am endlessly grateful); it is a fundamental limitation of web browsers as they currently exist.

The Perforce source code control product is another example: it can be accessed from dedicated client applications on MS-Windows or X-Windows, and also through a web interface. The web interface is not as usable as the dedicated clients, but it provides access from environments and locations that the client applications do not support.

Turning the page

The page metaphor forced on the browser user is good for just that: browsing. If the user is trying to interact with or control the application, the page metaphor does not work so well. The designer has little control over the order in which pages are visited: entering an exact URL lets the user jump to a page in the middle of a planned sequence. On a conventional GUI, the designer can gray out options that make no sense in the current state, and force actions to happen in a certain order; on a browser, neither is possible.

Another usability limitation of the web is the multithreaded way in which some browsers behave. Once you select a link, or perform an action, the current page is still available while the browser waits for the response to the latest request. If it takes a number of seconds to respond (as it often does on the World Wide Wait), then the user has the opportunity to make an alternate request. The user may not then know whether the first request was ignored. This model is fine for browsing read-only information, where a repeated request or a cancelled request has no consequence beyond the image on the screen. If, on the other hand, you are applying control commands, the user interface must be given the opportunity to reflect the result of the request.

The forward and back buttons on the browser create a slew of usability problems. Say I view a screen that tells me the temperature data being read by a device. I navigate elsewhere, and then use the back button to return to the temperature screen. Am I now viewing the same data as was shown the last time I was here? Or did my browser reload the data? The developer could have been clever enough to tag the page as immediately expired. Leaving aside the fact that some methods of expiring the page are ignored by some browsers, we have still not solved the usability problem. The user has no way of being sure that the data was updated (assuming this is the desired behavior). The user knows that pages can be cached, and therefore will consider it a possibility any time he views a page with the back button. The user cannot trust the data, even if the data is always up to date, because the user is transferring concepts that are common on other web pages. It is this sort of uncertainty that makes a browser an unsuitable medium for controlling devices.

For a controlling user interface, the concept of caching should not be part of the user's mental model. However, it is unavoidable on the web.

Tricks and widgets

When we consider the family of widgets available on the browser, it is a lot smaller than the set available on any of the popular desktop GUIs. Most commands on the desktop are available through pull-down command menus, which do not exist in any standard form on the browser. The browser allows pull-down menus, but these are designed for selecting from multiple options, not for commands. Many web sites use pull-down menus for commands anyway, which confuses the user: a pull-down used for selection will not jump to a new page, but a pull-down used for a command will reload the page (or load a new one). Another way to represent a menu is with a set of links (either text links or icons), but with this format there is no visual distinction between a command that performs an action and simple navigation, and that distinction is important to the user.

Widgets on the browser get misused in some awful ways. I regularly see drop-down menus for entering the day of the month on which I was born; a desktop application would simply let me type it in. The drop-down menu makes the web designer's life easier, so they are everywhere on the web, and they will appear in your product too, because the development environment encourages it. If the designer allowed a free-form number to be entered, more validation would be necessary on the server side, and that is extra work.

This leads us to the crux of the issue. Designing a mediocre interface for the web is much easier than writing a custom application, but building an above-average interface on the web takes an awful lot more work than a custom interface does.

Controls such as tabs for selecting sub-pages can be implemented in a browser if the web designer builds them by hand, but because they have to be designed from scratch, they will not conform to any standard established elsewhere.

Keyboard accelerators are very useful in a desktop application, but cannot be used in a browser, so we lose a great opportunity for time-saving shortcuts. Shortcuts can also be provided by the right-click menu in a custom application, but this menu cannot be modified by a web page, so another opportunity is lost.

A number of other challenges in web page design are highlighted in Jakob Nielsen's Alertbox column.

Getting connected

It is possible to have a remote graphical interface over TCP/IP that is not browser based. X-Windows did this in the Unix world for over a decade before the Web was invented. In an embedded context, the Photon GUI from QNX makes this possible in an RTOS environment. Either of these systems allows an embedded application to request a graphical operation such as drawLine or createMenu, and the actual rendering on a display will happen somewhere else on the network. The graphics libraries pass the request over TCP/IP to another processor, which takes responsibility for the display. This is analogous to the way in which a web server passes a web page to another computer, where the browser performs the display. The difference is that X-Windows and Photon allow the display to be controlled on a call-by-call (or widget-by-widget) basis, rather than having to deliver an entire page at a time, as a web server does.

The assumption in the X11 case is that the receiving end runs an application, called an X server, capable of interpreting the graphical requests and displaying the results.

The X-Windows environment does not support the hypertext linking ability of HTML, but that is rarely the vital part of the interface on an embedded device. What you want is control. The control items in the X-Windows world, such as sliders, buttons, text fields, checkboxes, and radio-buttons are mature and, more importantly, they're the fundamental building blocks of the interface.

On a browser, controls are anomalies that seem to have been added as an afterthought to a system that is designed to display information, and not to facilitate its control or manipulation.

The Web is a publishing environment, but our embedded devices beg to be controlled.

While the X-Windows solution is elegant and flexible, it involves too much overhead for most embedded targets. Asking end users to configure an X server on their desktop may be unreasonable, especially if they are PC-centric users.

To find a better solution for the desktop world, we should look a little more closely at how much information we are passing across the wires. If we make our VCR Internet-enabled, we will want to know what timing information it currently holds, and we will want the ability to change that information. The actual number of bytes such transactions involve is very small.

The overhead of putting that data in the setting of a web page is enormous. You could easily have 20 bytes of VCR timing information in a 1KB web page. Just because the web model allows us to put the design of the interface on the embedded device doesn't mean that we should.

The simpler option is to allow access to the data, and let the receiver manage the user interface. The data itself can be transmitted over a socket using a proprietary application-level protocol. If the data is complex, or you need to handle multiple client connections, placing a layer such as CORBA or RPC above the socket layer could prove useful, but such protocols simply give the programmer better control of the flow of data. They do not impose restrictions on the user interface as HTML does. If you are not interested in the complexities of these higher level protocols, but still wish to pass data across the Internet, basic sockets allow point-to-point communication of ASCII text.

One of the biggest advantages of specifying an interface without the overhead of a web page surrounding each nugget of data is that it is simpler to allow other devices, or other desktop applications, to interact with that data. If one device wants data from another device, it is very awkward and expensive to download an HTML page and then parse out the data values so that the receiving device can use them. This seems like absurd design, but I have seen it happen when the designers of the original device assumed that the rest of the world would only ever care about viewing their data through a browser, and did not consider that the data could have value to other devices, as well as to people.


A Windows application to provide a front end to an embedded device can be constructed in very little time using GUI building tools (C++ Builder is my own favorite, though many prefer Visual Basic). By doing this we put the processing load where we are more likely to have spare CPU cycles. We have the ability to update the user interface without having to alter the code on the device. A bug in the user interface is less likely to compromise our embedded device. As new spoken languages need to be supported, we can release interfaces for those languages. The embedded device needn't be aware of the new interface.

This approach has one major drawback: the interface is only applicable to one desktop environment. A number of toolkits allow applications to provide a portable user interface across multiple desktop operating systems, but they dramatically reduce the ease of development. One solution is to code multiple applications: one for Windows, one for Macintosh, one for Unix, and so on. Many developers simply follow the herd and do only a Windows version.

Another drawback is that you have to make sure the version of the front-end application is synchronized with the device being controlled. If a mismatch is detected, the user must be coaxed into downloading the latest front-end application from a central web site.

Don't forget the programmer

The programming environment can have an influence in another way, unrelated to usability. For an interactive interface, the HTML pages are effectively a program. However, they are a part of the program that is not compiled or syntax-checked in any way. This means that some mistakes that should be discovered statically can only be discovered through testing. A spelling error in an HTML link is one example. If a C program contained an equivalent error, say a reference to a dialog structure that did not exist, the compiler would complain and the problem would have to be resolved before an executable program could be produced. However, a web page with a broken link can be delivered, and the broken link will only be discovered when the tester (or user) clicks on it. Late detection of problems makes them more difficult to fix, and it is harder to be sure that none slip through the testing net.

Broken links are the simplest example. During some recent and painful experiences with JavaScript, I wished I was back debugging with a voltmeter and a few printf statements. Again it seems that the simplest web sites are easy to put together, but once they become complex, they have an array of failure modes that don't even exist in more conventional programming languages.

Java applets let us down

I would like to be able to conclude that Java applets provide the ideal solution, as a client-side program that has no deployment issues. Unfortunately, this is not the case. Incompatibility between browsers and JVMs means that it is quite difficult to deploy an applet that can even run most places, much less satisfy the run-anywhere mantra. If you follow the path of specifying which browser (down to certain security flags) and which JVM the user must run, you may clash with the requirements of another web site the user depends on. If you get past that, you are still dealing with a set of widgets inferior to those available to native applications, and the applet still has to run in the restricted environment of the browser. Java is a great language, but the applet model is one area where it does not solve the problem we are addressing here. We will need another generation of browsers before we can even hope that Java applets are the answer to usable client-side software.

Getting our devices talking

Many connected devices will use web interfaces because distributing an application to the users will prove too great an overhead, especially if the user interface is fairly simple. For more sophisticated interfaces, it is worth considering that a browser is not the only way to connect a device to the user.

It always annoys me when the popular press refers to the Internet when they mean the Web, and vice versa. The Internet was a huge and valuable source of information long before web servers existed; the Web merely made it more accessible to humans. I am in favor of making the data in embedded devices available to humans, but not to the exclusion of other devices that may require it, and I also believe that most of the best human interfaces to devices will be built without web technology. So let's get our devices connected and talking to each other, as well as talking to us.

Niall Murphy has been writing software for user interfaces and medical systems for ten years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Murphy's training and consulting business is based in Galway, Ireland. He welcomes feedback and can be reached at . Reader feedback to this column can be found at .

Return to August 2001 Table of Contents
