Building a UI involves many tradeoffs. Making the right choices can make or break your system.
The implementation of a sophisticated user interface (UI) brings a number of interesting challenges. It often leads to tradeoffs between the cost and difficulty of implementation versus the quality of the user experience. In the consumer space, the UI can make or break a product and have a lasting impact on a vendor's product line.
Let's first look at the different types of facilities and mechanisms found in today's typical UI. Processing a user's interactions with buttons and switches is quite straightforward. It's simply a matter of polling an interface frequently enough or responding to an interrupt. There are a few subtleties to keep in mind. For example, a switch or a button being pressed may be seen as a transition from “1” to “0,” not vice versa as might be expected. Also, switch contacts have a tendency to bounce and, without care, a single press can be interpreted as a sequence of actuations.
Lamps are equally simple. They're typically illuminated by setting a bit in a device register. This presents a small challenge: such registers are typically “write only,” so the software can't read back the current state; a shadow copy of the data must be kept in RAM and used with care. Flashing lamps require some type of timing mechanism. This might be handled by a simple clock interrupt service routine, or a real-time operating system (RTOS) may be employed for the main application, which can accommodate the timing requirements. Simple alphanumeric displays are handled in much the same way as lamps.
On devices with graphical screens, whether full size like a TV or small like a cell phone, the functionality may be defined quite precisely, and some common features may be leveraged. Although all of these mechanisms may seem simple in many respects, developers nonetheless face some serious challenges when working at the application level.
Today's devices typically feature a wide range of functionality, often implemented in a number of different applications. A feature-rich cell phone, for example, will have a Web browser, messaging, media playback, and an address book alongside the traditional phone functionality. It's not uncommon for each application to be developed separately, possibly by different companies. The result is that each application has its own UI; in every application, the UI is doing much the same work as the others, but in slightly different ways and often with a different look and feel. Unfortunately, this means more development work (creating lots of UIs), and the user experience is compromised because the UIs aren't consistent.
Another challenge at the application level is that the UI takes significant programming skill and effort to implement, and still more to maintain and adapt to future needs.
Finally, the design and implementation techniques employed at the application level offer limited opportunities to add vendor customization without excessive programming effort. This is a capability demanded by service providers for wireless networks or cable TV operators, for example. These companies view the phone or set-top box as a differentiating extension of the unique service they provide and want to reinforce this message at every opportunity.
The good news is that all three challenges can be addressed by taking a different approach to UI development.
The way to create a UI for a complex embedded system is to implement it as a separate software layer, which may be hooked into all the device's application components. Figure 1 shows a system block diagram illustrating how such a UI engine might work. In this context, the software provides the display graphics and user interaction control. The developer simply configures the UI using XML.
The advantage to the developer when using a UI engine is the simplicity of the methods involved in creating a sophisticated, attractive UI. Once all interface behavior has been delegated to the engine, many aspects of the UI's design (including its branding, look and feel, and menu structure) may be configured with small parametric adjustments that require no coding or scripting. Simple declarative XML files, which are both extensible and human readable, provide an ideal way to specify such a parameterization.
This more rationalized approach to UI development treats each UI menu as a well-defined state machine. A developer need only define what states are permissible, how each onscreen element should look in each state, and what interaction events (such as key presses or stylus taps) should trigger changes from one state to another. The UI engine implements all the logic required to morph the UI's appearance whenever a state change occurs, for example, by scrolling a series of items by one position, or by switching the active focus from one item to another, when a particular key is pressed.
By automating such UI logic, much of the complexity of constructing sophisticated new interfaces is avoided. The substance of the task can be reduced to describing the UI as a series of visual snapshots (or layouts), each representing a single permissible state. Each layout merely defines how every onscreen element should appear (be it text, bitmap, or other visual content) when the UI is in the corresponding state. This definition requires no conditional branching, looping, or programming construct of any kind. In fact, completely new UI designs can be constructed using simple declarative XML formats.
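Such a declarative description might look something like the following fragment. The element and attribute names here are invented for illustration; each UI engine defines its own schema:

```xml
<!-- Hypothetical schema: one layout per permissible state, no logic -->
<menu id="main">
  <layout state="browser-focused">
    <item id="messages" icon="msg.png" caption="Messages"/>
    <item id="browser"  icon="web.png" caption="Browser" focus="true"/>
  </layout>
  <transition on="KEY_DOWN" from="browser-focused" to="messages-focused"/>
</menu>
```

Note that the file describes only how each state looks and which events connect the states; all behavior is left to the engine.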
Even more sophisticated UI layouts are possible with this type of platform approach because the visual appearance of any onscreen element (such as an individual icon or caption) can be controlled to a high degree.
Support for 3D rendering
Some available engines enable the layout of an interface's elements to be specified in three dimensions. Many visual effects (such as smooth fading, scaling, and zooming) can be achieved even if the target hardware only has basic 2D graphics application programming interfaces (APIs). However, the 3D capabilities of these engines are most evident, where OpenGL/ES is available, in full rotation, solid models, texture mapping, and lighting effects. OpenGL/ES is the open standard API for rendering 3D graphics on embedded systems. OpenGL/ES support can be added to a device through either a software library or, for maximum performance, a hardware-accelerated graphics processor.
Although OpenGL/ES is very powerful, its API operates at an extremely low level, being concerned with the frame-by-frame rendering of millions of individual polygons, rather than the more abstract concepts we normally associate with interface design, such as text, icons, and scrolling. This explains why OpenGL/ES has to date only been used widely in 3D games. Only with an OpenGL/ES-aware UI engine does it become practical to bring the same graphical power to bear across a device's application interfaces.
Given that a sophisticated device may run a wide range of applications, there are three ways a UI engine may be deployed:
• The UI engine can be applied to all the applications in the device immediately. This is the best approach for a brand new system, as it simply requires developers to take a unified approach. It's a more challenging prospect if all the applications already exist, because existing code must be modified to use the UI engine's API. However, this approach would yield the best user experience, as it results in all the applications having a common look and feel and any customizations (fonts, backgrounds, “skins,” and so forth) apply systemwide.
• Developers can take advantage of the UI engine whenever possible. With this approach, new applications, and those being updated, acquire the rationalized UI. Over time, the complete system will be encompassed.
• The UI engine can be used, with minimal effort, to deploy service provider-specific facilities in an otherwise neutral device. This could take the form of an additional application (for example, a portal to promote downloadable content accessible across a network), or an encapsulation (and “badging”) of facilities already present in the device.
Colin Walls has over 25 years of experience in the electronics industry, largely dedicated to embedded software. A frequent presenter at conferences and author of two books on embedded software, Walls is a member of the Embedded Systems Division at Mentor Graphics.
Geoff Kendall joined Mentor's Embedded Systems Division in late 2006. He is responsible for the company's Inflexion Platform UI product. Kendall holds a PhD in artificial intelligence from the University of Liverpool, UK.