
Using capacitive sensor user interfaces in next generation mobile and embedded consumer devices

Designers working on small systems such as mobile devices, portable digital entertainment (PDE), remote controls, and digital cameras continue to face major challenges as these and other similar products increase in complexity and functionality. For example, current generations of cell phones are facing user interface (UI) and ergonomic issues due to a crowded keypad and touch screen combination.

A few years ago, these same systems were simple, well-defined, and required simple input devices such as keypads, buttons, or touch screens for handwriting and selection. However, today, cell phones are packed with sophisticated communications subsystems, while PDAs have evolved into full-fledged computers with Internet connectivity. Plus, there are newer mobile and portable products on the drawing board facing UI challenges as a result of their highly advanced, multi-functional designs.

As a result, designers are migrating toward the extra UI space resistive touch screens (RTS) offer. This particular sensor is a widely available commodity technology. However, even with the extra space RTS provide, embedded designers face new issues stemming from RTS limitations.

A resistive touch panel is a mechanical sensor with two layers of material, typically separated by air (Figure 1, below). The top layer is a thin, clear polyester film and the lower layer, glass. Pressing the top layer with a finger deflects it until it contacts the lower layer. Voltage at the contact point is measured, and the location is computed. After the finger is removed, the top layer resumes its original configuration.

Fig. 1. A resistive touch panel is a mechanical sensor with two layers of material, typically separated by air.

The two main challenges embedded designers face with RTS are: (1) the optics of the underlying display are severely impaired, and (2) durability is poor. Devices using RTS often fail in the field when they're dropped or the user presses the screen too hard.

Once the RTS panel breaks, the device's main hardware input is eliminated, rendering the device useless. The designer is also offered limited options. For example, RTS require a top bezel to protect the edges of the screen. The cost of creating a bezel opening and the bezel itself can double the cost of assembly. Moreover, RTS must be mounted on a flat surface and not beneath plastics.

Further, RTS have poor electrostatic discharge (ESD) resistance. They are prone to damage from condensing humidity, while temperature and humidity fluctuations can severely alter performance. RTS also expose fragile LCDs to greater risk. As for limited performance, RTS require a stylus for accuracy; they are inaccurate at the edges where most scrollbars and icons are located; and they require user calibration on a regular basis.

Rubbing Out RTS Limitations
A more efficient and reliable alternative is the use of a thin, transparent, capacitive sensor touch screen that embedded designers can place over any viewable surface for input and navigation. One implementation of this kind of capacitive sensor interface, called ClearPad, offers designers a solution to overcome RTS limitations.

The capacitive sensor module consists of a thin, transparent finger-sensing region bonded to a flex circuit tail which contains all of the sensing electronics. As shown in Figure 2, below, a finger on top of a grid of conductive traces changes the capacitance of the nearest traces.

This change in trace capacitance is measured and the finger position is computed. No pressure is needed to activate the capacitive sensor. A gentle stroke or glide along the surface of a capacitive pad is all that's required.

Figure 2. A capacitive sensor panel is solid state. A finger on top of a grid of conductive traces changes the capacitance of the nearest traces. The trace capacitance change is measured and the finger position is computed.
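To make the position computation concrete, the following minimal sketch estimates a finger coordinate along one axis as the weighted centroid of per-trace capacitance changes. The trace count, pitch, and centroid approach are illustrative assumptions only; the actual ClearPad algorithm is proprietary and includes baseline tracking and filtering not shown here.

```c
#include <stdint.h>

/* Illustrative only: estimate finger position along one axis as the weighted
 * centroid of per-trace capacitance deltas (measurement minus the no-finger
 * baseline). Trace count and pitch are assumed values. */
#define NUM_TRACES      16
#define TRACE_PITCH_UM  4000   /* assumed 4 mm spacing between traces */

int32_t estimate_position_um(const int32_t delta[NUM_TRACES])
{
    int64_t weighted_sum = 0;
    int64_t total = 0;

    for (int i = 0; i < NUM_TRACES; i++) {
        int32_t d = delta[i] > 0 ? delta[i] : 0;     /* ignore negative noise */
        weighted_sum += (int64_t)d * i * TRACE_PITCH_UM;
        total += d;
    }

    if (total == 0)
        return -1;                                   /* no finger detected */

    return (int32_t)(weighted_sum / total);          /* position in micrometres */
}
```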

Specifically, the capacitive sensor's surface receives touch information from contact with the user and sends this information to the controller board. The controller board processes touch signals and conveys the information to the host. The host then uses finger position and contact information for a wide variety of user interface features such as character entry, data entry, and scrolling.
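As a purely hypothetical illustration of that flow, a controller might deliver per-frame finger reports to the host in a form like the following. The field names and layout are invented for illustration and do not describe the actual ClearPad protocol.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical finger report a controller might send to the host each frame. */
typedef struct {
    uint16_t x;            /* finger X position, sensor coordinates */
    uint16_t y;            /* finger Y position, sensor coordinates */
    uint8_t  z;            /* proximity/contact strength, 0 = no finger */
    bool     finger_down;  /* true while a finger is on the sensor */
    bool     tap;          /* set for one report when a tap is detected */
} finger_report_t;

/* The host-side UI layer would receive these reports over a serial link and
 * translate them into button, scrolling, or gesture events. */
```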

In this UI technology, capacitive sensing is combined with a transparent trace matrix (Figure 3, below). The same materials used in RTS are used in the capacitive sensor interface approach, specifically indium tin oxide (ITO) on polyethylene terephthalate (PET).

However, the capacitive sensor does not share RTS's optical and durability issues because it is a single laminate with no air gaps to degrade optics, and it is solid state with no moving parts, contributing to its high reliability and durability. Resistive screens, on the other hand, are physical switches that must flex and rub during use, decreasing their useful lifetime.

Figure 3. Capacitive sensing is combined with a transparent trace matrix. The same materials used in resistive screens are used in the capacitive sensor panel: indium tin oxide (ITO) on polyethylene terephthalate (PET).

Also, embedded designers are not limited to pliable surface materials because capacitance is sensed through most non-conductive materials. Capacitive sensing operates even when the sensor is placed beneath a durable surface like polycarbonate or acrylic. In cases like these, the capacitive sensor has the environmental durability of its rigid overlay, which permits it to function in environments where other technologies fail.

Mechanical simplicity is another major reason for the improved reliability and optics. In the capacitive sensor module, each layer is optically matched to the next and laminated together. This eliminates air gaps and the internal reflections they cause. Light absorption is also minimized due to the very thin conductive layers used.

Since it is a solid-state sensor with no moving parts, optical quality remains consistent during sensor usage. In contrast, the resistive touch panel requires an air gap and spacer dots, both of which cause internal reflections and scatter light. Furthermore, steps must be taken to minimize optical distortion (such as Newton's rings) when the top surface deforms during resistive touch panel operation.

Unlike RTS, capacitive sensors don't require critical spacing between sensor layers. Flexing or deforming a resistive sensor can affect the spacing between layers. However, capacitive sensors can sense through curved surfaces without loss of functionality. As a result of these major differences, designers can use capacitive sensors to add cost-effective, simple touch sensing in applications where RTS cannot be used effectively.

Applications Custom Fit
Embedded designers working on such applications as mobile devices, portable digital entertainment devices, remote controls, and digital cameras can specify size and shape customization of the capacitive sensor and supporting electronics module to meet their specific application requirements. The assembly includes the sensor, controller board with proprietary IC, and firmware.

Designs based on the capacitive sensor present superior optics and finger sensing capabilities to end users. Text and graphics displayed on the underlying screen are crisp and clear, thanks to the sensor's matched optics that reduce internal reflections.

Figure 4. The capacitive sensor module comprises four regions: the active and viewing areas, opaque PET inactive borders on three sides, and a tail region housing the capacitive sensing electronics.

Figure 4, above, shows the capacitive sensor module composed of four regions. A capacitive sensor panel designed for a four-inch diagonal TFT display, for example, has an active area of up to 60 x 80 mm with 0.68 mm sensor thickness (this includes a 0.075 mm adhesive for laminating to the lens or casing). The active area is the sensor's transparent region reporting the presence and location of a user's finger.

The viewing area, also in the sensor's transparent region, is outside the active area and does not detect the user's finger. The opaque PET inactive, non-sensing borders on three sides of the sensor's viewing area allow for low-resistance trace routing. These borders are designed to be shielded both electrically and optically from the user. Lastly, the L-shaped or tail region connects to the sensor and houses the capacitive sensing electronics.

Since the sensor's finger sensing region is transparent, it can ideally be used with contextual GUIs that change dynamically depending on the device's mode or application. Button arrays, sliders, soft menus, cursor control, and character recognition are possible. These interfaces deliver to embedded designers a remarkably high number of design possibilities. The UI no longer needs to be fixed in hardware, but can now be entirely constructed in software to match the specific requirements for a given task or application.

Ushering In New Design Considerations
Capacitive sensor technology ushers in a vast number of newer and more enriching design considerations than what embedded designers have become accustomed to with RTS. In this instance, creating an intuitive user interface involves more than optimizing the design of the capacitive sensor itself. In particular, designers must pay special attention to making accommodations in the device UI for input inaccuracies introduced by the user.

Since a capacitive sensor is optimized for finger usage, the UI designer must consider that typical users will not be able to reliably position and control their finger with great accuracy. Although designing for finger usage appears to be a limitation, such a constraint actually results in a more intuitive and simpler UI. Such a UI is more suitable for mass-market devices and users who may not be technology-savvy or who lack the time for substantial device training.

There are other ways to optimize the user's interaction with this sensor. They fall into two categories. One is static design, which includes control discoverability, layout, and tactile definition. The other is dynamic control processing, which includes button activation methods, hysteresis in gestures, and consistency in UI processing.

Control discoverability is an obvious, but commonly overlooked design issue. The ClearPad sensor, for example, like other touch screen interfaces, enables an embedded designer to implement the device's controls completely in software. In such user interfaces, the display shows objects that correspond to some device action when a user touches a particular object.

However, not all objects on the screen may be actionable. Accordingly, perhaps the most important interaction rule for control layout is to make the actionable objects or UI elements discoverable and distinguish them from other on-screen graphics that are not meant to be touched.

Although this interaction rule appears quite obvious, consider how ambiguous most existing graphical user interfaces are in displaying UI elements vs. non-actionable objects. It is no surprise that children and other GUI novices often experience difficulties with touch screen UIs simply based on this issue.

Control layout design
Another important interaction rule for control layout design is proper spacing and sizing of the UI elements. Although screen real estate is at a premium for most handheld devices, small control elements packed too closely will frustrate users.

It is important to determine the range of typical finger sizes of the device's target users. Buttons and UI elements smaller than the smallest anticipated finger contact area (typically 8-14 mm in diameter for an American adult index finger) will cause usability problems and should be avoided.

Furthermore, it is important to ensure that enough space is provided between UI elements. Ideally, control elements should have a pitch (the spacing between the centers of UI elements) of at least a finger width.
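As a rough sketch of how these guidelines might be checked during UI design, the following routine flags buttons smaller than the smallest anticipated contact area or pitched closer than a finger width. The specific thresholds are assumptions drawn from the figures above, not recommendations from any particular toolkit.

```c
#include <stdbool.h>
#include <stdio.h>

/* Illustrative layout check only: every button must be at least as large as
 * the smallest anticipated finger contact area, and adjacent buttons must be
 * at least one finger width apart, center to center. Thresholds are assumed. */
#define MIN_BUTTON_MM  8.0f    /* smallest anticipated contact diameter */
#define MIN_PITCH_MM  14.0f    /* roughly one adult finger width */

typedef struct {
    float cx, cy;      /* button center, mm */
    float w, h;        /* drawn width and height, mm */
} button_t;

bool layout_is_usable(const button_t *btns, int n)
{
    for (int i = 0; i < n; i++) {
        if (btns[i].w < MIN_BUTTON_MM || btns[i].h < MIN_BUTTON_MM) {
            printf("button %d is smaller than a finger contact area\n", i);
            return false;
        }
        for (int j = i + 1; j < n; j++) {
            float dx = btns[i].cx - btns[j].cx;
            float dy = btns[i].cy - btns[j].cy;
            if (dx * dx + dy * dy < MIN_PITCH_MM * MIN_PITCH_MM) {
                printf("buttons %d and %d are pitched too tightly\n", i, j);
                return false;
            }
        }
    }
    return true;
}
```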

One aspect of UI element design often overlooked is that the drawn element size does not need to match its activation size. For buttons, this implies that the graphic for a button does not need to correspond to its activation region.

For example, in designs where it is necessary for aesthetic/ID reasons to draw large buttons with tight spacing, it is recommended that each button be made to activate only within the central area of the drawn button.

This increases the effective spacing of the button array and improves usability. Conversely, for UIs with very small buttons (that are spaced far apart), it may be necessary to increase each button's activation region.
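A minimal hit-test sketch shows one way to decouple the drawn rectangle from the activation region: a positive inset shrinks the activation area for tightly packed buttons, while a negative inset grows it for small, widely spaced ones. The structure and routine are illustrative assumptions, not part of any particular ClearPad API.

```c
#include <stdbool.h>

/* Illustrative hit test: the activation region is derived from the drawn
 * rectangle by an inset. Coordinates and the inset share the sensor's units. */
typedef struct {
    int x, y, w, h;    /* drawn rectangle */
    int inset;         /* >0 shrink activation area, <0 expand it */
} ui_button_t;

bool button_hit(const ui_button_t *b, int fx, int fy)
{
    int left   = b->x + b->inset;
    int top    = b->y + b->inset;
    int right  = b->x + b->w - b->inset;
    int bottom = b->y + b->h - b->inset;

    return fx >= left && fx < right && fy >= top && fy < bottom;
}
```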

For capacitive button layout, embedded designers should avoid arrangements that make it difficult for the user to touch one button at a time. Equally sized buttons are not equally accessible. To improve button accessibility while conserving space, some buttons can be made smaller and others larger, depending on their location and function in a product's overall design.

Corner buttons are the easiest to access for several reasons. They're approachable from more than one direction, have the fewest neighbors, and they offer the most tactile location cues about their position because the phone's edge helps guide the user's hand.

Upper buttons should be slightly larger than the lower corner buttons. Those lower corner buttons can be smaller because they have two edges nearby, which offer more tactile clues about their location.

A mobile phone keypad best illustrates an efficient way to size capacitive buttons to improve accessibility (Figure 5, below). All buttons have similar accessibility because they are sized differently. The middle button is the largest because it is the least accessible.

Increasing the middle button's size improves its accessibility. Other button size considerations are the size and shape of the buttons near the edges of the mobile phone. Those buttons can be smaller because the phone edges provide a point of reference for the user's hand.

Figure 5. In this mobile phone keypad layout, with the capacitive approach, buttons can be resized to make each button equally accessible.

Proximity of other capacitive controls is another consideration for button layouts. Designers should consider how button placement can complement the use of other controls in the design, such as scrolling sensors. For example, capacitive buttons should be arranged so that the user does not inadvertently activate scrolling or another function when using the capacitive buttons.

Finally, one must consider that approximately 10 percent of the population is left-handed. Designing controls on either the left or right side of a product can bias one hand over the other. Therefore, the most frequently accessed controls, or those requiring the most dexterity, should be centrally positioned.

Alternatives to Braille bumps
In the area of tactile definition, capacitive technologies respond to a gentle touch; the user doesn't have to press down forcefully. But if tactile feedback is desired, tactile definition like Braille bumps, a bezel, or changes in surface texture can be used to help users locate capacitive buttons or sensor regions by feel.

Braille bumps have traditionally been used in computer keyboards and telephone designs to indicate the middle "Home" button for a group of buttons or keys. Braille bumps may be useful if the capacitive sensor is designed to simulate an array of buttons or to provide tactile landmarks for scrolling boundaries.

On the other hand, Braille bumps should not be considered if the user's finger will frequently slide over them during routine operations. For instance, if a capacitive sensor is primarily used for cursor control and navigation, a Braille bump should not be used. It would produce an unpleasant user experience and could even adversely affect pointing performance.

For applications requiring finger strokes or selection gestures, the surface over the capacitive sensor should not be perfectly smooth. Moist or sticky fingers will stick and skip across an extremely smooth surface, but a slightly textured surface helps the user's fingers to slide. Accordingly, the capacitive sensor's surface can be textured with a hard-coated finish if the sensor is the top layer.

For under-plastic designs, the embedded designer should specify a different texture for the plastic surface over the sensing areas. The remainder of the plastic casing could have a smoother surface. When the user's finger moves outside the sensing area, the user would notice the change in texture.

An area of UI design unique to the capacitive sensor approach we have developed is the notion of control processing. Beyond the various methods for processing button input, a light touch enables UI elements that incorporate finger movement. Examples of such UI elements (gestures) include scrolling, dragging, inking (for drawing or character recognition), and panning/zooming. For such elements, it is important to consider the dynamic aspects of the UI design as well as the previously described static issues.

For button designs, there are two primary mechanisms for activating buttons: taps and presses. A tap is a short contact (typically < 250 msec), constrained to not contain significant finger motion. This mechanism prevents accidental activations since the user must contact and lift off from a button within a narrow timing window. However, it can be challenging for novice or untrained users to execute effectively, even on large buttons, because the timing of this gesture for a given device is not particularly discoverable.

A press is a contact that appears in the button region and then lingers in the region (with minimal finger motion) for greater than a threshold amount of time (typically > 250 msec). This mechanism is easy for most users to perform and is recommended over taps.
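The following sketch shows how a host might classify taps and presses from periodic finger reports, using the 250-msec figure mentioned above. The motion tolerance, report format, and state handling are assumptions for illustration, not the ClearPad firmware's actual logic.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of tap vs. press classification for one button. The 250 ms figure
 * comes from the text; MAX_MOTION is an assumed bound on finger drift. */
#define TAP_MAX_MS    250
#define PRESS_MIN_MS  250
#define MAX_MOTION    8        /* sensor units of allowed drift */

typedef enum { EVT_NONE, EVT_TAP, EVT_PRESS } button_event_t;

typedef struct {
    bool     down;
    uint32_t down_time_ms;
    int      down_x, down_y;
    bool     press_sent;
} button_state_t;

static int iabs(int v) { return v < 0 ? -v : v; }

/* Call once per finger report; returns the event generated this frame. */
button_event_t button_update(button_state_t *s, bool finger_in_region,
                             int x, int y, uint32_t now_ms)
{
    if (finger_in_region && !s->down) {            /* finger arrives */
        s->down = true;
        s->down_time_ms = now_ms;
        s->down_x = x;
        s->down_y = y;
        s->press_sent = false;
        return EVT_NONE;
    }

    if (!finger_in_region && s->down) {            /* finger leaves */
        bool moved = iabs(x - s->down_x) > MAX_MOTION ||
                     iabs(y - s->down_y) > MAX_MOTION;
        uint32_t held = now_ms - s->down_time_ms;
        s->down = false;
        if (!moved && !s->press_sent && held < TAP_MAX_MS)
            return EVT_TAP;                        /* quick, still contact */
        return EVT_NONE;
    }

    if (finger_in_region && s->down && !s->press_sent) {
        bool moved = iabs(x - s->down_x) > MAX_MOTION ||
                     iabs(y - s->down_y) > MAX_MOTION;
        if (!moved && now_ms - s->down_time_ms >= PRESS_MIN_MS) {
            s->press_sent = true;                  /* lingering contact */
            return EVT_PRESS;
        }
    }
    return EVT_NONE;
}
```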

It is also important to pay attention to what happens after a button is "activated." A button should be disabled for a brief period following activation to prevent "double activation." For scrolling and other movement-based UI elements, hysteresis of the control processing is highly recommended. Hysteresis means that once a gesture is initiated by touching within a given UI element, the active boundaries of the UI element should be relaxed beyond the original drawn active region to accommodate inaccuracies in a user's finger movement.

This relaxation of the active boundaries should continue until the gesture is completed or until the finger moves beyond the new active boundaries. For example, for a scroll bar, the user first touches within the scrolling region to initiate the scroll gesture. But once the finger begins moving, the UI should continue to scroll even if the finger moves somewhat outside of the drawn scroll region, until the gesture is terminated. The gesture is usually considered terminated when the finger lifts off the sensor or moves significantly beyond the drawn scrolling region.
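A simple way to implement this hysteresis is to test the finger against the drawn region only when the gesture starts, and against a relaxed boundary once scrolling is under way, as in this illustrative sketch. The margin value is an assumption, not a figure from the article.

```c
#include <stdbool.h>

/* Sketch of hysteresis for a scroll region: the hit-test region grows once a
 * scroll gesture has started, so small finger drift outside the drawn region
 * does not cancel the gesture. */
#define HYSTERESIS_MARGIN 12   /* assumed sensor units added around the region */

typedef struct {
    int x, y, w, h;     /* drawn scroll region */
    bool scrolling;     /* true while a scroll gesture is in progress */
} scroll_region_t;

static bool in_rect(int fx, int fy, int x, int y, int w, int h)
{
    return fx >= x && fx < x + w && fy >= y && fy < y + h;
}

/* Call for every finger report; returns true while scrolling is active. */
bool scroll_update(scroll_region_t *r, bool finger_down, int fx, int fy)
{
    if (!finger_down) {
        r->scrolling = false;              /* lift-off terminates the gesture */
        return false;
    }

    if (!r->scrolling) {
        /* The gesture must start inside the drawn region. */
        r->scrolling = in_rect(fx, fy, r->x, r->y, r->w, r->h);
    } else {
        /* Once started, allow the finger to wander into a relaxed boundary. */
        int m = HYSTERESIS_MARGIN;
        r->scrolling = in_rect(fx, fy, r->x - m, r->y - m,
                               r->w + 2 * m, r->h + 2 * m);
    }
    return r->scrolling;
}
```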

Incorporating hysteresis in the UI control processing, when implemented properly, significantly improves device usability since it minimizes the effects of user input inaccuracies. Additionally, hysteresis can enable tighter spacing of drawn UI elements in designs with space constraints.

UI control processing
A final aspect of UI control processing involves the consistency of the processing. Since the underlying processing is not obvious to the user and cannot easily be communicated visually, it is important that UI elements within a particular control layout act consistently. For example, a control layout that mixes buttons that activate on taps with buttons that activate on presses will certainly confuse users. Furthermore, UI elements should maintain a consistent implementation for activation regions and hysteresis.

While UI designers are typically familiar with maintaining consistency within a given control layout, it is equally important to maintain consistency between different control layouts on a device. Typically, UI designers will keep the visual aspects of the control layout similar for different applications to maintain a particular ID language.

However, designers must remain vigilant about ensuring that this consistency is also maintained in the control processing. That is, button processing, scroll bar functionality, and other UI elements should behave similarly as the control layout changes from one application to another on a device.

Since applications for a device may be written by different developers (or even different companies), consistency of UI processing can be quite a daunting task and, unlike the visual appearance, may be more difficult to enforce. Without consistency in the control processing, users will quickly become frustrated with the device operation, regardless of how well designed any one particular application or UI control layout may be.

Other Design Considerations
Embedded designers using RTS in their designs have typically relied on a variety of supporting ASICs, ASSPs, or discrete analog/digital components from different IC vendors. In most cases, designers are supplied with separately sourced resistive screens, supporting ICs, and host software and charged with cobbling a design together to make it operate efficiently.

As a result, they've experienced varying levels of performance. If the RTS is not properly designed and implemented, its performance can range from extremely poor to moderate because it is subject to an array of environmental issues, as well as noise in and filtering of the finger data.

Conversely, this capacitive sensor interface design is precisely tuned to its supporting proprietary mixed-signal VLSI IC. This system approach delivers to embedded designers circuits that are specifically tailored to their respective capacitive sensor functions. Computing of this high caliber converts analog capacitive measurements into robust behavior.

Moreover, the design process is simplified from a programmer's standpoint since he or she isn't burdened by a non-ideal input system. With the capacitive sensor, programmers receive clean, filtered data, whereas with RTS, they must worry about filtering and averaging the data to make sure there isn't considerable noise in the finger position reporting.

A proprietary 16-bit RISC microcontroller core is at the heart of the electronics powering this capacitive sensor panel. Its job is to manage and collect analog measurements, compensate for environmental effects such as electrical noise and temperature drift, compute the finger's position and proximity, detect motion and tapping gestures, and communicate with the host system.

In the important area of electrostatic discharge (ESD) protection, embedded designers receive a significant bonus because the sensor is designed to be mounted underneath a lens or device casing. As a result, it is completely sealed off from ESD events, and its ESD rating extends beyond ±15 kilovolts (kV) when it is properly mounted. Resistive screens, on the other hand, are rated at up to ±8 kV, which makes them more prone to environmental problems.

As for the capacitive sensor's low-power operation, embedded designers are provided with several modes. The sensor can be powered down, with finger data reported as early as 100 milliseconds from power-on. The modes include active, doze, sleep, and deep sleep.

In active mode, the sensor is fully operational at 1.5 milliamps (mA) and checking for the presence of a finger. It automatically reverts to doze, at about 60 microamps (µA), when it doesn't detect a finger for a specified amount of time; it goes into sleep, at about 40 µA, when the finger-presence checking frequency is further reduced; and finally, it enters deep sleep, at 10 µA, when finger-presence checking is eliminated, thus minimizing power consumption.
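The host-visible behavior can be pictured as a simple state machine that steps from active through doze and sleep to deep sleep as the idle time grows. The current figures in the comments come from the text; the timeout thresholds below are illustrative assumptions only.

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch of power-mode sequencing as idle time grows. */
typedef enum {
    MODE_ACTIVE,      /* ~1.5 mA, full-rate finger checking   */
    MODE_DOZE,        /* ~60 uA, reduced checking rate        */
    MODE_SLEEP,       /* ~40 uA, further reduced checking     */
    MODE_DEEP_SLEEP   /* ~10 uA, finger checking disabled     */
} power_mode_t;

#define DOZE_AFTER_MS        2000U   /* assumed idle timeouts */
#define SLEEP_AFTER_MS      30000U
#define DEEP_SLEEP_AFTER_MS 300000U

power_mode_t power_mode_update(bool finger_present, uint32_t idle_ms)
{
    if (finger_present)
        return MODE_ACTIVE;              /* any touch restores full operation */
    if (idle_ms >= DEEP_SLEEP_AFTER_MS)
        return MODE_DEEP_SLEEP;          /* host must wake the sensor again */
    if (idle_ms >= SLEEP_AFTER_MS)
        return MODE_SLEEP;
    if (idle_ms >= DOZE_AFTER_MS)
        return MODE_DOZE;
    return MODE_ACTIVE;
}
```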

Mariel VanTatenhove is senior product line director, and Andrew Hsu is manager of strategic and technical marketing at Synaptics, Inc.
