Dealing with touch sensitive areas of graphical objects

Application-level handling of a touchscreen is fundamentally different to an interface which uses a mouse, trackball or other off-screen device. This article will explore different algorithms for establishing the exact point of intended touch, and whether that touch should be applied to an object at or near that location.

At a lower level, the conversion of a touch event to an x,y location on the display is achieved by a calibration process (1) and a driver (2), which together convert the analog signals from the touch sensitive device into an x,y position on the output display, measured in pixels.

For the purposes of this piece, we will assume that we have an x,y position, but the software then has to decide how to trigger an event based on that reading. There are filters and modifications that can be applied to that x,y position to allow for the fact that it originated from a touchscreen and not a mouse.

There is a fundamental difference between input from a touchscreen and the input from a mouse, trackball or other off-screen device. The mouse produces a location which is then displayed, usually with a small arrow, or cursor, pointing at the current location.

Because the location is displayed, and the user can see that location, the position is, from the user's point of view, effectively free of error. If there is an error in the reading of the mouse movement, that error is reflected in the position of the cursor, so the user gets to see the new position.

If the error requires correction, the user simply moves the mouse in the appropriate direction to correct the overshoot. The human is in the feedback loop, and that provides correction for errors from the analog reading of the mouse, or errors generated by the user over or under shooting with their hand movement.

The touchscreen does not have this closed loop control. When users touch the screen with a finger, they can see the thing that they were trying to touch, but there is no indication of where the software believes the touch occurred.

There will always be an element of error due to temperature effects on the screen, inaccurate calibration, or non-linear electronics. This error is the distance between where the user physically touched and the x,y location that the touch driver returned to the application. The error can be seen if the application makes a cursor visible, as shown in Figure 1 below.

Figure 1: a) accurately shows the cursor position where the finger actually touched, while b) shows error in both the x and y direction so the cursor is displayed away from the center of the finger's touch on the screen

If there is a small vertical error, then as the user moves their finger around, the cursor will be displayed a small distance above the finger. If you get to try this experiment, it can also be interesting to see how sliding the finger quickly leaves the cursor trailing behind.

This is due to filtering, usually performed in the driver, which adds a time delay to the touch position. Filtering makes the cursor move more smoothly, since it eliminates some electrical noise, but can also add a lagging effect.
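As an illustration of why filtering introduces lag, here is a minimal sketch of a simple exponential moving average applied to the reported coordinates. It is not taken from any particular driver; the structure, function name and weighting value are assumptions for illustration only.

/* Minimal sketch of driver-style smoothing: an exponential moving average
 * on integer pixel coordinates. Each new reading only moves the output
 * part of the way towards the raw sample, which reduces noise but is
 * also the source of the lag described above. */

typedef struct {
    int x;    /* filtered x position in pixels */
    int y;    /* filtered y position in pixels */
} TouchFilter;

/* weight_percent of the new sample is used; the rest comes from history.
 * A larger weight means less smoothing but also less lag. */
static void touch_filter_update(TouchFilter *f, int raw_x, int raw_y,
                                int weight_percent)
{
    f->x = (raw_x * weight_percent + f->x * (100 - weight_percent)) / 100;
    f->y = (raw_y * weight_percent + f->y * (100 - weight_percent)) / 100;
}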

Another artifact that may be visible in this experiment is that the cursor may shake. This noise can be due to a number of factors. One is that the human finger is just not that steady. It may be partly noise in the analog electronics which can be reduced with filtering, either electronically or in software.

Another possibility, with resistive screens, is that variations in the pressure applied by the user are varying the cursor position slightly. If this is the case then most of the jitter can be removed by ignoring touch events where the pressure is below a certain threshold.

This means that very gentle touches on the screen will not register at all, but when the touches are detected they will be stable and accurate. Finding a good balance here is important because you do not want to force your user to have to exert a lot of pressure for the touch to register.
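A sketch of this kind of pressure gate is shown below, assuming the driver reports a pressure value alongside the position; the structure, names and threshold value are illustrative assumptions and would need tuning against the real panel.

#include <stdbool.h>

/* Hypothetical pressure units as reported by the touch driver. */
#define PRESSURE_THRESHOLD  40

typedef struct {
    int x;
    int y;
    int pressure;
} RawTouchSample;

/* Drop light, jittery contact; only pass firm touches to the application.
 * Setting the threshold too high forces the user to press hard. */
static bool touch_pressure_accept(const RawTouchSample *s)
{
    return s->pressure >= PRESSURE_THRESHOLD;
}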

Test Screens
I am not suggesting that your touchscreen applications should have a visible cursor, but it is very useful to have the ability to turn it on during test and debug of the touch event handling algorithms.

If you are implementing this cursor yourself then you may find that a large cross works better than the small arrow used in Windows, since the arrow is likely to be completely covered by the user's finger, making it difficult to assess accuracy.

It is useful to print the x,y position on the display as well. When you see the cursor on the screen you will know its precise position in pixels. Draw a few target objects on the screen, for example a small circle with its center at 100,200.

If you press on that circle and the position detected by software is 105, 198, then you have an error of 5,-2. This error may vary across different parts of the screen. Measuring this error using a finger is not very exact.

A stylus will be a bit more precise and in some cases it is worth constructing a mechanical test jig which will guarantee reproducible touching for a few predefined points.
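A sketch of the error reporting used on such a test screen might look like the following; the function name is hypothetical, and on an embedded target the printf output would more likely go to a serial port or log.

#include <stdio.h>

/* Log the per-axis error between the center of a drawn target and the
 * position reported by the touch driver. */
static void report_touch_error(int target_x, int target_y,
                               int touched_x, int touched_y)
{
    int err_x = touched_x - target_x;
    int err_y = touched_y - target_y;
    printf("target (%d,%d) touch (%d,%d) error (%d,%d)\n",
           target_x, target_y, touched_x, touched_y, err_x, err_y);
}

/* Using the example above: report_touch_error(100, 200, 105, 198)
 * reports an error of (5,-2). */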

Touchable Buttons
Even if all error could be removed by calibrating out all manufacturing differences, there are certain human factors in play that can not be solved in the electronics.

The first issue is parallax, which is the error introduced by the viewing angle. The second issue is that the size of the human finger means that there is an area of touch where the flesh of the finger flattens itself against the screen, and the detected touch point will be in the middle of that area.

This means that the user can not see the point that they have touched and so the exact place they think they have touched may be different to what the software detected.

Given that some error has to be tolerated, what can we do to minimize its impact on the user experience? Most touch events involve the user pressing a graphical button. With a mouse the visible area of the button is the clickable area. The user can see if the cursor point is inside or outside of the graphical button and so they know if a mouse-button-click will activate it.

A touchscreen user does not have the same certainty. So there is benefit in making the touch sensitive area bigger than the visible dimensions of the button. If the user presses quite close to a button, it is a reasonable assumption that their intention was to press that button.

If some of the buttons are close together there is the possibility that their touch sensitive areas may overlap, as seen in Figure 2 below. If the user then touches in between button A and button B, which one should the software respond to?

The best answer is neither. It is usually better for the application to do nothing rather than risk doing the wrong thing. A non-sensitive area between two adjacent buttons will force the user to retouch in a less ambiguous location.

Figure 2: Dashed lines show boundary of touch sensitive areas. A and B have overlapping areas and so touching the shaded area will not activate either one.
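The sketch below shows one way to implement this policy: each button's sensitive area is its visible rectangle grown by a margin, and if a touch lands inside more than one sensitive area the hit test reports nothing at all. The Button structure and the margin field are assumptions for illustration.

#include <stddef.h>
#include <stdbool.h>

typedef struct {
    int left, top, right, bottom;   /* visible rectangle in pixels */
} Rect;

typedef struct {
    Rect visible;
    int  margin;   /* how far the sensitive area extends beyond the visible edge */
} Button;

static bool in_sensitive_area(const Button *b, int x, int y)
{
    return x >= b->visible.left   - b->margin &&
           x <= b->visible.right  + b->margin &&
           y >= b->visible.top    - b->margin &&
           y <= b->visible.bottom + b->margin;
}

/* Return the single button whose sensitive area contains the touch, or NULL
 * if none does, or if more than one does (the shaded region of Figure 2). */
static const Button *find_touched_button(const Button *buttons, size_t count,
                                         int x, int y)
{
    const Button *hit = NULL;
    for (size_t i = 0; i < count; i++) {
        if (in_sensitive_area(&buttons[i], x, y)) {
            if (hit != NULL)
                return NULL;   /* ambiguous touch: do nothing */
            hit = &buttons[i];
        }
    }
    return hit;
}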

There are some special cases where a smarter decision can be made when the touch event happens in between two objects. If an on-screen keyboard is being displayed, then each button represents a letter of the alphabet. If the user touches in between two letters then a predictive text application could guess at which letter is more likely based on the preceding letters.

Release me
A button can be programmed to respond when touched or to respond on release. I prefer to respond to the release. It provides the opportunity to give visual feedback when the object is touched.

On a mouse based interface it might be subtle, like a dotted line appearing around the object. For a touchscreen you are better off being less subtle. Some, or possibly all, of the button being touched is obscured from view by the user's finger or hand.

A complete color change for the button is required, so that if only a small portion of the button is visible to the user then there is a noticeable change. Some systems also display a border around the touched object in the hope of making the indication visible beyond the user's finger.

The on-screen keyboard is a special case. Some applications display the letter touched with a larger rendering in the area above the finger. This allows the user to see exactly which letter will be activated once the key is released.

If the touch has not caused the button to perform its main action, then the user still has the opportunity to slide their finger out of the button in order to cancel the action.

Button state machine
Allowing the user to slide their finger out of a button gives them the option to cancel that button action. We also want to allow them to slide back into the selected button. The main motivation for this can be seen in the following scenario.

Consider a user touching at the edge of the touch sensitive area of a button. Finger shake or electrical noise may lead to the position of the finger moving out of the button's sensitive area and back in again.

If we cancel the selection when we slide out, then the button will be cancelled before the user has the chance to perform a release by lifting their finger. A preferable interaction is to allow the button to be selected again when they slide into that button.

I have seen implementations which allow the finger to slide into a different button to select it but I am not in favor of that approach. I think it makes the display more likely to perform button actions due to a random touch, such as a person rubbing against the screen as they walk by. Forcing the touch and release to both happen within the sensitive area of the button makes random touches less dangerous.

This leads us to a simple state machine for a button as shown in Figure 3 below. This is also close to the operation of a typical button in Microsoft Windows.

Figure 3: The state machine of a touchable button

In the state machine highlighting refers to the visual change in the button color or outline to indicate that the button is about to perform its action as soon as the user releases their finger from the touchscreen.
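A minimal sketch of this state machine in C is shown below, assuming the application delivers press, move and release events together with a flag saying whether the touch point is inside the button's sensitive area. The type names and the highlight and action hooks are assumptions that stand in for the application's own drawing and event code.

#include <stdbool.h>

typedef enum {
    BUTTON_IDLE,          /* not being touched */
    BUTTON_HIGHLIGHTED,   /* touched; will act when released inside */
    BUTTON_SLID_OUT       /* touch started here but the finger has slid out */
} ButtonState;

typedef struct {
    ButtonState state;
    /* drawing data would go here */
} TouchButton;

/* Hooks assumed to be provided by the application. */
extern void button_set_highlight(TouchButton *b, bool on);
extern void button_do_action(TouchButton *b);

/* The press must start inside the button; sliding in from elsewhere does nothing. */
void button_handle_press(TouchButton *b, bool inside)
{
    if (b->state == BUTTON_IDLE && inside) {
        b->state = BUTTON_HIGHLIGHTED;
        button_set_highlight(b, true);
    }
}

/* Sliding out removes the highlight but leaves the button armed,
 * so sliding back in highlights it again. */
void button_handle_move(TouchButton *b, bool inside)
{
    if (b->state == BUTTON_HIGHLIGHTED && !inside) {
        b->state = BUTTON_SLID_OUT;
        button_set_highlight(b, false);
    } else if (b->state == BUTTON_SLID_OUT && inside) {
        b->state = BUTTON_HIGHLIGHTED;
        button_set_highlight(b, true);
    }
}

/* The action fires only if the release happens inside the sensitive area. */
void button_handle_release(TouchButton *b, bool inside)
{
    if (b->state == BUTTON_HIGHLIGHTED && inside)
        button_do_action(b);
    button_set_highlight(b, false);
    b->state = BUTTON_IDLE;
}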

Keeping Fitt
One of the guidelines derived from Fitts's Law (3) is that buttons placed at the edges or corners of the display are far easier targets than buttons placed away from the edges.

When using a mouse it is possible to overshoot when moving towards a button, but if the button is at the edge of the display and the mouse is not capable of moving past that edge, then overshoot is impossible in that direction. Corners are even better because overshoot is impossible in two directions, hence the choice of the lower left corner for the 'Start' button in Windows.

With a touchscreen there is no on-screen cursor that has to be moved from its current location, which could be anywhere on the screen. Instead the user's finger is moving from somewhere above the screen down onto the desired button.

The edges still make attractive targets. If the button is at the bottom of the screen, then the user may touch the raised edge of the screen's housing and this ensures that the finger does not accidentally slide as the button is pressed.

When pressing buttons at the bottom of the screen the user's hand will obscure a minimum of the display. On the other hand, buttons at the top can cause a lot of the display to be covered by the user's hand and arm, so the bottom edge is a more attractive home for buttons.

Similarly, buttons on the right hand edge will not cause the user's line of sight to be blocked as much as buttons on the left. This assumes a right-handed user. While I would not advocate a design which heavily discriminates against left-handed users, if your options are buttons on the left or buttons on the right, then it is sensible to go with the design choice that favors the majority.

It is sometimes visually more pleasing to keep the buttons a few pixels away from the edge of the display. Much of the benefit of placing the button near the edge is lost if it is possible for the user to place their finger up against the raised edge and effectively touch in between the button and the last pixel in the display.

You may think that the user's finger would be too large to cause a touch event in the last couple of pixels, but remember that there could be calibration errors which mean that the effective touch position is a few pixels away from the center of the finger. Alternatively a stylus may be used which would allow a touch event in that hard to reach edge of the display.

One nice compromise is available if the touch sensitive area of the button is greater than its visible area. It is then possible to display the button a couple of pixels from the edge of the display, but close enough that the touch sensitive area extends to the edge as seen in Figure 4 below.

For the mechanical design it is important that the raised edge is not so far from the display that there is a 'dead', non-touch sensitive, area between the display and the raised edge. For resistive touchscreens, this may be tricky to get right.

If the frame is actually touching the touchscreen then it is effectively generating touch events all around the edge of the touchscreen. This might not be noticeable at first, but pressure on the frame may vary with use, generating phantom touch events.

Over the edge
One anomaly of converting an analog touch signal into a location on screen is that it is possible to touch a position which is then converted to a pixel location which is not on the screen.

For example assume that a horizontal error is adding three pixels to every x location. If I have a screen of 1024×1024 pixels and I physically touch on location 1023, 100, then the software will detect that as location 1026, 100.

When I search my list of on-screen objects to find which one is activated, my software must tolerate a location such as this which is off the display. For similar reasons, a negative x or y value must be tolerated.

Consider an example where the touch sensitive area of a button stops at the edge of the display, at pixel 1023 (we will index the 1024 pixels from 0 to 1023). The touch event at 1026, 100 is beyond 1023 and therefore it is a miss.

It is most likely that the user was trying to touch the button that is placed up against the edge of the display. Truncating the out-of-range position to the nearest edge would resolve this. So position 1026, 100 should be truncated to 1023, 100 before attempting to locate the touched object.
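A sketch of that truncation is shown below, using the 1024 by 1024 display from the example; the function names are illustrative only.

#define SCREEN_WIDTH   1024
#define SCREEN_HEIGHT  1024

static int clamp(int value, int low, int high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

/* Coordinates must be signed so that negative readings can be clamped
 * rather than wrapping around. Position 1026,100 becomes 1023,100. */
static void clamp_touch_to_screen(int *x, int *y)
{
    *x = clamp(*x, 0, SCREEN_WIDTH - 1);
    *y = clamp(*y, 0, SCREEN_HEIGHT - 1);
}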

Be aware that some types of resistive touch screen are subject to non-linear characteristics and the non-linearity is most pronounced near the edges of the screen.

So you may place all the buttons near the edge to make them easier to touch, only to find that your biggest touch location errors are near the edges as well. It is therefore important to evaluate whether your touch panel suffers from this type of non-linearity.

Debouncing
A user's single touch on the screen may be interpreted as two touches. On a pressure sensitive resistive display, the pressure of the user's finger might be uneven resulting in more than one touch. Or a user might start to remove their finger and then hesitate leading to an action similar to a double click on a mouse button.

In some cases pressing a button navigates to another screen and the button that was pressed to cause the navigation has now disappeared. The user's finger is then left hovering over a non-sensitive area, or possibly the finger is touching a new button, which is unrelated to the button just pressed. It is very easy to accidentally activate this new button with the second 'bounce' on the same location.

Some of this can be avoided with an algorithm similar to debouncing mechanical buttons. If a press action happens a very short time period after a release, then treat it as if only a single press happened.

While this is a useful approach, it can be tricky to decide some of the thresholds. An alternative is to disable all buttons for 0.25 seconds after they first appear. This is always reasonable, since you do not want a user to press a button that they have not had time to read.
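Both measures are sketched together below, assuming a millisecond tick is available from the system; the get_tick_ms() hook and both threshold values are assumptions that would need tuning on the real device.

#include <stdbool.h>
#include <stdint.h>

#define REPRESS_GAP_MS     150   /* a press this soon after a release is a bounce */
#define SCREEN_LOCKOUT_MS  250   /* ignore touches just after a screen appears */

extern uint32_t get_tick_ms(void);   /* assumed system tick source */

static uint32_t last_release_ms;
static uint32_t screen_shown_ms;

void debounce_note_release(void)      { last_release_ms = get_tick_ms(); }
void debounce_note_screen_shown(void) { screen_shown_ms = get_tick_ms(); }

/* Returns true if a new press event should be acted upon. */
bool debounce_accept_press(void)
{
    uint32_t now = get_tick_ms();
    if (now - screen_shown_ms < SCREEN_LOCKOUT_MS)
        return false;               /* buttons not yet live on this screen */
    if (now - last_release_ms < REPRESS_GAP_MS)
        return false;               /* treat as a bounce of the previous touch */
    return true;
}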

Orientation
When arranging the layout of touch sensitive buttons take account of the fact that a user's horizontal control of their finger is far better than their vertical control.

Users tend to drag down slightly, perhaps due to the way knuckles bend, as they lift their finger from a touchscreen. This raises the danger that they might touch a button just below the one they intended.

So a button in a horizontal row of buttons, as seen in Figure 5 below, will be an easier target than a button of the same size in a vertical row. If you have to use vertically arranged buttons then try to give them more separation than you use for horizontally arranged rows of buttons.

This difficulty with vertically arranged items is the reason pull down menus work fine in a mouse driven interface but do not transfer to a touchscreen. I have observed another behavior which can lead to error with vertical lists of buttons. Users are inclined to touch the lower half of the intended button.

Possibly this is so that the text inside the button can still be read. Whatever the reason, it increases the chances that the detected touch position will be below the visible area of the button. The touch might then be in a non-sensitive area, leading to no action, or it might be in the sensitive area of a button below the one intended.

One way to make an allowance for this is to make the sensitive area of a button extend a larger distance beyond the lower edge of the button and a shorter distance above the top.
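A sketch of that asymmetric allowance is shown below; the margin values are illustrative only, and y is assumed to increase down the screen.

#include <stdbool.h>

#define MARGIN_ABOVE   4    /* sensitive pixels above the visible button */
#define MARGIN_BELOW  12    /* a more generous allowance below it */

/* Vertical part of the hit test: touches that land a little low still count. */
static bool in_vertical_sensitive_band(int touch_y, int button_top, int button_bottom)
{
    return touch_y >= button_top - MARGIN_ABOVE &&
           touch_y <= button_bottom + MARGIN_BELOW;
}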

Windows 7 has implemented an interesting touch enhancement: some vertical menus open with more spacing between items whenever Windows detects that the menu was opened using a touch event and not a mouse.

This is a refreshing acknowledgement in the world of desktop operating systems that you can not simply replace a mouse with a touchscreen and expect a good user experience.

Multi-touch – or not
While the iPhone has made detecting multiple simultaneous touches a trendy topic, in many applications this has limited use. If your GUI consists of buttons and simple data entry then you may not need the kind of gesture detection appropriate on a device which supports image manipulation.

If your screen can only detect a single touch, it is important that you consider the possibility that a user might accidentally touch more than one place on the screen at the same time.

If the screen is mounted close to the horizontal then the user may rest the heel of their hand on the display as they touch an item with their finger. A vertical orientation makes users inclined to rest the hand that they are not using on top of the display, and sometimes that hand will touch the touchscreen while the active hand is pressing buttons lower down.

There is no perfect solution to these scenarios, and most common touchscreens do not give you a way to detect that this has happened. If you can detect that there are multiple touches far apart on the screen, then the best policy is to ignore all touches until you get a single touch again.

If you can not tell, as with a resistive touchscreen, then there are a couple of scenarios that you should consider and bring into your test suite. On a resistive touchscreen, touching at two points which are far apart usually results in the driver returning a point which is in between the two touched points.

Consider the scenario where a finger is on a button which the user has pressed, but not yet released. If the user then accidentally touches another part of the screen, the software will see the touched point suddenly move to a place halfway between the intended touch and the accidental touch.

Assuming that the accidental touch was brief, the position will revert to the correct position after the accidental touch ends. This is a second motivator for allowing the touch location to move out of the button and back in, since the touch location's sudden jump away from the button and back again will still allow the user to activate that button.

Other Widgets
So far we have discussed buttons. More complex touchable objects such as sliders also provide interesting challenges. As with buttons each object will have to be larger than the equivalent widget on a mouse driven interface. On a horizontal slider you want to allow the user's finger to stray a little high or a bit low without releasing control of the slider.

The popularity of capacitive touchscreens in recent consumer items means that many users have experience of using very gentle sweeping motions on a touchscreen to move a slider or to turn a page.

Capacitive screens do not require the user to exert any downward pressure, and so these gentle touches work well. If your product has a resistive touchscreen, you may find that users are using similar actions on your screen and seeing no response. This is more noticeable in objects involving dragging, such as sliders, than it is for simple buttons.

Third Party Toolkits
To my knowledge none of the graphical toolkits available, even those targeted at the embedded market, implement any of the mechanisms described above, or any equivalent. They seem to be happy to convert touch events into the same format as their mouse events and allow all the interaction to be the same for both.

Because of the fundamental differences pointed out at the beginning of this article, there is a missed opportunity here to raise the standard of touch interfaces.

If your toolkit is available in source form, you may have the option of implementing some or all of the techniques described here. None of them are difficult to code as long as you have access to the event management code and the code that locates the activated object.

In C++ toolkits such as PEG or Qt you have the option to overload the virtual function which is called in response to a mouse event. This allows you to respond to a different sensitive area than the default, so the sensitive area can be resized.

However it takes further modification of the library's source code to handle the situation where the touch occurred in the area where two or more sensitive objects overlap. Amulet Technologies also sells programmable graphics modules which have hooks to allow the touch sensitive area to be modified.

Touching Up
The feeling of responsiveness of a user interface is often down to the details of how individual events are managed. If the user occasionally thinks 'I pressed that button, but nothing happened; oh well, I will just press it again', then the interface probably just lost an event that was not properly processed.

It might be due to a slightly inaccurate touchscreen. It might be due to the fact that the user touched just outside the object, but the user still feels he touched the button. These missed events lead to an overall feeling of unease with the interface which in turn creates a perception of low quality.

Most users would be hard pressed to identify just why they do not feel comfortable with the interface. On the other hand, careful use of the techniques described here can help the user feel that they are in control, and that all of their touch commands will lead to the desired response.

Niall Murphy has been designing user interfaces for over 14 years. He is the author of Front Panel: Designing Software for Embedded User Interfaces. Murphy teaches and consults on building better user interfaces. He welcomes feedback and can be reached at nmurphy@panelsoft.com . His web site is www.panelsoft.com.

References
1. How To Calibrate Touch Screens, Carlos E. Vidales, Embedded Systems Programming, June 2002.

2. Writing drivers for common touch-screen interface hardware, Kenneth G. Maxwell, Embedded Systems Programming, July 2005.

3. A Quiz Designed to Give You Fitts, AskTog, February 1999.
