Gesture-based wearable devices leave the user’s hands empty. This allows users to abandon interaction with the device instantly when the physical environment requires it, e.g., if the user stumbles. In theory, this gives this class of devices the potential to be used during a mission-critical “primary” task.
When driving a car, playing sports, walking on a busy street, or rock climbing, users cannot take focus away from the physical world for more than an instant. If any interaction with a computer is to take place, it must therefore happen in very short bursts. Such short interactions (< 4 s), i.e., microinteractions, typically involve users checking transit schedules, traffic reports, weather advisories, or their email on a mobile or wearable device.
In this paper, we explore this class of devices. In particular, we investigate how to engage in interaction quickly—because such scenarios typically allow users to take their eyes off the primary task only briefly (microinteractions).
We present PinchWatch, a one-handed device with a wrist-worn display. Users invoke functions by pinching (i.e., thumb presses against finger or palm); they enter parameters by performing sliding or dialing motions with the thumb on the palm; or they move the hand as a whole.
All user input is tracked by a chest-worn camera observing the user’s hand and wrist. Complementing this camera with depth sensing allows for tracking additional gestures, for operation in sunlight, and for recognizing pinch gestures when the hand is not directly facing the camera.
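As a minimal sketch of the kind of recognition this implies, the snippet below classifies a pinch from tracked 3D fingertip positions, as a depth camera might report them. The distance threshold, the point coordinates, and the gesture-to-function mapping are all illustrative assumptions, not values from the PinchWatch system.

```python
import math

# Hypothetical sketch: a pinch is detected when the thumb tip comes
# within a small distance of another fingertip. Threshold and function
# names are illustrative assumptions, not from the paper.
PINCH_THRESHOLD_MM = 15.0


def detect_pinch(thumb_tip, finger_tips):
    """Return the index of the pinched finger (0 = index finger), or None.

    thumb_tip: (x, y, z) in millimeters; finger_tips: list of (x, y, z).
    """
    for i, tip in enumerate(finger_tips):
        if math.dist(thumb_tip, tip) < PINCH_THRESHOLD_MM:
            return i
    return None


# Each thumb-to-finger pinch could invoke a distinct single-purpose
# function (hypothetical mapping for illustration):
FUNCTIONS = ["check_time", "next_track", "read_message", "dismiss"]

thumb = (0.0, 0.0, 400.0)
tips = [
    (10.0, 4.0, 402.0),   # index finger: ~11 mm away -> pinch
    (60.0, 10.0, 410.0),  # middle
    (80.0, 15.0, 415.0),  # ring
    (95.0, 20.0, 420.0),  # pinky
]

finger = detect_pinch(thumb, tips)
if finger is not None:
    print(FUNCTIONS[finger])  # prints "check_time"
```

A real recognizer would also have to segment the hand from the depth image and track fingertips over time; the distance test above only captures the final classification step.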
Our design minimizes interference with the user’s primary task by (1) building on a large number of single-purpose gestures, thereby avoiding modes and menus, (2) offering tactile feedback and eyes-free use for most interactions through pinching, and (3) requiring only a single hand, allowing the other hand to stay on the primary task at all times.