UI design: Don't throw out the baby with the bathwater
At the recent 2013 CES and at earlier ESC DESIGN shows, I have seen the radical impact new human-machine interface technologies are having on the way humans interact with their computers. This shift is occurring not only in mobile smartphones, wireless tablets, and even desktop PCs, but also in a variety of embedded devices in consumer, industrial, and automotive applications.
These new UI technologies run the gamut: capacitive-switch-based sensors; 3D, gesture, and hover sensing; MEMS-based accelerometers; and vision-based systems that can see and understand.
The flood of new UI alternatives available to the developer has both upsides and downsides. On the positive side, the new user interface alternatives designed to supplement or replace the traditional graphical user interface are, in some cases, making computer-based systems much more accessible. On the negative side, many device designers seem to have adopted a "throw out the baby with the bathwater" strategy in their enthusiasm for the new user interface alternatives.
Without taking into account what has been learned about user habits and preferences in the decades that the traditional GUI has been in use, they have moved willy-nilly to the new HMI alternatives, often with disastrous results.
The result of ignoring the basic principles of user interface design learned in the era when graphical UIs were the main mechanism for such interaction is, as far as I can see, confusion and rejection. Using one of these new gesture- or hover-enabled devices is very much like being asked to join an elite secret society and then having to learn a unique set of "secret handshakes" to gain access to the society's meetings or to talk to fellow members.