The computer mouse – next generation?

In my lecture on user interface engineering I start out with a short history of human-computer interaction. I like to discuss ideas and inventions in the context of the people behind them; among others I talk about Vannevar Bush and his vision of information processing [1], Ivan Sutherland's Sketchpad [2], Doug Engelbart's CSCW demo (including the mouse) [3], and Alan Kay's vision of the Dynabook [4].

One reason for looking at the history is to better understand the future of interaction with computers. One typical question I ask in class is "what is the ultimate user interface?", and typical answers are "a direct interface to my brain – the computer will do what I think" and "mouse and keyboard" – both answers showing some insight…

As the mouse is still a very important input device (and will probably remain one for some time to come), there is a recent paper that I find really interesting. It looks at how the mouse could be enhanced – Nicolas Villar and his colleagues put together a great many ideas [5]. The paper is worth reading – but if you don't have time, at least watch the video on YouTube.

[1] Vannevar Bush. As We May Think. The Atlantic Monthly, July 1945.
[2] Ivan Sutherland. Sketchpad: A Man-Machine Graphical Communication System. Technical Report No. 296, Lincoln Laboratory, Massachusetts Institute of Technology, via Defense Technical Information Center, January 1963. (PDF, YouTube)
[3] Douglas Engelbart. The demo, 1968. (Overview, YouTube)
[4] John Lees. The World In Your Own Notebook (Alan Kay's Dynabook project at Xerox PARC). The Best of Creative Computing, Volume 3, 1980.
[5] Villar, N., Izadi, S., Rosenfeld, D., Benko, H., Helmes, J., Westhues, J., Hodges, S., Ofek, E., Butler, A., Cao, X., and Chen, B. 2009. Mouse 2.0: multi-touch meets the mouse. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology (Victoria, BC, Canada, October 4–7, 2009). UIST '09. ACM, New York, NY, 33-42. DOI= http://doi.acm.org/10.1145/1622176.1622184

More surface interaction using audio: Scratch input

After my talk at the Minerva School, Roy Weinberg pointed me to a paper by Chris Harrison and Scott Hudson [1] – it also uses audio for creating an interactive surface. The novelty on the technical side is limited, but the approach is nevertheless interesting and appealing because of its simplicity and its potential (e.g. just think beyond a fingernail on a table to any contact movement on surfaces – pushing toy cars, walking, pushing a shopping trolley…). Perhaps, by taking a closer look at this approach, a generic location system could be created (e.g. using special shoe soles that make a certain noise).
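To make the basic idea concrete, here is a minimal sketch of how such scratch detection could work in principle: band-pass the microphone signal around the high frequencies a fingernail scratch produces and count amplitude bursts. This is my own illustration (assuming NumPy and SciPy), not the authors' implementation; the sample rate, cutoff frequencies and thresholds are guesses one would have to tune.

```python
# A minimal sketch of the scratch-detection idea, not the paper's actual
# implementation. Cutoffs and thresholds are illustrative guesses.
import numpy as np
from scipy.signal import butter, sosfilt

SAMPLE_RATE = 44100  # Hz, typical sound-card rate

def scratch_bursts(samples, low_hz=2000.0, high_hz=8000.0,
                   threshold=0.05, min_gap=0.05):
    """Return start times (in seconds) of scratch-like bursts in a mono signal."""
    # Isolate the frequency band where scratch energy is expected.
    sos = butter(4, [low_hz, high_hz], btype="bandpass",
                 fs=SAMPLE_RATE, output="sos")
    band = sosfilt(sos, samples)
    # Simple envelope: rectify and smooth over a 10 ms window.
    window = int(0.010 * SAMPLE_RATE)
    envelope = np.convolve(np.abs(band), np.ones(window) / window, "same")
    # A burst starts where the envelope crosses the threshold upwards;
    # ignore re-triggers closer than min_gap seconds.
    starts, last = [], -min_gap
    above = envelope > threshold
    for i in np.flatnonzero(above[1:] & ~above[:-1]):
        t = i / SAMPLE_RATE
        if t - last >= min_gap:
            starts.append(t)
            last = t
    return starts

# E.g. two short bursts could be read as a "double scratch" gesture,
# one long burst as a stroke along the surface.
```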

There is a YouTube video: http://www.youtube.com/watch?v=2E8vsQB4pug

Besides his studies, Roy develops software for the Symbian platform, and he sells a set of interesting applications.

[1] Harrison, C. and Hudson, S. E. 2008. Scratch input: creating large, inexpensive, unpowered and mobile finger input surfaces. In Proceedings of the 21st Annual ACM Symposium on User Interface Software and Technology (Monterey, CA, USA, October 19–22, 2008). UIST '08. ACM, New York, NY, 205-208. DOI= http://doi.acm.org/10.1145/1449715.1449747

Interesting interaction devices

Looking at interesting and novel interaction devices that would be challenging for students to classify (e.g. in the table suggested by Card et al. 1991 [1]), I came across some pretty unusual devices. Probably not really useful for an exam, but perhaps for discussion in class next year… (a toy sketch of such a classification follows the list below)

Ever wanted to rearrange the keys on your keyboard? The ErgoDex DX1 is a set of 25 keys that can be arranged on a surface to create a specific input device. It would be cool if the device could also sense which key is where – that would make re-arranging part of the interaction process. In some sense it is similar to Nic Villar's VoodooIO [2].
Wearable computing is not dead – here is some proof 😉 Jenny LC Chowdhury presents Intimate Controllers – basically touch-sensitive underwear (a bra and briefs). Have a look at the web page or the video on YouTube.
What are the keyboards of the future? Is each key a display? Or is the whole keyboard a screen? I think there is too much focus on the visual and too little on the haptic – perhaps it could be interesting to have keys that change shape and whose tactile properties can be programmed…
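To give an impression of what the classification exercise could look like, here is a toy sketch loosely in the spirit of the Card et al. design space [1]: each device is modelled as a set of (property, axis) pairs, with property abbreviations following the paper (P = absolute position, dP = movement, F = force, R = angle). The concrete placements below are my own rough reading, not taken from the paper, and the self-locating ErgoDex is of course hypothetical.

```python
# Toy model: a device is a set of (property sensed, axis) pairs.
# Abbreviations as in Card et al. [1]: P = absolute position,
# dP = movement (relative position), F = force, R = angle.
# The placements are my own rough reading, not taken from the paper.
DEVICES = {
    "mouse":              {("dP", "X"), ("dP", "Y")},  # relative 2D motion
    "isometric joystick": {("F", "X"), ("F", "Y")},    # senses force, not travel
    "rotary knob":        {("R", "rZ")},               # absolute angle
    # The ErgoDex DX1 as sold: 25 discrete keys (binary position sensors).
    "ErgoDex DX1":        {("P", "key")},
    # The hypothetical self-locating version discussed above would in
    # addition report each key's absolute position on the surface:
    "ErgoDex DX1 + key location": {("P", "key"), ("P", "X"), ("P", "Y")},
}

for name, cells in DEVICES.items():
    print(f"{name:28s} -> {sorted(cells)}")
```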
[1] Card, S. K., Mackinlay, J. D., and Robertson, G. G. 1991. A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9, 2 (Apr. 1991), 99-122. DOI= http://doi.acm.org/10.1145/123078.128726 
[2] Villar, N., Gilleade, K. M., Ramduny-Ellis, D., and Gellersen, H. 2007. The VoodooIO gaming kit: a real-time adaptable gaming controller. Comput. Entertain. 5, 3 (Jul. 2007), 7. DOI= http://doi.acm.org/10.1145/1316511.1316518

Why can I not rotate my windows on my Vista Desktop?

In the User Interface Engineering lecture we discussed input devices today, especially for interacting with 3D environments. In 3D environments, having 6 degrees of freedom (3 axes of translation and 3 of rotation) appears very natural. Looking back at 2D user interfaces with this in mind, one has to ask why we have been happy (and now for more than 25 years) with translation (in 2D) only – and more specifically, why it is not possible to rotate my application windows in Vista (or perhaps it is and I just don't know it). At first this question seems like a joke, but if you think more about it there could be interesting implications (perhaps with a little more thinking than this sketch 😉).
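As a small thought experiment, here is what rotated windows would minimally require from the window system: transform the window's corners for drawing, and invert the rotation to map pointer events back into window-local coordinates for hit testing. This is just plain 2D geometry in Python; nothing here is Vista-specific, and the numbers are made up.

```python
# A minimal sketch of the geometry behind "rotated windows":
# rotate corners for rendering, un-rotate pointer events for hit testing.
import math

def rotate(point, center, angle):
    """Rotate a 2D point around a center by angle (radians)."""
    x, y = point[0] - center[0], point[1] - center[1]
    c, s = math.cos(angle), math.sin(angle)
    return (center[0] + c * x - s * y, center[1] + s * x + c * y)

# A 400x300 window at (100, 100), rotated 15 degrees around its center.
corners = [(100, 100), (500, 100), (500, 400), (100, 400)]
center = (300, 250)
angle = math.radians(15)

drawn = [rotate(p, center, angle) for p in corners]  # corners for rendering
click = (320, 260)                                   # a pointer event on screen
local = rotate(click, center, -angle)                # back into window space
print(drawn, local)
```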

Obviously people have implemented desktops with more than 2D, and here is the link to the video on Project Looking Glass discussed in the lecture (if you are bored by the Sun sales story, just skip to 2:20): http://de.youtube.com/watch?v=JXv8VlpoK_g
It seems you can have it on Ubuntu, too: http://de.youtube.com/watch?v=EjQ4Nza34ak