It is better to look beautiful… Aesthetics and HCI

During the summer school in Haifa, Prof. Noam Tractinsky from Ben-Gurion University of the Negev gave a presentation about aesthetics in Human-Computer Interaction. It was good to meet him in person and get some more insight into his work – as I typically refer to it in my HCI class.


In short, his findings can be summarized as: What is Beautiful is Usable [1], [2]. In his talk he had some interesting examples – you can look at a web page for only one second and you will already have formed a judgment of whether it is a good design or not. Previous work in Japan [3] produced similar results – suggesting that this effect may be universal. Methodologically, I think the research approaches are not straightforward and may be disputed in parts – but the basic findings are very intuitive and should be taken into account more.

[1] Tractinsky, N. 1997. Aesthetics and apparent usability: empirically assessing cultural and methodological issues. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Atlanta, Georgia, United States, March 22 – 27, 1997). S. Pemberton, Ed. CHI ’97. ACM, New York, NY, 115-122. DOI= http://doi.acm.org/10.1145/258549.258626

[2] Tractinsky, N., Shoval-Katz A. and Ikar, D. (2000) What is Beautiful is Usable. Interacting with Computers, 13(2): 127-145.

[3] Kurosu, M. and Kashimura, K. 1995. Apparent usability vs. inherent usability: experimental analysis on the determinants of the apparent usability. In Conference Companion on Human Factors in Computing Systems (Denver, Colorado, United States, May 07 – 11, 1995). I. Katz, R. Mack, and L. Marks, Eds. CHI ’95. ACM, New York, NY, 292-293. DOI= http://doi.acm.org/10.1145/223355.223680

History and Future of Computing and Interaction

Today I was teaching my class on user interface engineering; we covered a selected history of HCI and at the same time looked at a potential future. We discussed how user interfaces evolved and where UI revolutions have happened. To my question "What is the ultimate user interface?" I got three very interesting answers: (1) a keyboard, (2) mind reading, and (3) a system that anticipates what I want.
With regard to the history of HCI, one of my favorite texts is the PhD dissertation of Ivan Sutherland [1]. The work described was done in 1960-1963, when the idea of personal computing was very far from mainstream. Even just browsing some of the pages gives an impression of the impact the work had…
For future user interfaces we talked about brain-computer interfaces (BCI) and how much they differ from the idea of mind reading. I came across a game controller – the Mindlink – developed by Atari in 1984 but never released [2]. It traded on the notion of linking to the mind, but in fact it only measured muscle activity above the eyebrows and apparently did not perform very well. However, a new round of such devices is coming up; see [3] for a critical article on consumer BCI.
On the fun side, I found a number of older videos that look at past predictions of future technology – see the videos for yourself:
http://www.paleofuture.com is a site that has an amazing (and largely funny) selection of predictions. There is a more serious – but nevertheless very entertaining – article on predictions for computing and ICT by Friedemann Mattern: Hundert Jahre Zukunft – Visionen zum Computer- und Informationszeitalter (a hundred years of future – visions of the computing and information age) [4].
[1] Ivan Sutherland. Sketchpad: A Man-Machine Graphical Communication System. PhD thesis, 1963. http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-574.pdf
[3] Emmet Cole. Direct Brain-to-Game Interface Worries Scientists. Wired. 09.05.07. http://www.wired.com/medtech/health/news/2007/09/bci_games
[4] Friedemann Mattern. Hundert Jahre Zukunft – Visionen zum Computer- und Informationszeitalter. Die Informatisierung des Alltags – Leben in smarten Umgebungen, Springer Verlag 2007. http://www.vs.inf.ethz.ch/publ/papers/mattern2007-zukunft.pdf

MobileHCI 2008 Tutorial

The conference on mobile human-computer interaction (MobileHCI 2008) started today in Amsterdam with the tutorial and workshop day.

I am chairing the tutorials, and we tried a new approach: six sessions/chapters that together make up an introduction to mobile HCI. After 10 years of MobileHCI it seems important to help new members of the community quickly learn about the field. The presentations were given by experts in the field, who had one hour each for their topics. We had unexpectedly high attendance (the room with 100 seats was nearly always full). Have a look at the slides:

Text input for mobile devices by Scott MacKenzie
Scott gave an overview of different input means (e.g. key-based, stylus, predictive, virtual keyboard), parameters relevant for designing and assessing mobile text input (e.g., writing speed, cognitive load) and issues related to the context of use (e.g., walking/standing).
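
Speaking of writing speed: the standard measure in text-entry studies is entry speed in words per minute, with a "word" defined as five characters including spaces. A minimal sketch of that convention (the five-character word and the habit of not counting the first character, since timing starts with it, are the usual conventions in this literature, not something specific to this tutorial):

```python
# Entry speed in words per minute (WPM): a "word" is five characters,
# and the first character is not counted because timing starts with it.
def words_per_minute(transcribed: str, seconds: float) -> float:
    return ((len(transcribed) - 1) / seconds) * 60 / 5

# Example: a 31-character phrase transcribed in 12 seconds -> 30.0 WPM.
print(words_per_minute("the quick brown fox jumps over!", 12.0))
```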

Mobile GUIs and Mobile Visualization by Patrick Baudisch

Patrick introduced input and output options for mobile devices. He also covered the design process, prototyping and assessment of user interfaces, trade-offs related to the design of mobile GUIs, and different possible interaction styles.

Understanding Mobile User Experience by Mirjana Spasojevic
Mirjana discussed different means for studying mobile user needs and evaluating the user experience. This included exploratory studies and formal evaluations (in the lab vs. in the field), including longitudinal pilot deployments. The lecture discussed traditional HCI methods of user research and how they need to be adapted for different mobile contexts and products.

Context-Aware Communication and Interaction by Albrecht Schmidt
Albrecht gave an overview of work in context-awareness and activity recognition related to mobile HCI. He discussed how sharing context in communication applications can improve the user experience. The lecture explained how perception and sensing can be used to acquire context and activity information and showed examples of how such information can be exploited.
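
To give a flavour of what sensing for activity information can mean in the simplest case, here is a toy sketch (my illustration, not a system from the lecture): classifying a short window of accelerometer readings as walking or stationary by their variance. Real recognizers use richer features and trained classifiers, but the pipeline of sense, extract a feature, map to a context label is the same.

```python
import statistics

def classify(window, threshold=0.5):
    """window: accelerometer magnitudes in g over a second or two."""
    return "walking" if statistics.pvariance(window) > threshold else "stationary"

print(classify([1.0, 1.02, 0.98, 1.01, 0.99]))    # -> stationary
print(classify([0.2, 1.9, 0.4, 2.1, 0.3, 1.8]))   # -> walking
```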

Haptics, audio output and sensor input in mobile HCI by Stephen Brewster
Stephen discussed the design space for haptics, audio output as well as sensor and gesture input in mobile HCI. Furthermore he assessed resulting interaction methods and implications for the interactive experience.

Camera-based interaction and interaction with public displays by Michael Rohs
Michael introduced camera-based interaction with mobile devices; this included an assessment of optical markers, 2D barcodes and optical flow, as well as techniques related to augmented reality. In this context he also addressed interaction with public displays.
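
To illustrate the optical-flow part: the basic trick is to estimate how the camera image moves between frames and map the dominant direction to an interaction. A hedged sketch using OpenCV's Farneback flow (my own toy example, not Michael's implementation; the sign conventions for camera vs. scene motion are glossed over):

```python
import cv2
import numpy as np

def dominant_motion(prev_gray, curr_gray):
    # Dense optical flow between two grayscale frames.
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    dx, dy = flow[..., 0].mean(), flow[..., 1].mean()
    if abs(dx) > abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"

# Synthetic test: a smooth random texture shifted 5 px to the right.
prev = cv2.GaussianBlur(
    (np.random.rand(120, 160) * 255).astype(np.uint8), (9, 9), 0)
curr = np.roll(prev, 5, axis=1)
print(dominant_motion(prev, curr))  # -> "right"
```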

You can also download the complete tutorial including all 6 chapters in a single PDF file (16MB).

Paul presented our paper at Pervasive 2008

After lunch, Paul presented our full paper on a development approach and environment for mobile applications that supports underlying user models [1]. In the paper he shows how you can create applications by programming by example, while the development environment automatically builds a Keystroke-Level Model (KLM); in this way the developer automatically becomes aware of the estimated usage times for the application. The paper builds on our work on KLM for physical mobile interaction, which was presented last year at CHI [2]. The underlying technology is the Embedded Interaction Toolkit [3] – have a look, perhaps it makes your applications easier, too.
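
The KLM idea itself is easy to illustrate: a task is modelled as a sequence of elementary operators, each with an empirically determined time, and the predicted task time is their sum. A minimal sketch with the classic desktop operator times from Card, Moran and Newell (the paper extends the operator set for physical mobile interaction, which is not reproduced here):

```python
# Classic Keystroke-Level Model operator times in seconds.
OPERATOR_TIMES = {
    "K": 0.28,  # keystroke / button press (average skilled typist)
    "P": 1.10,  # pointing at a target
    "H": 0.40,  # homing the hand between devices
    "M": 1.35,  # mental act of preparation
}

def klm_estimate(sequence):
    """Predicted task time in seconds for a list of operators."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Example: think, point at an item, then press two keys.
print(klm_estimate(["M", "P", "K", "K"]))  # -> 3.01 seconds
```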

[1] Paul Holleis, Albrecht Schmidt: MAKEIT: Integrate User Interaction Times in the Design Process of Mobile Applications. In: Proceedings of the Sixth International Conference on Pervasive Computing, Pervasive '08. Sydney, Australia, 2008, pp. 56-74.

[2] Holleis, P.; Otto, F.; Hußmann, H.; Schmidt, A.: Keystroke-Level Model for Advanced Mobile Phone Interaction. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI '07. ACM Press, New York, NY, 1505-1514. 2007.

Meeting on Human Centred Computing in Vienna

Ina Wagner, Volker Wulf and Kjeld Schmidt organized a meeting to bring together people from all over Europe who work on human centred computing. We had interesting discussions about what is specific and distinct about European human centred computing and how well it is represented in organizations such as the ACM.

Some years ago there was significant support for this area of research on a European level – namely i3 and the Disappearing Computer initiative. Currently many of us feel that the value of user centred research is not supported enough, and hence innovation happens elsewhere, which can lead to massive disadvantages for European industries. One central issue is that we need to communicate the value of user interface research better.

The need for user interface research is undoubtedly accepted. One example is the ISTAG report of 2001, which tried to look ahead to 2010 – a future that is now not far away anymore. Looking at the challenges stated in this report, it becomes clear that most of the technical issues are solved, but this has not led to a breakthrough with regard to the visionary scenarios. Towards the challenge of "natural interfaces" in particular, we still have a long way to go. If we really want to get closer to those scenarios of ambient intelligence that are human friendly, we really have to push on interaction and user interfaces – hopefully decision makers on a European level will get it 😉

Guest course at the University of Linz, MSc Pervasive Computing

I am teaching a guest course at the University of Linz in the Pervasive Computing master program. The topic is Unconventional User Interaction – User Interfaces in a Pervasive Computing World (http://www.ubicomp.net/uui). Today we started with an introduction motivating how pervasive computing changes human-computer interaction. I am already looking forward to the projects!

At dinner I learned why you can never have enough forks in a good restaurant: in case you lose the stylus for your mobile phone, a fork will do… The topic of the lecture is everywhere!

Car UIs Convey Emotion

It is often discussed whether or not the user interface in the car matters. The basic argument is that cars are emotional, hence the driving experience matters and everything else is secondary.

However, it seems the user interface is becoming more and more a part of that experience. On Saturday night I travelled via Munich to Innsbruck – and again had some time in the lounge at the railway station. In an article on the new VW concept car UP it was interesting to see that about 15% of the text (about 20 of the article's 130 lines) was about the new touch-screen user interface – the section about the engine was similar in length.

The opening keynote at Ubicomp was given by Antonio Calvosa from Ferrari. Here, too, a strong focus on experience and user interfaces could be seen. The talk touched on emotion and affective aspects. Overall he argued that Ubicomp technologies should amplify what humans like to perceive.

Our Papers at Interact 2007

Heiko Drewes and Richard Atterer, colleagues from the University of Munich, have travelled to Interact 2007. Their emails indicate that the conference is at a most interesting place this year: Rio de Janeiro, directly at the Copacabana. The conference was highly competitive, and we are happy to have two papers to present there.

Heiko presents a paper showing that eye gestures can be used to interact with a computer. In his experiments he shows that users can learn gestures with their eyes (basically moving the eyes in a certain pattern, e.g. following the outline of a dialog box). The paper is part of his PhD research on eye-tracking for interaction. More details are in:

Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
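
To make the idea concrete, here is a hedged sketch of how such a recognizer could work (my illustration, not the algorithm from the paper): quantize the gaze movement into eight stroke directions, collapse repeats, and match the resulting string against a small gesture vocabulary.

```python
import math

MIN_STROKE = 40  # pixels; smaller movements are treated as fixation jitter
DIRECTIONS = ["R", "UR", "U", "UL", "L", "DL", "D", "DR"]  # 8-way compass

def to_strokes(points):
    """Convert a gaze trace [(x, y), ...] into a direction string."""
    out, last = [], points[0]
    for x, y in points[1:]:
        dx, dy = x - last[0], y - last[1]
        if math.hypot(dx, dy) < MIN_STROKE:
            continue  # still within jitter, keep accumulating movement
        index = round(math.atan2(-dy, dx) / (math.pi / 4)) % 8  # y grows downwards
        d = DIRECTIONS[index]
        if not out or out[-1] != d:  # collapse repeated strokes
            out.append(d)
        last = (x, y)
    return " ".join(out)

# Hypothetical vocabulary: following a dialog box outline clockwise.
GESTURES = {"R D L U": "close_dialog"}
trace = [(0, 0), (60, 0), (120, 5), (120, 60), (60, 60), (0, 55), (0, 0)]
print(GESTURES.get(to_strokes(trace), "no gesture"))  # -> close_dialog
```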

Richard's paper is on collaboration support with a proxy-based approach. Building on our previous work on UsaProxy, we extended its functionality to support synchronous collaboration while using the Web:

Richard Atterer, Albrecht Schmidt, and Monika Wnuk. A Proxy-Based Infrastructure for Web Application Sharing and Remote Collaboration on Web Pages. Proceedings of INTERACT 2007.
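
The proxy-based trick is nicely simple at its core: since all pages pass through the proxy, it can inject JavaScript into every page, and that script can then report and replay interactions for a shared session. A toy illustration of the injection step (my own sketch, not UsaProxy; plain HTTP GET only, no error handling, and collab.js and the port are made-up placeholders):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Script tag injected into every proxied page; the URL is a placeholder.
INJECT = b'<script src="http://proxy.example.org/collab.js"></script>'

class InjectingProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # A browser configured to use a proxy sends the full URL in the request.
        with urlopen(self.path) as upstream:
            body = upstream.read()
        body = body.replace(b"</body>", INJECT + b"</body>", 1)
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), InjectingProxy).serve_forever()
```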

Tico Ballagas defended his PhD in Aachen – new insights on Fitts' law

Today I finally got around to visiting Jan Borchers (Media Computing Group at RWTH Aachen). Tico Ballagas gave a public talk as part of his PhD defence, and I took the chance to go there.

There were new parts in the talk on the impact of the selection space resolution on Fitts' law that I had not seen in his work before. It was published in 2006 as a technical report (Rafael Ballagas and Jan Borchers. Selexels: a Conceptual Framework for Pointing Devices with Low Expressiveness. Technical Report AIB-2006-16, RWTH Aachen, Dec 2006), which is worth a look. This could be very interesting and relevant for the work Heiko Drewes does on eye-gaze interaction. Discriminating between input and output space for the index of difficulty could help to better understand the impact of the errors that we see in eye-gaze interaction.
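
For the index of difficulty, the usual Shannon formulation is ID = log2(D/W + 1). My reading of the selexels idea, sketched below with made-up numbers: if the input device can only address a limited number of selection elements ("selexels") across the display, the effective target width in motor space cannot be finer than one selexel, which caps the difficulty the device can actually express.

```python
import math

def index_of_difficulty(distance, width):
    """Fitts' ID in bits (Shannon formulation), same units for both."""
    return math.log2(distance / width + 1)

def effective_id(distance_px, width_px, display_px, n_selexels):
    """ID when input resolution limits the usable target width (illustrative)."""
    selexel_px = display_px / n_selexels   # size of one addressable unit
    w_eff = max(width_px, selexel_px)      # pointing cannot be finer than this
    return index_of_difficulty(distance_px, w_eff)

# A 10 px target at 500 px distance on a 1000 px wide display:
print(index_of_difficulty(500, 10))     # ~5.67 bits with ideal input
print(effective_id(500, 10, 1000, 20))  # ~3.46 bits with only 20 selexels
```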

One part of Tico's research was concerned with defining a design space for input devices. This is partly described in a paper in IEEE Pervasive Computing magazine: Ballagas, R., Borchers, J., Rohs, M., Sheridan, J.G. The Smart Phone: A Ubiquitous Input Device. IEEE Pervasive Computing 5(1), 70-77, 2006.