Andreas Riener defends his PhD in Linz

After a stop-over in Stansted/Cambridge for the TEI conference I was today in Linz, Austria, as external examiner for the PhD defense of Andreas Riener. He did his PhD with Alois Ferscha and worked on implicit interaction in the car. The number and scale of experiments he did is impressive, and he has two central results: (1) using tactile output in the car can really improve car-to-driver communication and reduce reaction time, and (2) by sensing the force pattern a body creates on the seat, driving-related activities can be detected and to some extent driver identification can be performed. For more details it makes sense to have a look into the thesis 😉 If you mail Andreas he will probably send you the PDF…
One of the basic assumptions of the work was to use implicit interaction (on input and output) to lower the cognitive load while driving – which is definitely a valid approach. Recently, however, we have also discussed the issues that arise when the cognitive load for drivers is too low (e.g. due to assistive systems in the car such as ACC and lane keeping assistance). There is an interesting phenomenon, the Yerkes-Dodson law (see [1]), that provides the foundation for this. Basically, as the car provides more sophisticated functionality and demands less attention from the driver, risk increases because the basic activation of the driver is lower. Here I think looking into multimodality to activate the driver more quickly in situations where he or she is required to take over responsibility could be interesting – perhaps we find a student interested in this topic.
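The inverted-U relation behind the Yerkes-Dodson law can be made concrete with a small numerical sketch. The Gaussian shape and all parameter values below are my own illustrative assumptions, not anything from the original 1908 publication:

```python
import math

def performance(arousal, optimum=0.5, width=0.25):
    # Illustrative inverted-U: performance peaks at a moderate arousal
    # level and falls off towards both extremes. The Gaussian form and
    # the parameter values are assumptions for illustration only.
    return math.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

# An under-activated driver (assistive systems doing most of the work)
# performs worse than a moderately activated one:
relaxed = performance(0.15)   # low arousal, e.g. ACC + lane keeping active
alert = performance(0.5)      # moderate arousal, manual driving
```

The point of the sketch is only the shape: both too little and too much activation degrade performance, which is why relieving the driver of all tasks is not automatically safer.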
[1] http://en.wikipedia.org/wiki/Yerkes-Dodson_law (there is a link to the 1908 publication by Yerkes and Dodson)

Demo day at TEI in Cambridge

What is a simple and cheap way to get from Saarbrücken to Linz? It’s not really obvious, but going via Stansted/Cambridge makes sense – especially when the conference on Tangible and Embedded Interaction (www.tei-conf.org) is on and Ryanair offers €10 flights (not sure about sustainability though). Sustainability, from a different perspective, was also at the center of the Monday keynote by Tom Igoe, which I missed.

Nicolas and Sharam did a great job, and the choice to do a full day of demos worked out great. The large set of interactive demos presented captures and communicates a lot of the spirit of the community. To get an overview of the demos one has to read through the proceedings (I will post a link as soon as they are online in the ACM DL) as there are too many to discuss here.
Nevertheless here is my random pick:
One big topic is tangible interaction on surfaces. Several examples showed how interactive surfaces can be combined with physical artifacts to make interaction more graspable. Jan Borchers’s group showed a table with passive controls that are recognized when placed on the table and provide tangible means for interaction (e.g. keyboard keys, knobs, etc.). An interesting effect is that the labeling of the controls can be done dynamically.
Microsoft Research showed an impressive novel tabletop display that allows two images to be projected – one on the interactive surface and one on the objects above it [1]. It was presented at last year’s UIST, but I have now tried it out for the first time – and it is a stunning effect. Have a look at the paper (and before you read the details, make a guess how it is implemented – at the demo most people guessed wrong 😉)
Embedding sensing into artifacts to create a digital representation has always been a topic in tangible interaction – even back to the early work of Hiroshi Ishii on Triangles [2]. One interesting example in this year’s demos was a set of cardboard pieces held together by hinges. Each hinge is technically realized as a potentiometer, and by measuring the positions the structure can be determined. It is really interesting to think this idea further.
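To make the sensing idea concrete, here is a minimal sketch of how hinge potentiometer readings could be turned into a reconstructed fold geometry. The ADC range, the 270° pot rotation, and the flat 2D chain model are my assumptions; the actual demo may work quite differently:

```python
import math

def adc_to_angle(raw, adc_max=1023, angle_range=270.0):
    # Map a raw potentiometer reading to a hinge angle in degrees.
    # A 10-bit ADC and a 270-degree rotation range are typical specs,
    # assumed here for illustration.
    return raw / adc_max * angle_range

def chain_positions(hinge_angles_deg, segment_len=1.0):
    # Treat the cardboard pieces as a 2D chain of equal-length segments;
    # each hinge angle bends the chain, which yields every joint position
    # and thereby the overall structure.
    x, y, heading = 0.0, 0.0, 0.0
    points = [(x, y)]
    for a in hinge_angles_deg:
        heading += math.radians(180.0 - a)  # 180 degrees = flat, no bend
        x += segment_len * math.cos(heading)
        y += segment_len * math.sin(heading)
        points.append((x, y))
    return points

# Fully flat readings (all hinges at 180 degrees) give a straight chain:
flat = chain_positions([180.0, 180.0, 180.0])
```

Reading all hinges in sequence like this is enough to recover the shape of a chain-like structure; branching pieces would need a per-hinge topology description on top.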
Conferences like TEI inevitably make you think about the feasibility of programmable matter – and there is ongoing work on this in the robotics community. The idea is to create micro-robots that can form arbitrary shapes – for a starting point see the work at CMU on Claytronics.
[1] Izadi, S., Hodges, S., Taylor, S., Rosenfeld, D., Villar, N., Butler, A., and Westhues, J. 2008. Going beyond the display: a surface technology with an electronically switchable diffuser. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1449715.1449760
[2] Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 – 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56. DOI= http://doi.acm.org/10.1145/274644.274652

Voice interaction – Perhaps it works …

Today we visited Christian Müller at DFKI in Saarbrücken. He organized a workshop on automotive user interfaces at IUI last week. My talk was on new directions for user interfaces, in particular arguing for a broad view on multimodality. We showed some of our recent projects on car user interfaces. Dagmar gave a short overview of CARS, our simulator for evaluating driving performance and driver distraction, and we discussed options for potential extensions as well as shortcomings of the Lane Change Task.
Being a long-time skeptic about voice interfaces, I was surprised to see a convincing demo of a multimodal user interface combining voice and a tactile controller in the car. I think this could really be an interesting option for future interfaces.
Classical voice-only interfaces usually lack basic properties of modern interactive systems, e.g. as stated in Shneiderman’s Golden Rules or in Norman’s action cycle. In particular, the following points are most often not well realized in voice-only systems:
  • State of the system is always visible
  • Interactions with the system provide immediate and appropriate feedback
  • Actions are easily reversible
  • Opportunities for interaction are always visible 
By combining a physical controller with voice, and having at the same time the objects of interaction visible to the user (as part of the physical system that is controlled, e.g. window, seat), these problems are addressed in a very interesting way. I am looking forward to seeing more along these lines – perhaps we should also no longer ignore speech interaction in our projects 😉

Design Ideas and Demos at FH Potsdam

During the workshop last week in Potsdam we got to see demos from students of the Design of Physical and Virtual Interfaces class taught by Reto Wettach and JennyLC Chowdhury. The students had to design a working prototype of an interactive system. As base technology most of them used the Arduino board with some custom-made extensions. For a set of pictures see my photo gallery and the photos on flickr. It would take pages to describe all of the projects, so I picked a few…

The project “Navel” (by Juan Avellanosa, Florian Schulz and Michael Härtel) is a belt with tactile output, similar to [1], [2] and [3]. The first idea along these lines that I tried out was GentleGuide [4] at Mobile HCI 2003 – it seemed quite compelling. The student project proposed one novel application idea: to use it in sports. That is quite interesting and could complement ideas proposed in [5].

Vivien’s favorite was the vibrating doormat: a system where a foot mat is constructed of three vibrating tiles that can be controlled individually so that different vibration patterns can be presented. It was built by Lionel Michel, and he has several ideas about which research questions this could address. I found especially interesting the question whether and how one can induce feelings and emotions with such a system. In the same application context (doormat) another prototype looked at emotions, too: if you stroke or pat this mat it comes out of its hiding place (Roll-o-mat by Bastian Schulz).
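A vibration pattern for three individually controllable tiles can be described as a simple timed sequence. The three-tile layout matches the doormat described above, but the data representation and the level scale below are hypothetical, just to make the idea tangible:

```python
def expand_pattern(steps):
    # Turn a list of (duration_s, (tile0, tile1, tile2)) steps into
    # per-tile timelines of (start_time, level) entries. Levels are
    # assumed to be on a 0.0-1.0 intensity scale.
    timelines = {0: [], 1: [], 2: []}
    t = 0.0
    for duration, levels in steps:
        for tile, level in enumerate(levels):
            timelines[tile].append((t, level))
        t += duration
    return timelines, t  # per-tile timelines plus total pattern length

# A "wave" travelling across the mat, one tile after the other:
wave = [(0.2, (1.0, 0.0, 0.0)),
        (0.2, (0.0, 1.0, 0.0)),
        (0.2, (0.0, 0.0, 1.0))]
timelines, total = expand_pattern(wave)
```

With patterns in this form, the interesting research question becomes which sequences (waves, pulses, random textures) people read as calm, playful, or alarming.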

There were several projects on giving everyday objects more personality (e.g. a talking trashbin by Gerd-Hinnerk Winck) and making them emotionally reactive (e.g. lights that reacted to proximity). Firefly (by Marc Tiedemann) is one example of how reactiveness and motion that is hard to predict can lead to an interesting user experience. The movement appears really similar to a real firefly.

Embedding information has been an important topic in our research over the last years [6] – the demos provided several interesting examples: a cable that visualized energy consumption and a keyboard for leaving messages. I learned of a further example, an idea/patent application where information is included in the object itself – in this case in a tea bag. This is an extreme case, but looking into the future (and assuming that we get sustainable and bio-degradable electronics) I think it indicates an interesting direction and pushes the idea of information at your fingertips (Bill Gates’ keynote in 1994) much further than originally intended.

For more photos see my photo gallery and the photos on flickr.

[1] Tsukada, K. and Yasumura, M.: ActiveBelt: Belt-type Wearable Tactile Display for Directional Navigation. Proceedings of UbiComp 2004, Springer LNCS 3205, pp. 384-399 (2004).

[2] Alois Ferscha et al. Vibro-Tactile Space-Awareness. Video paper, adjunct proceedings of UbiComp 2008.

[3] Heuten, W., Henze, N., Boll, S., and Pielot, M. 2008. Tactile wayfinder: a non-visual support system for wayfinding. In Proceedings of the 5th Nordic Conference on Human-Computer interaction: Building Bridges (Lund, Sweden, October 20 – 22, 2008). NordiCHI ’08, vol. 358. ACM, New York, NY, 172-181. DOI= http://doi.acm.org/10.1145/1463160.1463179

[4] S. Bosman, B. Groenendaal, J.W. Findlater, T. Visser, M. de Graaf & P. Markopoulos. GentleGuide: An exploration of haptic output for indoors pedestrian guidance. Mobile HCI 2003.

[5] Mitchell Page, Andrew Vande Moere: Evaluating a Wearable Display Jersey for Augmenting Team Sports Awareness. Pervasive 2007. 91-108

[6] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop ‚Ubiquitous Display Environments‘, September 2004

Towards interaction that is begreifbar

Since last year we have had a working group in Germany on graspable/tangible interaction in mixed realities.
In German the key term we use is “begreifbar” or “begreifen”, which means to acquire a deep understanding of something; the word’s basic meaning is to touch. Basically, to understand by touching – but in a more fundamental sense than grasping or getting a grip. Hence the list of translations for “begreifen” given in the dictionary is quite long.
Perhaps we should push more for the word in the international community – Towards interaction that is begreifbar (English has too few foreign terms anyway 😉

This meeting was organized by Reto Wettach at Potsdam, and the objective was to have two days to invent things together. The participants mainly came from computer science and design. It is always amazing how many ideas come up if you put 25 people in a room for a day 🙂 This week we followed up on some of the ideas related to new means of communication – there are definitely interesting student projects on this topic.

In the evening we had a half pecha-kucha (each person 10 slides of 20 seconds each – 3:20 in total; the original format is 20 slides), see http://www.pecha-kucha.org/. It is a great way of quickly getting to know the work, research, ideas, and background of other people. It could be a format we use more in teaching, and perhaps for ad-hoc sessions at a new conference we plan (e.g. http://auto-ui.org)… I prepared my slides on the train in the morning – and it is more challenging than expected to get a set of meaningful pictures together for 10 slides.

Overall the workshop showed that there is significant interest and expertise in Germany in moving from software ergonomics to modern human-computer interaction.
There is a new person on our team (starting next week) – perhaps you can spot him on the pics.
For a set of pictures see my photo gallery and the photos on flickr.

Two basic references for interaction beyond the desktop

Following the workshop I got a few questions on what the important papers are that one should read to start on the topic. There are many (search Google Scholar for tangible interaction, physical interaction, etc. and you will see), and there are conferences dedicated to it (e.g. Tangible and Embedded Interaction, TEI – next week in Cambridge).

But if I have to pick two, here is my choice:

[1] Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, xv-xxv. DOI= http://doi.acm.org/10.1145/1347390.1347392

[2] Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 201-210. DOI= http://doi.acm.org/10.1145/1357054.1357089

What happens if Design meets Pervasive Computing?

This morning I met with Claudius Lazzeroni, a colleague from Folkwang Hochschule (they were part of our university until two years ago).
 
They have different study programs in design and art related subjects. He showed me some projects (http://www.shapingthings.net/ – in German but lots of pictures that give you the idea). Many of the ideas and prototypes related to our work and I hope we get some joint projects going. I think it could be really exciting to have projects with design and computer science students – looking forward to this!
When I was in the UK we collaborated in the Equator project with designers – mainly Bill Gaver and his group – and the results were really exciting [1]. We built a table that reacted to load changes on its surface and allowed you to fly virtually over the UK. The paper is worth reading – if you are in a hurry, have a look at the movie about it on youtube: http://www.youtube.com/watch?v=uRKOypmDDBM
There was a further project with a table – a key table – and for this one there is a funnier (and less serious?) video on youtube: http://www.youtube.com/watch?v=y6e_R5q-Uf4
[1] Gaver, W. W., Bowers, J., Boucher, A., Gellerson, H., Pennington, S., Schmidt, A., Steed, A., Villars, N., and Walker, B. 2004. The drift table: designing for ludic engagement. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 885-900. DOI= http://doi.acm.org/10.1145/985921.985947

Interesting interaction devices

Looking at interesting and novel interaction devices that would be challenging for students to classify (e.g. in the table suggested by Card et al. 1991 [1]), I came across some pretty unusual devices. Probably not really useful for an exam, but perhaps for discussion in class next year…

Ever wanted to rearrange the keys on your keyboard? The ErgoDex DX1 is a set of 25 keys that can be arranged on a surface to create a specific input device. It would be cool if the device could also sense which key is where – that would make re-arranging part of the interaction process. In some sense it is similar to Nic Villar’s VoodooIO [2].
Wearable computing is not dead – here is some proof 😉 JennyLC Chowdhury presents intimate controllers – basically touch-sensitive underwear (a bra and briefs). Have a look at the web page or the video on youtube.
What are the keyboards of the future? Is each key a display? Or is the whole keyboard a screen? I think there is too much focus on the visual and too little on the haptic – perhaps it could be interesting to have keys that change shape and whose tactile properties can be programmed…
[1] Card, S. K., Mackinlay, J. D., and Robertson, G. G. 1991. A morphological analysis of the design space of input devices. ACM Trans. Inf. Syst. 9, 2 (Apr. 1991), 99-122. DOI= http://doi.acm.org/10.1145/123078.128726 
[2] VILLAR, N., GILLEADE, K. M., RAMDUNYELLIS, D., and GELLERSEN, H. 2007. The VoodooIO gaming kit: a real-time adaptable gaming controller. Comput. Entertain. 5, 3 (Jul. 2007), 7. DOI= http://doi.acm.org/10.1145/1316511.1316518

Ranking Conferences and Journals – A Down-Under perspective

Like many of us, I am skeptical of rankings (as long as I was not involved in making them 😉 Nevertheless, sometimes they are interesting and helpful in assessing where to publish or what better not to read…

This morning we discussed where to publish some interesting work related to web technology (a follow-up of UsaProx), and for that discussion such a list may have been helpful.
A colleague from Munich sent me the link to an Australian conference ranking, and obviously they have ranked journals, too. They use A+, A, B, L, and C as tiers.
… and as we always knew you cannot be wrong when publishing in Pervasive, Percom, Ubicomp, and CHI 🙂

Technology Review with a Focus on User Interfaces

The February 2009 edition of Technology Review (German version) has its focus on new user interfaces, titled „Streicheln erwünscht“ (translates to stroking/caressing/fondling welcome). It has a set of articles on new ways of interacting multimodally, including tangible user interfaces and tactile communication. The article „Feel me, touch me“ by Gordon Bolduan on page 74 shows a photo of Dagmar’s prototype of a tactile steering wheel. The full paper on the study will be published at Pervasive in May 2009 (so you have to be patient to get the details – or come and visit our lab 😉

In the blog entry of Technology Review introducing the current issue there is a nice anecdote about a literature search on haptic/tactile remote communication (while I was still in Munich) – the final version of the seminar paper (now not X-rated anymore) is „Neue Formen der entfernten Kommunikation“ (new forms of remote communication) by Martin Schrittenloher. He continued with the topic in his MSc project and worked with Morten Fjeld on sliders that give remote feedback, see [1].

Another topic closely related to new forms of communication is exertion interfaces (we looked at the 2002/2003 work of Florian ‚Floyd‘ Mueller in the UIE lecture yesterday – even with the Nintendo Wii around, the work is highly inspiring and impressive, see [2]). The communication example given in Breakout for Two shows the potential of including the whole body in communication tasks. Watching the video is really recommended 🙂
[1] Jenaro, J., Shahrokni, A., Schrittenloher, M., and Fjeld, M. 2007. One-Dimensional Force Feedback Slider: Digital platform. In Proc. Workshop at the IEEE Virtual Reality 2007 Conference: Mixed Reality User Interfaces: Specification, Authoring, Adaptation (MRUI07), 47-51.
[2] Mueller, F., Agamanolis, S., and Picard, R. 2003. Exertion interfaces: sports over a distance for social bonding and fun. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 – 10, 2003). CHI ’03. ACM, New York, NY, 561-568. DOI= http://doi.acm.org/10.1145/642611.642709