Workshop on Automobile User Interfaces

This year, for the second time, we ran a workshop on automobile user interfaces and interactive applications in the car at the German HCI conference: http://automotive.ubisys.org/

In the first session we discussed the use of tactile output and haptics in automotive user interfaces. It appears that there is significant interest in this area at the moment. In particular using haptics as an additional modality creates a lot of opportunities for new interfaces. We had a short discussion about two directions in haptic output: naturalistic haptic output (e.g. line assist that feels like going over the side of the road) vs. generic haptic output (e.g. giving a vibration cue when to turn).

I think the first direction could make an interesting project – how does it naturally feel to drive too fast, to turn the wrong way, to be too close to the car in front of you, etc.?

In a further session we discussed frameworks and concepts for in-car user interfaces. The discussion on the use of context in the interface was very diverse. Some people argued it should only be used in non-critical/optional parts of the UI (e.g. entertainment), as one is never 100% sure that the recognized context is correct. Others argued that context may provide a central advantage, especially in safety-critical systems, as it offers the opportunity to react faster.

In the end it always comes down to the question: to what extent do we want to keep the human in the loop… But looking at Wolfgang’s overview slide it is impressive how much functionality already depends on context…

In the third session we discussed tools and methods for developing and evaluating user interfaces in the car context. Dagmar presented the first version of our CARS system (a simple driving simulator for evaluating UIs) and discussed findings from initial studies [1]. The simulator is based on the jMonkeyEngine game engine and is available as open source on our website [2].

There were several interesting ideas on which topics are really hot in automotive UIs, ranging from interfaces for information gathering in Car-2-Car / Car-2-Environment communication to micro-entertainment while driving.

[1] Dagmar Kern, Marco Müller, Stefan Schneegaß, Lukasz Wolejko-Wolejszo, Albrecht Schmidt. CARS – Configurable Automotive Research Simulator. Automotive User Interfaces and Interactive Applications – AUIIA 08, Workshop at Mensch und Computer 2008, Lübeck, 2008.

[2] https://www.pcuie.uni-due.de/projectwiki/index.php/CARS

PS: In a taxi in Amsterdam the driver had a DVD running while driving – and I am sure this is not a form of entertainment that works well (it is neither fun to watch, nor is it safe or legal).

Implanted Persuasion Technologies

While listening to BJ Fogg, especially on the motivation pairs (in particular instant pleasure and gratification vs. instant pain), I was wondering how long it will take until we talk about and see implantable persuasion technologies. Take the example of obesity – here one could really imagine ways of creating an implant that provides motivation for a certain eating behavior… Would this be ethical?

Thermo-imaging camera at the border – useful for Context-Awareness?

When we re-entered South Korea I saw a guard looking at all arriving people with an infrared camera. It was very hot outside, so people’s heads appeared very red in the image. My assumption is that this is used to spot people who have a fever – however, I could not verify this.

Looking at the images created while people moved around, I realized that this may be an interesting technology to use for many tasks in activity recognition, home health care, and wellness. For several tasks in context-awareness it seems straightforward to get this information from an infrared camera. In the computer vision domain there seem to have been several papers addressing this problem over recent years.

We could think of an interesting project topic related to infrared activity recognition or interaction to be integrated into our new lab… There are probably some fairly cheap thermo-sensing cameras around to use in research – for home-brew use you can find hints on the internet, e.g. how to turn a digital camera into an IR cam – pretty similar to what we did with the webcams for our multi-touch table.

The photo is from http://en.wikipedia.org/wiki/Thermography

Embedded Information – Airport Seoul

When I arrived in Seoul at the airport I saw an interesting instance of embedded information. In Munich we wrote a workshop paper [1] about the concept of embedded information and the key criteria are:

  • Embedding information where and when it is useful
  • Embedding information in the most unobtrusive way
  • Providing information in a way that requires no interaction

Looking at an active computer display (OK, it was broken) that circled the luggage belt (it is designed to list the names of people who should contact the information desk) and a fixed display on a suitcase, I was reminded of this paper. With this set-up, people become aware of the information – without really making an effort. With active displays becoming more ubiquitous I expect more innovation in this domain. We are currently working on some ideas related to situated and embedded displays for advertising – if we find funding we will push this further… the ideas are there.
[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop “Ubiquitous Display Environments”, September 2004.

How to prove that Ubicomp solutions are valid?

Over the last years there have been many workshops and sessions in the ubicomp community that address the evaluation of systems. At Pervasive 2005 in Munich I co-organized a workshop on application-led research with George Coulouris and others. For me, one of the central outcomes was that we – as ubicomp researchers – need to team up with experts in the application domain when evaluating our technologies and solutions, and that we need to stay involved in this part of the research. Just handing it over to the other domain for evaluation will not bring us the insights we need to move the field forward. There is a workshop report, which appeared in IEEE Pervasive Computing magazine, that discusses the topic in more detail [1].

On Friday I met a very interesting expert in the domain of gerontology. Elisabeth Steinhagen-Thiessen is chief consultant and director of the Protestant Geriatric Centre of Berlin and professor of internal medicine/gerontology at the Charité in Berlin. We talked about opportunities for activity recognition in this domain and discussed potential set-ups for studies.

[1] Richard Sharp, Kasim Rehman. What Makes Good Application-led Research? IEEE Pervasive Computing Magazine, Volume 4, Number 3, July–September 2005.

New ways for reducing CO2 in Europe? Impact of pedestrian navigation systems

Arriving this morning in Brussels I was surprised by the length of the queue for taxis. Before seeing the number of people I had considered taking a taxi to the meeting place, as I had some luggage – but after a quick count of the taxi frequency and the number of people in the line I decided to walk in order to make it in time. Then I remembered that some months ago I had a similar experience in Florence, when arriving at the airport for CHI. There I calculated the expected waiting time and chose the bus. Reflecting briefly on this, it seems that this may be a new scheme to promote eco-friendly travel in cities… or why else would there not be enough taxis in a free market?
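The quick count described above amounts to a simple back-of-the-envelope calculation. A minimal sketch (the numbers below are invented for illustration, not the actual counts from Brussels or Florence):

```python
# Compare the expected taxi-queue waiting time against the walking time.
# passengers_per_taxi is an assumed average; all inputs here are illustrative.

def queue_wait_minutes(people_ahead, taxis_per_minute, passengers_per_taxi=1.5):
    """Expected waiting time given queue length and observed taxi frequency."""
    taxis_needed = people_ahead / passengers_per_taxi
    return taxis_needed / taxis_per_minute

def should_walk(people_ahead, taxis_per_minute, walk_minutes):
    """Walk if the expected queue wait exceeds the walking time."""
    return queue_wait_minutes(people_ahead, taxis_per_minute) > walk_minutes

# e.g. 40 people ahead, one taxi every two minutes (0.5/min), 25-minute walk:
# the queue wait is about 53 minutes, so walking wins.
print(should_walk(40, 0.5, 25))  # → True
```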

Reflecting a little longer, I would expect that with upcoming pedestrian navigation systems we may see a shift towards more people walking in the city. My hypothesis (based on minimal observation) is that people often take a taxi or public transport because they have no idea where to walk and how long it would take on foot. If a pedestrian navigation system can reliably offer an estimated time of arrival (which is probably more precise for walking than for driving, as there are no traffic jams) together with directions, the motivation to walk may increase. We should probably put pedestrian navigation systems on our project topic list, as there is still open research on this topic…
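The point about walking ETAs being more predictable than driving ETAs can be sketched in two lines – the walking speed below is an assumed typical value, not measured data:

```python
# Walking ETA is essentially distance over speed: unlike driving,
# there is no traffic-jam term to model, so the estimate is stable.
WALKING_SPEED_KMH = 4.8  # assumed typical adult walking speed

def walking_eta_minutes(route_km, speed_kmh=WALKING_SPEED_KMH):
    """Estimated time of arrival on foot, in minutes."""
    return route_km / speed_kmh * 60

print(round(walking_eta_minutes(2.0)))  # → 25 (a 2 km route takes ~25 min)
```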

Workshop on Smart Homes at Pervasive 2008

Today we had our Pervasive at Home workshop – as part of Pervasive 2008 in Sydney. We had 7 talks and a number of discussions on various topics related to smart homes. Issues ranged from long-term experience with smart home deployments (Lasse Kaila et al.) and the development cycle (Aaron Quigley et al.) to end-user development (Joëlle Coutaz). For the full workshop proceedings see [1].

One trend that can be observed is that researchers are moving beyond the living lab. In the discussion it became apparent that living labs can start research efforts in this area and function as a focal point for researchers with different interests (e.g. technology- and user-centred). However, it was largely agreed that this can only be a first step and that deployments in actual home settings are becoming essential to make an impact.

One central problem in smart home research is how to develop future devices and services when prototyping is based on current technologies and we extrapolate from currently observed user behavior. We had some discussion of how this can be done most effectively and what value observational techniques add to technology research, and vice versa.

We discussed potential options for future smart home deployments, and I suggested creating a hotel where people can experience future living and at the same time agree to give away their data for research purposes. Knowing what theme hotels are already out there, this idea is not as strange as it sounds 😉 Perhaps we have to talk to some companies and propose this idea…

More of the workshop discussion is captured at: http://pervasivehome.pbwiki.com/

There are two interesting references that came up in discussions that I would like to share. The first is the smart home at Duke University (http://www.smarthome.duke.edu/), a dorm that serves as a live-in laboratory – and it seems it is more expensive than the regular dorms. The second is an ambient interactive device that Joëlle Coutaz discussed in the context of her presentation on a new approach to end-user programming and end-user development. The Nabaztag (http://www.nabaztag.com/) is a networked user interface that includes input and output (e.g. text-to-speech, movable ears, and LEDs) and can be programmed. I would be curious how well it really works to get people more connected – which relates to some of our ideas on easy communication channels.

[1] A.J. Brush, Shwetak Patel, Brian Meyers, Albrecht Schmidt (editors). Proceedings of the 1st Workshop on “Pervasive Computing at Home”, held at the 6th International Conference on Pervasive Computing, Sydney, May 19, 2008. http://murx.medien.ifi.lmu.de/~albrecht/pdf/pervasive-at-home-ws-proceedings-2008.pdf

Poor man’s location awareness

Over the last days I have experienced that even very basic location information on the display can already provide a benefit to the user. Being in Sydney for the first time, I realized that the network information on my GSM phone is very reliable for telling me when to get off the bus – obviously it is not fine-grained location information, but so far it has always been within walking distance. At some locations (such as Bondi Beach) visual pattern matching works very well, too 😉 And when to get off the bus seems to be a concern for many people (just extrapolating from the small sample I had over the last days…).
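The "poor man's" scheme boils down to a coarse lookup: the phone already knows which cell it is camped on, and a cell maps to an area of roughly walking-distance size. A minimal sketch – the cell IDs and area names below are invented for illustration:

```python
# Map observed GSM cell IDs to named areas and fire a cue when the
# destination area is reached. Accuracy is only walking-distance level,
# which, as noted above, is often good enough.

CELL_TO_AREA = {  # hypothetical MCC-MNC-CID identifiers
    "505-02-1234": "Circular Quay",
    "505-02-1301": "Bondi Junction",
    "505-02-1422": "Bondi Beach",
}

def area_for_cell(cell_id):
    """Coarse location: the named area associated with the current cell."""
    return CELL_TO_AREA.get(cell_id, "unknown")

def should_get_off(cell_id, destination_area):
    """True when the phone camps on a cell in the destination area."""
    return area_for_cell(cell_id) == destination_area

print(should_get_off("505-02-1422", "Bondi Beach"))  # → True
```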

In the pervasive computing class I am currently teaching, we recently covered different aspects of location-based systems – by the way, a good starting point on the topic is [1] and [2]. We discussed issues related to visual pattern matching – and when looking at the skyline of Sydney one very quickly becomes aware of the potential of this approach (especially with all the tagged pictures on Flickr), but at the same time the complexity of matching from arbitrary locations becomes apparent.

Location awareness offers many interesting questions and challenging problems – it looks like there are ideas for project and thesis topics here, e.g. how semantic location information (even of lower quality) can be beneficial to users, or fingerprinting based on radio/TV broadcast information.

[1] J. Hightower and G. Borriello. Location systems for ubiquitous computing. IEEE Computer, 34(8):57–66, Aug. 2001. http://www.intel-research.net/seattle/pubs/062120021154_45.pdf

[2] Jeffrey Hightower and Gaetano Borriello. Location Sensing Techniques. UW-CSE-01-07-01.

A service for true random numbers

After the exam board meeting at Trinity College in Dublin (I am external examiner for the Ubicomp program) I went back with Mads Haahr (the course director) to his office. Besides the screen on which he works, he has an extra one on which the log entries of his web server are constantly displayed. It is an interesting awareness device 😉 Some years ago we did a project where we used the IP addresses of incoming HTTP requests to guess who the visitors are and to show their web pages on an awareness display [1], [2]. Looking up web visitors this way works very well in an academic context and with requests from larger companies, where one can expect that information is available on the web. Perhaps we should revisit the work and look at how we can push this further given the new possibilities on the web.
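The core of the visitor-awareness idea can be sketched with a reverse DNS lookup plus a naive guess at the organisation's homepage. This is only an illustration of the general technique, not the actual system from [1]; the two-label domain heuristic and the example hostname are assumptions:

```python
# Guess a visitor's organisation from the hostname behind an incoming
# IP address. Works best for universities and larger companies, where
# the reverse DNS entry reflects the organisation's domain.
import socket

def hostname_for_ip(ip):
    """Reverse DNS lookup; returns None when the address does not resolve."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

def guess_org_homepage(hostname):
    """Naive heuristic: take the last two labels as the registrable domain
    and assume the homepage lives at www.<domain>. (Country-code TLDs
    like .co.uk would need a smarter suffix list.)"""
    if not hostname or "." not in hostname:
        return None
    domain = ".".join(hostname.split(".")[-2:])
    return "http://www." + domain

print(guess_org_homepage("proxy3.research.example-corp.com"))
# → http://www.example-corp.com
```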

The web server Mads has in his office is pretty cool – it provides true random numbers, based on atmospheric noise picked up with 3 real radios (I saw them)! Have a look at the service for yourself: www.random.org. It provides an HTTP interface to use those numbers in your own applications. I would not have thought of a web service for providing random numbers – but thinking about it a little more, it makes a lot of sense…
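As a sketch of how the HTTP interface can be used: random.org's `/integers/` endpoint with `format=plain` returns one integer per line, which is trivial to parse (check the site's own documentation for the current parameters and usage guidelines before relying on this):

```python
# Fetch true random integers from random.org's plain-text HTTP interface.
from urllib.parse import urlencode
from urllib.request import urlopen

def integers_url(num, minimum, maximum):
    """Build the request URL for num integers in [minimum, maximum]."""
    params = urlencode({"num": num, "min": minimum, "max": maximum,
                        "col": 1, "base": 10, "format": "plain", "rnd": "new"})
    return "https://www.random.org/integers/?" + params

def parse_plain_response(text):
    """The plain format is simply one integer per line."""
    return [int(line) for line in text.split() if line]

def true_random_integers(num, minimum, maximum):
    """Query the service; note that random.org rate-limits requests."""
    with urlopen(integers_url(num, minimum, maximum)) as resp:
        return parse_plain_response(resp.read().decode("ascii"))
```

Splitting URL building and response parsing out of the network call keeps the logic testable without hitting the (rate-limited) service.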

[1] Schmidt, A. and Gellersen, H. 2001. Visitor awareness in the web. In Proceedings of the 10th international Conference on World Wide Web (Hong Kong, Hong Kong, May 01 – 05, 2001). WWW ’01. ACM, New York, NY, 745-753. DOI= http://doi.acm.org/10.1145/371920.372194

[2] Gellersen, H. and Schmidt, A. 2002. Look who’s visiting: supporting visitor awareness in the web. Int. J. Hum.-Comput. Stud. 56, 1 (Jan. 2002), 25-46. DOI= http://dx.doi.org/10.1006/ijhc.2001.0514

Wolfgang Spießl presented our CHI-Note

People take mobile devices into their cars, and the amount of information people have on those devices is huge – just consider the number of songs on an MP3 player, the address database in a navigation system, and eventually the mobile web. In our work we looked at ways to design and implement search interfaces that are usable while driving [1]. For the paper we compared a categorized search and a free search. There was another paper in the session, by Leshed et al., looking at the practice of GPS use, which was really interesting and can inform future navigation or context-aware information systems [2]. One interesting finding is that you lose AND at the same time create opportunities for applications and practices. In the questions she hinted at some interesting observations on driving in familiar vs. unfamiliar environments using GPS units. Based on these ideas there may be an interesting student project to do…

The interest in Wolfgang’s talk, and in automotive user interfaces in general, was unexpectedly high. As you can see in the picture, there were quite a few people taking pictures and videos during the presentation.

[1] Graf, S., Spiessl, W., Schmidt, A., Winter, A., and Rigoll, G. 2008. In-car interaction using search-based user interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1685-1688. DOI= http://doi.acm.org/10.1145/1357054.1357317

[2] Leshed, G., Velden, T., Rieger, O., Kot, B., and Sengers, P. 2008. In-car gps navigation: engagement with and disengagement from the environment. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1675-1684. DOI= http://doi.acm.org/10.1145/1357054.1357316