Our Publications at Pervasive – Public Displays, Car Adverts, and Tactile Output for Navigation

Our group was involved in three papers published at Pervasive 2009 in Nara.

The first contribution is a study on public displays, presented by Jörg Müller from Münster. The paper explores display blindness as it can be observed in the real world (similar to banner blindness) and concludes that the extent to which people look at displays correlates strongly with their expectations about the content of a display in a certain location [1].

The second contribution, a short paper, is a survey on car advertising conducted in the context of Christoph Evers' master's thesis. The central questions concern the design space of dynamic advertising on cars and how users perceive such a technology [2].

Dagmar presented a paper on vibro-tactile output integrated into the steering wheel for car navigation systems. The studies explored how multi-modal presentation of information impacts driving performance and which modalities users prefer. The general conclusion is that combining visual information with vibro-tactile output is the best option, and that people prefer multi-modal output over a single modality [3].

[1] Jörg Müller, Dennis Wilmsmann, Juliane Exeler, Markus Buzeck, Albrecht Schmidt, Tim Jay, Antonio Krüger. Display Blindness: The Effect of Expectations on Attention towards Digital Signage. 7th International Conference on Pervasive Computing (Pervasive 2009), Nara, Japan. Springer LNCS 5538, pp. 1-8.
http://www.springerlink.com/content/gk307213786207g2

[2] Florian Alt, Christoph Evers, Albrecht Schmidt. User's View on Context-Aware Car Advertisement. 7th International Conference on Pervasive Computing (Pervasive 2009), Nara, Japan. Springer LNCS 5538, pp. 9-16.
http://www.springerlink.com/content/81q8818683315523

[3] Dagmar Kern, Paul Marshall, Eva Hornecker, Yvonne Rogers, Albrecht Schmidt. Enhancing Navigation Information with Tactile Output Embedded into the Steering Wheel. 7th International Conference on Pervasive Computing (Pervasive 2009), Nara, Japan. Springer LNCS 5538, pp. 42-58.
http://www.springerlink.com/content/x13j7547p8303113

Andreas Riener visits our lab

Andreas Riener from the University of Linz came to visit us for three days. In his research he works on multimodal and implicit interaction in the car. We talked about several ideas for new multimodal user interfaces. Andreas had a pressure mat with him, and we could try out what sensor readings we get in different setups. It seems that providing redundancy in the controls, in particular, could create interesting opportunities – hopefully we will find the means to explore this further.

App store of a car manufacturer? Or the future of the car as an application platform.

When preparing my talk for the BMW research colloquium I realized once more how much potential there is in the automotive domain (if you look at it from a CS perspective). My talk was on the interaction of the driver with the car and the environment, and I assessed the potential of the car as a platform for interactive applications (slides in PDF). Thinking of the car as a mobile terminal that offers transportation is quite exciting…

I showed some of our recent projects in the automotive domain:

  • Enhancing communication in the car: basically, studying the effect of a video link between driver and passenger on driving performance and on communication.
  • Handwritten text input: where would you put the input and the output? Input on the steering wheel and visual feedback in the dashboard is a good guess – see [1] for more details.
  • Making it easier to interrupt tasks while driving: we have some ideas for minimizing the cost that interruptions of secondary tasks impose on the driver, and we explored them with a navigation task.
  • Multimodal interaction, and tactile output in particular: we looked at how to present navigation information using a set of vibro-tactile actuators. We will publish more details on this at Pervasive 2009 in a few weeks.

Towards the end of my talk I invited the audience to speculate with me on future scenarios. The starting point was: imagine you permanently store all the information that goes over the bus systems in the car and transmit it wirelessly over the network to a backend store. Then imagine 10% of the users are willing to share this information publicly. That really opens up a whole new world of applications. Taking this a bit further, one question is what the application store of a car manufacturer will look like in the future. What could you buy online (better fuel efficiency? more engine power? a new layout for your dashboard? …)? Seems like an interesting thesis topic.
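To make the scenario slightly more concrete, here is a minimal sketch of what the in-car logging side could look like. It is purely illustrative: the backend URL is hypothetical, and it assumes a Linux SocketCAN interface accessed through the python-can library.

```python
# Illustrative sketch: log CAN bus frames and upload them in batches to a
# (hypothetical) backend. Assumes a SocketCAN interface "can0" and the
# python-can and requests libraries.
import can        # pip install python-can
import requests   # pip install requests

BACKEND_URL = "https://example.com/car-telemetry"  # hypothetical endpoint
BATCH_SIZE = 100

def log_and_upload():
    bus = can.interface.Bus(channel="can0", bustype="socketcan")
    batch = []
    while True:
        msg = bus.recv(timeout=1.0)  # wait up to 1 s for the next frame
        if msg is None:
            continue
        batch.append({
            "timestamp": msg.timestamp,
            "id": msg.arbitration_id,
            "data": msg.data.hex(),
        })
        if len(batch) >= BATCH_SIZE:
            # A real deployment would need buffering for network outages,
            # authentication – and, of course, the user's consent to share.
            requests.post(BACKEND_URL, json={"frames": batch}, timeout=5)
            batch = []

if __name__ == "__main__":
    log_and_upload()
```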

[1] Kern, D., Schmidt, A., Arnsmann, J., Appelmann, T., Pararasasegaran, N., and Piepiera, B. 2009. Writing to your car: handwritten text input while driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 4705-4710. DOI= http://doi.acm.org/10.1145/1520340.1520724

New Conference on Automotive User Interfaces

If an industry is not doing well, one way forward is to promote innovation!

For a number of years it has been apparent that many PhD students in computer science, and especially in human-computer interaction, work on topics related to user interfaces in the car. We think it is a good idea to foster a community in this area, and hence we are running the 1st International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI 2009) in Essen, Germany. The conference is in the week after MobileHCI and takes place Mon/Tue, 21–22 September 2009.
Submission deadline: 02 June 2009

Andreas Riener defends his PhD in Linz

After a stop-over in Stansted/Cambridge for the TEI conference, I was today in Linz, Austria, as external examiner for the PhD defense of Andreas Riener. He did his PhD with Alois Ferscha and worked on implicit interaction in the car. The number and scale of the experiments he conducted are impressive, and he has two central results: (1) using tactile output in the car can really improve car-to-driver communication and reduce reaction times, and (2) by sensing the force pattern a body creates on the seat, driving-related activities can be detected and, to some extent, driver identification can be performed. For more details it makes sense to have a look at the thesis 😉 If you mail Andreas he will probably send you the PDF…
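The second result is perhaps easiest to grasp with a toy example. This is not the method from the thesis, just an illustration of the general idea: a pressure mat delivers a matrix of force values per frame, which can be reduced to a small feature vector (total force, center of pressure) and matched against stored per-driver profiles.

```python
# Toy illustration of driver identification from seat pressure patterns.
# A deliberate simplification of the general idea – not the approach
# actually used in the thesis.
import numpy as np

def features(frame: np.ndarray) -> np.ndarray:
    """Reduce one pressure-mat frame (rows x cols of force values) to a
    small feature vector: total force and center of pressure."""
    total = frame.sum()
    rows, cols = np.indices(frame.shape)
    cop_row = (rows * frame).sum() / total  # center of pressure, row axis
    cop_col = (cols * frame).sum() / total  # center of pressure, column axis
    return np.array([total, cop_row, cop_col])

def identify(frame: np.ndarray, profiles: dict) -> str:
    """Match a frame against stored per-driver feature vectors by
    nearest Euclidean distance."""
    f = features(frame)
    return min(profiles, key=lambda name: np.linalg.norm(f - profiles[name]))

# Usage with made-up data: two "enrolled" drivers, one unknown frame.
rng = np.random.default_rng(0)
profiles = {
    "driver_a": features(rng.random((16, 16))),
    "driver_b": features(rng.random((16, 16)) * 2.0),
}
print(identify(rng.random((16, 16)), profiles))
```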
One of the basic assumptions of the work was to use implicit interaction (for input and output) to lower the cognitive load while driving – which is definitely a valid approach. Recently, however, we have also discussed the issues that arise when the cognitive load for drivers is too low (e.g. due to assistive systems in the car such as ACC and lane-keeping assistance). There is an interesting phenomenon, the Yerkes-Dodson law (see [1]), that provides the foundation for this: as the car provides more sophisticated functionality and requires less of the driver's attention, the risk increases because the driver's basic activation is lower. Here, I think, looking into multimodality to activate the driver more quickly in situations where he or she is required to take over responsibility could be interesting – perhaps we will find a student interested in this topic.
[1] http://en.wikipedia.org/wiki/Yerkes-Dodson_law (there is a link to the 1908 publication by Yerkes & Dodson)
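The law itself is usually stated qualitatively: performance follows an inverted U over arousal. As a toy illustration (my own simplification, not a formula from the 1908 paper), one can think of performance P as a quadratic in arousal a around an optimum a*:

```latex
% Toy inverted-U model of the Yerkes-Dodson law (illustrative only):
% performance peaks at an optimal arousal level a*.
P(a) = P_{\max} - k\,(a - a^{*})^{2}, \qquad k > 0
% Assistive systems push a below a*; an alerting multimodal cue would
% raise a back towards a* when the driver has to take over.
```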

Voice interaction – Perhaps it works …

Today we visited Christian Müller at DFKI in Saarbrücken. He organized a workshop on automotive user interfaces at IUI last week. My talk was on new directions for user interfaces, arguing in particular for a broad view of multimodality. We showed some of our recent projects on car user interfaces. Dagmar gave a short overview of CARS, our simulator for evaluating driving performance and driver distraction, and we discussed shortcomings of the Lane Change Task and options for potential extensions.
Having long been a skeptic about voice interfaces, I was surprised to see a convincing demo of a multimodal user interface combining voice and a tactile controller in the car. I think this could really be an interesting option for future interfaces.
Classical voice-only interfaces usually lack basic properties of modern interactive systems, e.g. as stated in Shneiderman's Golden Rules or in Norman's action cycle. In particular, the following points are most often not well realized in voice-only systems:
  • State of the system is always visible
  • Interactions with the system provide immediate and appropriate feedback
  • Actions are easily reversible
  • Opportunities for interaction are always visible 
By combining a physical controller with voice, while at the same time keeping the objects of interaction visible to the user (as part of the physical system being controlled, e.g. window, seat), these problems are addressed in a very interesting way. I am looking forward to seeing more along these lines – perhaps we should no longer ignore speech interaction in our projects 😉
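As a thought experiment on how such a combination covers the four points above, here is a small sketch. It is entirely hypothetical (not the DFKI demo): the physical controller selects a visible object, voice commands act on the current selection with immediate feedback, and an undo stack keeps every action reversible.

```python
# Hypothetical sketch of a voice + tactile-controller interaction loop.
# The objects of interaction (window, seat) are physically visible, the
# controller makes the current selection tangible, and an undo stack
# keeps every voice-triggered action reversible.
class CarControls:
    def __init__(self):
        self.state = {"window": "closed", "seat": "upright"}
        self.selected = "window"  # set via the physical controller
        self.undo_stack = []

    def select(self, obj: str):
        """A turn of the physical controller: selection is always visible."""
        self.selected = obj
        print(f"[display] selected: {obj} ({self.state[obj]})")

    def voice_command(self, new_value: str):
        """A recognized voice command acts on the selected object,
        with immediate and appropriate feedback."""
        old = self.state[self.selected]
        self.undo_stack.append((self.selected, old))
        self.state[self.selected] = new_value
        print(f"[feedback] {self.selected}: {old} -> {new_value}")

    def undo(self):
        """Actions are easily reversible."""
        if self.undo_stack:
            obj, old = self.undo_stack.pop()
            self.state[obj] = old
            print(f"[feedback] undo: {obj} back to {old}")

controls = CarControls()
controls.select("window")
controls.voice_command("half open")  # e.g. "open the window halfway"
controls.undo()                      # e.g. "undo"
```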

Known route – driving your car on mental auto-pilot?

Mandy Marder, a doctoral student at the university hospital in Essen, has done an interesting study looking at brain activity in different driving situations. It seems that when driving a well-known route you are less alert than when driving an unknown route (see the press release; we have yet to find the appropriate reference). This is an interesting finding that may help inform some of our work on automotive user interfaces. Together with trends that move more responsibility from the driver to assistive functions, this may be an indication that driving could be a valid domain for serious games.

Workshop on Automobile User Interfaces

For the second time, we ran a workshop on automobile user interfaces and interactive applications in the car at the German HCI conference this year: http://automotive.ubisys.org/

In the first session we discussed the use of tactile output and haptics in automotive user interfaces. There appears to be significant interest in this area at the moment. In particular, using haptics as an additional modality creates a lot of opportunities for new interfaces. We had a short discussion about two directions in haptic output: naturalistic haptic output (e.g. lane assist that feels like driving over the edge of the road) vs. generic haptic output (e.g. giving a vibration cue when to turn).

I think the first direction could make an interesting project – how does it naturally feel to drive too fast, to turn the wrong way, to be too close to the car in front of you, etc.?
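To make the distinction concrete, here is a sketch of the two styles as amplitude envelopes for a hypothetical steering-wheel vibration actuator; the concrete signal shapes are my own guesses at what "naturalistic" vs. "generic" could mean.

```python
# Sketch of the two haptic output styles as amplitude envelopes (0..1) for
# a hypothetical steering-wheel vibration actuator, sampled at 100 Hz.
# The signal shapes are illustrative assumptions, not measured data.
import numpy as np

SAMPLE_RATE = 100  # envelope samples per second

def generic_turn_cue(duration_s: float = 0.5) -> np.ndarray:
    """Generic cue: a short, constant-strength vibration burst ('turn now')."""
    return np.ones(int(duration_s * SAMPLE_RATE))

def naturalistic_rumble(duration_s: float = 1.0) -> np.ndarray:
    """Naturalistic cue: an irregular low-frequency rumble, loosely
    imitating the feel of driving over the edge of the road."""
    t = np.linspace(0.0, duration_s, int(duration_s * SAMPLE_RATE))
    rumble = 0.6 + 0.4 * np.sin(2 * np.pi * 8 * t)  # 8 Hz base bumps
    jitter = np.random.default_rng(1).normal(0.0, 0.05, t.size)
    return np.clip(rumble + jitter, 0.0, 1.0)

# An actuator driver would stream these envelopes to the hardware;
# here we only inspect their shape and average strength.
print(generic_turn_cue().shape, round(naturalistic_rumble().mean(), 2))
```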

In a further session we discussed frameworks and concepts for in-car user interfaces. The discussion on the use of context in the interface was very diverse. Some people argued it should only be used in non-critical/optional parts of the UI (e.g. entertainment), as one is never 100% sure that the recognized context is right. Others argued that context may provide a central advantage, especially in safety-critical systems, as it offers the opportunity to react faster.

In the end it always comes down to the question: to what extent do we want to have the human in the loop? But looking at Wolfgang's overview slide, it is impressive how much functionality already depends on context…

In the third session we discussed tools and methods for developing and evaluating user interfaces in the car context. Dagmar presented our first version of CARS (a simple driving simulator for evaluating UIs) and discussed findings from initial studies [1]. The simulator is based on the jMonkeyEngine and is available as open source on our website [2].

There were several interesting ideas on which topics are really hot in automotive UIs, ranging from interfaces for information gathering in Car-2-Car / Car-2-Environment communication to micro-entertainment while driving.

[1] Dagmar Kern, Marco Müller, Stefan Schneegaß, Lukasz Wolejko-Wolejszo, Albrecht Schmidt. CARS – Configurable Automotive Research Simulator. Automotive User Interfaces and Interactive Applications – AUIIA 08, Workshop at Mensch und Computer 2008, Lübeck, 2008.

[2] https://www.pcuie.uni-due.de/projectwiki/index.php/CARS

PS: In a taxi in Amsterdam the driver had a DVD running while driving – and I am sure this is not a form of entertainment that works well (it is neither fun to watch, nor is it safe or legal).

Is it easier to design for touch screens if you have poor UI designers?

Flying back from Sydney with Qantas and now flying to Seattle with Lufthansa, I had two long-distance flights on which I had the opportunity to study (n=1, subject=me, plus over-shoulder observation while walking up and down the aisle 😉) the user interfaces of the in-flight entertainment systems.

The two systems have very different hardware and software designs. The Qantas infotainment system uses a regular screen, and interaction is done via a wired, movable remote control stored in the armrest. The Lufthansa system uses a touch screen (it also has some hard buttons for volume in the armrest). Overall, the Qantas system offered more content (more movies, more TV shows), including real games.

The Qantas system seemed very well engineered, and the remote control was well suited for playing games. Nevertheless, basic operations (selecting movies, etc.) seemed more difficult with the remote control than with a touch-screen interface. In contrast, the Lufthansa system leaves much room for improvement (button size, button arrangement, reaction times of the system), but it appeared very easy to use.

So here are my hypotheses:

Hypothesis 1: if you design (public) information or edutainment systems (excluding games), using a touch screen is a better choice than using an off-screen input device.

Hypothesis 2: with a UI design team of a given ability (even a bad one), you will create a significantly better information or edutainment system (excluding games) if you use a touch screen than if you use an off-screen input device.

From the automotive domain we have some indications that good off-screen input devices are really hard to design so that they work well (e.g. built-in car navigation systems). Probably I should find a student to prove it (with n much larger than 1 and subjects other than me).

PS: The Lufthansa in-flight entertainment system runs on Windows CE 5.0 (the person in front of me mainly saw an empty desktop with the Win CE logo) and it boots over the network (which takes over 6 minutes).

CfP: Automotive User Interfaces and Interactive Applications – AUIIA 08

After last year's successful workshop on automotive user interfaces, we are planning to run another one this year. We – Susanne Boll (Uni Oldenburg), Wolfgang Spießl (BMW), Matthias Kranz (DLR) and Albrecht Schmidt – are really looking forward to many interesting submissions and a cool workshop program. The theme is currently gaining momentum, which was very visible at the Special Interest Group meeting at CHI 2008.

More information on the workshop and the call for papers is available at: http://automotive.ubisys.org/