App store of a car manufacturer? Or the future of the car as an application platform.

When preparing my talk for the BMW research colloquium I realized once more how much potential there is in the automotive domain (if you look at it from a CS perspective). My talk was on the interaction of the driver with the car and the environment, and I assessed the potential of the car as a platform for interactive applications (slides in PDF). Thinking of the car as a mobile terminal that offers transportation is quite exciting…

I showed some of our recent projects in the automotive domain:

  • Enhancing communication in the car – basically studying the effect of a video link between driver and passenger on driving performance and on the communication.
  • Handwritten text input – where would you put the input and the output? Input on the steering wheel and visual feedback in the dashboard is a good guess – see [1] for more details.
  • Making it easier to interrupt tasks while driving – we have some ideas for minimizing the cost of interruptions on secondary tasks for the driver, and we explored them with a navigation task.
  • Multimodal interaction, and in particular tactile output – we looked at how to present navigation information using a set of vibrotactile actuators. We will publish more details on this at Pervasive 2009 in a few weeks.
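
The vibrotactile navigation idea can be made concrete with a few lines of code: given a desired turn direction as a heading angle, pick the actuator closest to that direction. This is only an illustrative sketch, not our actual implementation – the circular layout and the actuator count are assumptions.

```python
def actuator_for_heading(heading_deg, num_actuators=6):
    """Map a navigation heading (degrees, 0 = straight ahead, clockwise)
    to the index of the nearest actuator in an assumed circular array
    of num_actuators evenly spaced vibrotactile elements."""
    sector = 360.0 / num_actuators              # angular width per actuator
    shifted = (heading_deg % 360) + sector / 2  # center sectors on actuators
    return int(shifted // sector) % num_actuators
```

With six actuators, a 90° right turn would drive actuator 2; pulse patterns or intensity could additionally encode the distance to the turn.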

Towards the end of my talk I invited the audience to speculate with me on future scenarios. The starting point was: imagine you permanently store all the information that goes over the bus systems in the car and transmit it wirelessly over the network to a backend store. Then imagine 10% of the users are willing to share this information publicly. That really opens up a whole new world of applications. Taking this a step further, one question is what the application store of a car manufacturer will look like in the future. What can you buy online (e.g. fuel efficiency? More power in the engine? A new layout for your dashboard? …)? Seems like an interesting thesis topic.
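
As a thought experiment, the logging-and-sharing scenario could look like the sketch below: buffer bus signals with timestamps and only release batches for users who opted in. Everything here (the class, its fields, the JSON batch format) is hypothetical, purely to make the scenario concrete.

```python
import json
import time

class BusLogger:
    """Sketch: record in-vehicle bus signals and batch them for upload
    to a hypothetical backend; only opted-in users share data."""

    def __init__(self, share_publicly=False):
        self.share_publicly = share_publicly  # the 10%-of-users opt-in
        self.buffer = []

    def record(self, signal, value):
        """Append one timestamped bus signal (e.g. speed, rpm)."""
        self.buffer.append({"t": time.time(), "signal": signal, "value": value})

    def flush(self):
        """Serialize and clear the buffer; return None unless sharing is on."""
        batch = json.dumps(self.buffer)
        self.buffer = []
        return batch if self.share_publicly else None
```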

[1] Kern, D., Schmidt, A., Arnsmann, J., Appelmann, T., Pararasasegaran, N., and Piepiera, B. 2009. Writing to your car: handwritten text input while driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 4705-4710. DOI= http://doi.acm.org/10.1145/1520340.1520724

Visit to Newcastle University, digital jewelry

I went to see Chris Kray at Culture Lab at Newcastle University. Over the next months we will be working on a joint project on a new approach to creating and building interactive appliances. I am looking forward to spending some more time in Newcastle.

Chris showed me around their lab and I was truly impressed. Besides many interesting prototypes in various domains, I have not seen this number of different ideas and implementations of table-top systems and user interfaces in any other place. For a picture of me in the lab trying out a special vehicle, see Chris' blog.

Jayne Wallace showed me some of her digital jewelry. A few years back she wrote a very interesting article with the title "All this useless beauty" [1] that provides an interesting perspective on design and suggests beauty as a material in digital design. The approach she takes is to design deliberately for a single individual; the design fits their personality and their context. She created a communication device to connect two people in a very simple and yet powerful way [2]. A further example is a piece of jewelry that makes the environment change to provide some personal information – technically it is similar to the work we have started on encoding interests in the Bluetooth friendly names of phones [3], but her artefacts are much prettier and emotionally more exciting.
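
The idea behind [3] – broadcasting profile information through a channel that every nearby phone can already read – can be sketched in a few lines. The `#` separator and tag convention below are my own assumptions for illustration, not the scheme from the paper; the 248-byte UTF-8 limit on Bluetooth device names is what the sketch respects.

```python
BT_NAME_MAX_BYTES = 248  # Bluetooth limits the device name to 248 bytes of UTF-8

def encode_interests(base_name, interests):
    """Append '#'-separated interest tags to a phone's friendly name,
    stopping before the byte limit would be exceeded. Tags are assumed
    not to contain '#' themselves."""
    name = base_name
    for tag in interests:
        candidate = name + "#" + tag
        if len(candidate.encode("utf-8")) > BT_NAME_MAX_BYTES:
            break
        name = candidate
    return name

def decode_interests(name):
    """Split a friendly name back into (base_name, list_of_tags)."""
    base, *tags = name.split("#")
    return base, tags
```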

[1] Wallace, J. and Press, M. 2004. All this useless beauty. The Design Journal 7 (2). (PDF)

[2] Jayne Wallace. Journeys. Intergeneration Project.

[3] Kern, D., Harding, M., Storz, O., Davis, N., and Schmidt, A. 2008. Shaping how advertisers see me: user views on implicit and explicit profile capture. In CHI ’08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 3363-3368. DOI= http://doi.acm.org/10.1145/1358628.1358858

Ubicomp Spring School in Nottingham – prototyping user interfaces

On Tuesday and Wednesday afternoon I ran practical workshops on creating novel user interfaces complementing the tutorial on Wednesday morning. The aim of the practical was to motivate people to more fundamentally question user interface decisions that we make in our research projects.

On a very simple level, an input user interface can be seen as a sensor, a transfer function or mapping, and an action in the system that is controlled. To illustrate this I showed two simple JavaScript programs that let participants play with the mapping of the mouse to the movement of a button on the screen and with moving through a set of images. If you twist the mapping functions, really simple tasks (like moving one button on top of the other) may get complicated. Similarly, if you change the way you use the sensor (e.g. instead of moving the mouse on a surface, having several people move a surface over the mouse), such simple tasks may become really difficult, too.
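
The sensor → transfer function → action view can be written down directly. The originals were JavaScript demos; the sketch below uses Python, and the rotation is just one example of a "twisted" mapping – it does not reproduce the exact mappings we used.

```python
import math

def identity_mapping(dx, dy):
    """The usual transfer function: sensor deltas map 1:1 to pointer deltas."""
    return dx, dy

def rotated_mapping(dx, dy, angle_deg=90):
    """A 'twisted' transfer function that rotates each mouse delta,
    making even trivial pointing tasks surprisingly hard."""
    a = math.radians(angle_deg)
    return (dx * math.cos(a) - dy * math.sin(a),
            dx * math.sin(a) + dy * math.cos(a))

def apply_mapping(pos, deltas, mapping):
    """Run a stream of sensor deltas through a transfer function to
    produce the resulting pointer position (the 'action')."""
    x, y = pos
    for dx, dy in deltas:
        mx, my = mapping(dx, dy)
        x, y = x + mx, y + my
    return x, y
```

With the identity mapping, moving the mouse right moves the pointer right; with a 90° rotation the same physical movement drives the pointer along the other axis – exactly the kind of twist that turns a trivial docking task into a puzzle.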

With this initial experience, an optical mouse, a lot of materials (e.g. fabrics, cardboard boxes, picture frames, toys, etc.), some tools, and 2 hours of time, the groups started to create their novel interactive experiences. The results included a string puppet interface, a frog interface, an interface to the (computer) recycling, a scarf, and a close-contact dancing interface (the music only plays if bodies touch and move).

The final demos of the workshop were shown before dinner. Seeing the whole set of new interface ideas, one wonders why so little of this is happening beyond the labs in the real world, and why people are happy to live with current, efficient but rather boring user interfaces – especially in the home context…

Ubicomp Spring School in Nottingham – Tutorial

The ubicomp spring school in Nottingham had an interesting set of lectures and practical sessions, including a talk by Turing Award winner Robin Milner on a theoretical approach to ubicomp. When I arrived on Tuesday I had the chance to see Chris Baber's tutorial on wearable computing. He provided really good examples of wearable computing and its distinct qualities (also in relation to wearable use of mobile phones). One example that captures a lot about wearable computing is an adaptive bra – one example of a class of interesting future garments. The basic idea is that these garments detect the wearer's activity and change their properties accordingly. A different example in this class is a shirt/jacket/pullover/trousers that can change its insulation properties (e.g. by storing and releasing air) according to the external temperature and the user's body temperature.

My tutorial was on user interface engineering, and I discussed what is different in creating ubicomp UIs compared to traditional user interfaces. I showed some trends (including technologies as well as a new view on privacy) that open up the design space for new user interfaces. Furthermore, we discussed the idea of creating magical experiences in the world and the dilemma of user creativity versus user needs.

There were about 100 people at the spring school from around the UK – it is really exciting how much research in ubicomp (and somehow in the tradition of Equator) is going on in the UK.

Mobile Boarding Pass, the whole process matters

Yesterday night I did an online check-in for my flight from Düsseldorf to Manchester. For convenience and out of curiosity I chose the mobile boarding pass. It is amazingly easy and it worked in principle very well. However, not everyone can work without paper yet. At some point in the process (after border control) I got a handwritten "boarding pass" because this person needed to stamp it 😉 and we would probably have gotten into an argument if he had tried to stamp my phone. There is some further room for improvement. Besides the 2D barcode, the boarding pass shows all the important information for the traveler – but you have to scroll to the bottom of the page to get the boarding number (which seems quite important for everyone other than the traveler – it was even on my handwritten boarding pass).

Teaching, Technical Training Day at the EPO

Together with Rene Mayrhofer and Alexander De Luca I organized a technical training day at the European Patent Office in Munich. In the lectures we attempted to give a broad overview of recent advances in this domain – and in preparing such a day one realizes how much there is to it… We covered the following topics:
  • Merging the physical and the digital (e.g. sentient computing and dual reality [1])
  • Interlinking the real world and the virtual world (e.g. the Internet of Things)
  • Interacting with your body (e.g. implants for interaction, brain-computer interaction, eye-gaze interaction)
  • Interaction beyond the desktop, in particular sensor-based UIs, touch interaction, haptics, and interactive surfaces
  • Device authentication, with a focus on spontaneity and ubicomp environments
  • User authentication, with a focus on authentication in public
  • Location awareness and location privacy
Overall we covered probably more than 100 references – here are just a few nice ones to read: computing tiles as basic building blocks for smart environments [2], a bendable computer interface [3], a touch screen you can also touch on the back side [4], and ideas on phones as a basis for people-centric sensing [5].
[1] Lifton, J., Feldmeier, M., Ono, Y., Lewis, C., and Paradiso, J. A. 2007. A platform for ubiquitous sensor deployment in occupational and domestic environments. In Proceedings of the 6th International Conference on Information Processing in Sensor Networks (Cambridge, Massachusetts, USA, April 25 – 27, 2007). IPSN ’07. ACM, New York, NY, 119-127. DOI= http://doi.acm.org/10.1145/1236360.1236377
[2] Naohiko Kohtake, et al. u-Texture: Self-organizable Universal Panels for Creating Smart Surroundings. The 7th Int. Conference on Ubiquitous Computing (UbiComp2005), pp.19-38, Tokyo, September, 2005. http://www.ht.sfc.keio.ac.jp/u-texture/paper.html
[3] Schwesig, C., Poupyrev, I., and Mori, E. 2004. Gummi: a bendable computer. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 263-270. DOI= http://doi.acm.org/10.1145/985692.985726 
[4] Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J., and Shen, C. 2007. Lucid touch: a see-through mobile device. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (Newport, Rhode Island, USA, October 07 – 10, 2007). UIST ’07. ACM, New York, NY, 269-278. DOI= http://doi.acm.org/10.1145/1294211.1294259
[5] Campbell, A. T., Eisenman, S. B., Lane, N. D., Miluzzo, E., Peterson, R. A., Lu, H., Zheng, X., Musolesi, M., Fodor, K., and Ahn, G. 2008. The Rise of People-Centric Sensing. IEEE Internet Computing 12, 4 (Jul. 2008), 12-21. DOI= http://dx.doi.org/10.1109/MIC.2008.90  

Final Presentation: Advertising 2.0

Last term we ran an interdisciplinary project with our MSc students from computer science and business studies to explore new ways in outdoor advertising. The course was jointly organized by the chairs Specification of Software Systems, Pervasive Computing and User Interface Engineering, and Marketing and Trade. We were in particular interested in what you can do with mobile phones and public displays. It is always surprising how much a group of 10 motivated students can create in 3 months. The group we had this term was extraordinary – over the last weeks they regularly stayed longer in the lab in the evenings than I did 😉

The overall task was very open, and the students created a concept and then implemented it – as a complete system including a backend server, an end-user client on the mobile phone, and an administration interface for advertisers. After the presentation and demos we really started thinking about where we could deploy it and who the potential partners would be. The system offers means for implicit and explicit interaction, creates interest profiles, and allows targeting adverts at groups with specific interests. Overall, such technologies can make advertising more effective for companies (more precisely targeted adverts) and more pleasant for consumers (getting adverts that match personal areas of interest).
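
One plausible core of such a system is matching an advert's tags against a user's interest profile. The sketch below uses a simple Jaccard overlap and a threshold; both the scoring function and the threshold are my illustrative choices, not the students' actual implementation.

```python
def target_score(advert_tags, profile_tags):
    """Jaccard overlap between an advert's tags and an interest profile:
    |intersection| / |union|, in [0, 1]."""
    a, p = set(advert_tags), set(profile_tags)
    return len(a & p) / len(a | p) if a | p else 0.0

def select_adverts(adverts, profile, threshold=0.2):
    """Return the names of adverts whose tags sufficiently match the
    profile; adverts is a dict of name -> tag list."""
    return [name for name, tags in adverts.items()
            if target_score(tags, profile) >= threshold]
```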

There are more photos of the presentation on the server.

PS: one small finding on the side – Bluetooth in its current form is a pain for interaction with public displays… but luckily there are other options.

Impact of colors – hints for ambient design?

There is a study that looked at how performance in solving certain cognitive/creative tasks is influenced by the background color [1]. In short: to make people alert and to increase performance on detail-oriented tasks, use red; to get people into creative mode, use blue. Luckily for us, our corporate desktop background is mainly blue! Perhaps this could be interesting for ambient colors, e.g. in the automotive context…

[1] Mehta, R. and Zhu, R. (J.). 2009. Blue or Red? Exploring the Effect of Color on Cognitive Task Performances. Science 323 (5918), 27 February 2009, 1226-1229. DOI: 10.1126/science.1169144

Modular device – for prototyping only?

Over the last years there have been many ideas on how to make devices more modular. The central idea has been components that allow end-users to create their own device – with exactly the functionality they want. So far such components are only used in prototyping and have not really had success in the marketplace. The main reason seems to be that an integrated device that has everything included and does everything is smaller and cheaper… But perhaps, as electronics get smaller and core functions mature, it may happen.

Yanko Design has proposed a set of concepts along this line – and some of them are appealing 🙂
http://www.yankodesign.com/2007/12/12/chocolate-portable-hdd/
http://www.yankodesign.com/2007/11/26/blocky-mp3-player-oh-and-modular-too/
http://www.yankodesign.com/2007/08/31/it-was-a-rock-lobster/

Buglabs (http://www.buglabs.net) sells a functional system that allows you to build your own mobile device.

Being creative and designing your own system has been of interest in the computing and HCI community for many years. At last year's CHI there was a paper by Buechley et al. [1] that looked at how the LilyPad Arduino can make creating "computers" an interesting experience – especially for girls.

[1] Buechley, L., Eisenberg, M., Catchen, J., and Crockett, A. 2008. The LilyPad Arduino: using computational textiles to investigate engagement, aesthetics, and diversity in computer science education. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 423-432. DOI= http://doi.acm.org/10.1145/1357054.1357123

The next big thing – let’s look into the future

At the Nokia Research Center in Tampere I gave a talk with the title "Computing Beyond Ubicomp – Mobile Communication changed the world – what else do we need?". My main argument is that the next big thing is a device that allows us to predict the future – on a system level as well as on a personal level. This is obviously very tricky, as we have free will and hence the future is not completely predictable – but extrapolating from the technologies we see now, it seems not far-fetched to create a device that enables predictions of the future in various contexts.

My argument builds on the following points, which are technologically feasible in the near future:

  1. each car, bus, train, truck, …, object is tracked in real time
  2. each person is tracked (location, activity, …, food intake, eye gaze) in real time
  3. environmental conditions are continuously sensed – globally and locally
  4. we have a complete (3D) model of our world (e.g. buildings, street surfaces, …)

Having this information, we can use data mining, learning, statistics, and models (e.g. a physics engine) to predict the future. If you wonder whether I forgot to think about privacy – I did not (but it takes longer to explain; in short: the set of people who benefit or who do not care is large enough).
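
Even the simplest of these models makes the point: given a real-time track of an object, a linear extrapolation already yields a short-term prediction (a physics engine or learned model would refine this). The `(t, x, y)` tuple format below is an assumption for illustration.

```python
def predict_position(track, t_future):
    """Linearly extrapolate a track of (t, x, y) samples to time
    t_future, using the velocity estimated from the last two samples."""
    (t0, x0, y0), (t1, x1, y1) = track[-2], track[-1]
    vx = (x1 - x0) / (t1 - t0)  # estimated velocity, x component
    vy = (y1 - y0) / (t1 - t0)  # estimated velocity, y component
    dt = t_future - t1
    return (x1 + vx * dt, y1 + vy * dt)
```

A tracked bus that moved from (0, 0) to (1, 2) in one second would be predicted at (2, 4) one second later – trivial here, but the same principle scales to arrival-time prediction once every vehicle is tracked.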

Considering this, it becomes very clear that in the medium term there is great potential in having control over the access terminal to the virtual world, e.g. a phone… just think how rich your profile on facebook/xing/linkedin could be if it took all the information you implicitly generate on the phone into account.