Social networks connected to the real world

Florian Michahelles mentioned in his blog a talk [1] and a paper [2] by Aaron Beach on mobile social networks that are linked to artefacts (e.g. clothing) in the real world. This is really interesting and I think we should look into it more…

[1] Aaron Beach. University of Colorado. Whozthat: Mobile Social Networks. Whoz touching me? Whoz Music? Whoz Watching? Who Cares?

[2] Beach, A.; Gartrell, M.; Akkala, S.; Elston, J.; Kelley, J.; Nishimoto, K.; Ray, B.; Razgulin, S.; Sundaresan, K.; Surendar, B.; Terada, M.; Han, R., "WhozThat? Evolving an ecosystem for context-aware mobile social networks," IEEE Network, vol. 22, no. 4, pp. 50-55, July-Aug. 2008.

Visit to NEC labs in Heidelberg

In the afternoon I gave a talk at NEC Labs in Heidelberg on ubiquitous display networks. Over the last year we developed a number of ideas and prototypes of interactive public display systems. We ran a lab class (Fallstudien) on pervasive computing technologies and advertising together with colleagues from marketing. In another class (Projektseminar) we investigated how to facilitate interaction between interactive surfaces (e.g. a multi-touch table) and mobile devices. One of the prototypes will be shown as a poster at Mobile HCI 2009 in Bonn. In some thesis projects we introduced the notion of mobile contextual displays and their potential applications in advertising, see [1] and [2].

Seeing the work at NEC and based on the discussion, I really think there is a great deal of potential in ubiquitous display networks – at the same time there are many challenges, including privacy, which always ensures a lively discussion 😉 It would be great to have another bachelor or master thesis address some of them – perhaps jointly with people from NEC. To understand the information needs in a particular display environment (at the University of Duisburg-Essen) we are currently running a survey. If you read German you are welcome to participate in the survey.

Predicting the future usually features in my talks – and interestingly I got a recommendation from Miquel Martin for a book that takes its own angle on that: Predictably Irrational by Dan Ariely (the stack of books is slowly getting too large – time for holidays).

[1] Florian Alt, Albrecht Schmidt, Christoph Evers: Mobile Contextual Displays. Pervasive Advertising Workshop @ Pervasive 2009. Nara, Japan 2009.

[2] Florian Alt, Christoph Evers, Albrecht Schmidt: Users' View on Car Advertisements. In: Proceedings of the Seventh International Conference on Pervasive Computing, Pervasive '09. Springer Berlin/Heidelberg, Nara, Japan, 2009.

Human Computer Confluence – Information Day in Brussels

By the end of the month FET Open will launch the call for the proactive initiative on Human Computer Confluence. The term is new and hopefully it will really lead to new ideas. Today there was already an information day on the upcoming proactive initiatives. I arrived the evening before, and it is always a treat to take a walk in the city.

The presentations were not really surprising, and the short intros by the participants also remained very generic. Seeing the call that is now finalized, and having been at the consultation meetings, it seems to me that the focus is rather broad for a proactive initiative… but with many people wanting a piece of the cake this seems inevitable.

I presented a short idea on "breaking space and time boundaries" – the idea is related to a previous post on predicting the future. The main idea is that with massive sensing (by a large number of people) and with uniform access to this information – independent of time and space – we will be able to create a different view of our reality. We are thinking of putting a consortium together for an IP. Interested? Then give me a call.
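To make this a little more concrete, here is a minimal sketch (in Python, with entirely made-up names) of what uniform access to massively sensed data could look like: every observation carries a position and a timestamp, and a single query interface ranges over both space and time, whether the data is seconds or years old.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:
    """A single sensed fact: who/what, where, when, and a value."""
    sensor_id: str
    lat: float
    lon: float
    timestamp: float  # seconds since epoch
    kind: str         # e.g. "temperature", "noise_level"
    value: float

class ObservationStore:
    """Toy in-memory store; a real system would need spatial indexing."""
    def __init__(self) -> None:
        self._data: List[Observation] = []

    def add(self, obs: Observation) -> None:
        self._data.append(obs)

    def query(self, lat_min: float, lat_max: float,
              lon_min: float, lon_max: float,
              t_min: float, t_max: float, kind: str) -> List[Observation]:
        """Uniform access: the same query works for past and 'live' data."""
        return [o for o in self._data
                if lat_min <= o.lat <= lat_max
                and lon_min <= o.lon <= lon_max
                and t_min <= o.timestamp <= t_max
                and o.kind == kind]
```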

Andreas Riener visits our lab

Andreas Riener from the University of Linz came to visit us for 3 days. In his research he works on multimodal and implicit interaction in the car. We talked about several ideas for new multimodal user interfaces. Andreas had a pressure mat with him and we could try out what sensor readings we get in different setups. It seems that in particular providing redundancy in the controls could create interesting opportunities – hopefully we will find the means to explore this further.

Meeting on public display networks

Sunday night I travelled to Lugano for a meeting on public display networks. I figured out that going there by night train was the best option – leaving Karlsruhe at midnight and arriving there at 6 am. As I planned to sleep the whole way, my assumption was that the felt travel time would be zero. But I had made my plan without the rail company… the train was 2 hours late and I walked up and down the platform in Karlsruhe for 2 hours – and interestingly, the problem would have been less annoying if the public displays had provided the relevant information… The most annoying thing was that passengers had no information on whether or when the train would come, and no one could tell us (there was no one at the station and no one answered the hotline).
The public display – really nice state-of-the-art hardware – showed nothing for an hour, then showed that the train was one hour late (at a point already more than an hour past the scheduled time), and finally the train arrived 2 hours late (with the display still showing a 1-hour delay). How hard can it be to provide this information? It seems that with current approaches it is too hard…
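As a thought experiment, the software side of this looks simple. Below is a minimal sketch of a display client that polls a delay feed and, crucially, admits when it has no data instead of freezing on a stale value. The feed URL, its JSON format, and the train ID are all hypothetical – the real problem is presumably that such a feed is not wired to the displays:

```python
import json
import time
import urllib.request

FEED_URL = "https://example.org/delays?station=Karlsruhe"  # hypothetical feed

def fetch_delay(train_id: str):
    """Return the delay in minutes for a train, or None if unknown."""
    try:
        with urllib.request.urlopen(FEED_URL, timeout=5) as resp:
            data = json.load(resp)  # assumed format: {"train-123": 65, ...}
        return data.get(train_id)
    except OSError:
        return None

def render(train_id: str) -> str:
    delay = fetch_delay(train_id)
    if delay is None:
        # Being honest beats showing a stale delay for an hour.
        return f"{train_id}: no current information available"
    return f"{train_id}: expected delay {delay} min"

while True:
    print(render("train-123"))  # placeholder train ID
    time.sleep(60)              # refresh every minute instead of freezing
```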

On my way back I observed a further example of the shortcomings of content on public displays. In the bus office they had a really nice 40-50 inch screen showing the teletext departure page. The problem was that it showed the page for the evening, as the staff have to switch the pages manually. Here, too, the information is clearly available, but the current delivery systems are not well integrated.

In summary, it is really a pity how poorly public display infrastructures are used. It seems there have been a lot of advances in the hardware but little on the content delivery, software, and systems side.

Offline Tangible User Interface

When shopping for a sofa I used an interesting tangible user interface – magnetic stickers. For each of the sofa systems, customers can create their own configuration using these magnetic stickers on a background (everything at a scale of 1:50).

Once the customer is happy with the configuration, the shop assistant makes a Xerox copy (I said I did not need a black-and-white copy as I could make my own color copy with my phone), calculates the price, and writes up an order. The interaction with the pieces is very good, and it also works great as a shared interface – much nicer than comparable screen-based systems. I could imagine that with a bit of effort one could create a phone application that scans the customer's design, calculates the price, and provides a rendered image of the configuration – in the chosen color (in our case green ;-). Could be an interesting student project…
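For the pricing part of such an app, a minimal sketch could look like the one below. It assumes the scanning step has already turned the photo into a list of module codes; the catalogue, codes, and prices are all made up:

```python
# Hypothetical catalogue: module code -> (description, price in EUR)
CATALOGUE = {
    "C2": ("2-seater element", 499.0),
    "C3": ("3-seater element", 649.0),
    "CH": ("chaise longue", 579.0),
    "AR": ("armrest", 89.0),
}

SCALE = 50  # the magnetic stickers are at 1:50

def price_configuration(modules: list) -> float:
    """Sum the prices of all recognized modules; ignore unknown codes."""
    return sum(CATALOGUE[m][1] for m in modules if m in CATALOGUE)

def real_size_cm(sticker_size_cm: float) -> float:
    """Convert a dimension measured on the sticker board to real-world size."""
    return sticker_size_cm * SCALE

# Example: a configuration scanned as [chaise, 3-seater, armrest]
config = ["CH", "C3", "AR"]
print(f"Total: {price_configuration(config):.2f} EUR")
```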

App store of a car manufacturer? Or the future of the car as an application platform.

When preparing my talk for the BMW research colloquium I realized once more how much potential there is in the automotive domain (if you look at it from a CS perspective). My talk was on the interaction of the driver with the car and the environment, and I assessed the potential of the car as a platform for interactive applications (slides in PDF). Thinking of the car as a mobile terminal that offers transportation is quite exciting…

I showed some of our recent projects in the automotive domain:

  • Enhancing communication in the car: basically, studying the effect of a video link between driver and passenger on driving performance and on the communication.
  • Handwritten text input: where would you put the input and the output? Input on the steering wheel and visual feedback in the dashboard is a good guess – see [1] for more details.
  • Making it easier to interrupt tasks while driving: we have some ideas for minimizing the cost of interruptions to the driver during secondary tasks and explored them with a navigation task.
  • Multimodal interaction, and in particular tactile output: we looked at how to present navigation information using a set of vibrotactile actuators (see the sketch after this list). We will publish more details on this at Pervasive 2009 in a few weeks.
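As a flavor of the tactile navigation idea, here is a minimal sketch that maps a navigation bearing to the nearest actuator in a ring (e.g. built into the seat). This is not our actual implementation – the actuator count, layout, and driver call are assumptions:

```python
NUM_ACTUATORS = 6  # assumed: actuators evenly spaced in a ring

def actuator_for_bearing(bearing_deg: float) -> int:
    """Map a direction (0° = straight ahead, clockwise) to an actuator index."""
    step = 360 / NUM_ACTUATORS
    return round((bearing_deg % 360) / step) % NUM_ACTUATORS

def vibrate(index: int, duration_ms: int = 300) -> None:
    """Placeholder for the actual actuator driver."""
    print(f"actuator {index} on for {duration_ms} ms")

# 'Turn slightly right' (bearing 45°) -> actuator 1 of 6
vibrate(actuator_for_bearing(45))
```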

Towards the end of my talk I invited the audience to speculate with me on future scenarios. The starting point was: imagine you permanently store all the information that goes over the bus systems in the car and transmit it wirelessly over the network to backend storage. Then imagine 10% of the users are willing to share this information publicly. That really opens up a whole new world of applications. Taking this a bit further, one question is what the application store of a car manufacturer will look like in the future. What could you buy online (e.g. better fuel efficiency? More power in the engine? A new layout for your dashboard? …)? Seems like an interesting thesis topic.
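As a starting point for such speculation, logging the bus traffic and shipping it to a backend is already feasible today. A minimal sketch, assuming a SocketCAN interface and the python-can package; the backend endpoint is hypothetical:

```python
import json
import urllib.request

import can  # pip install python-can

BACKEND = "https://example.org/car-data"  # hypothetical backend endpoint

def upload(frames: list) -> None:
    """POST a batch of recorded frames to the backend storage."""
    req = urllib.request.Request(
        BACKEND,
        data=json.dumps(frames).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)

bus = can.interface.Bus(channel="can0", bustype="socketcan")
batch = []
while True:
    msg = bus.recv(timeout=1.0)
    if msg is not None:
        batch.append({
            "t": msg.timestamp,
            "id": msg.arbitration_id,
            "data": msg.data.hex(),
        })
    if len(batch) >= 100:  # upload in batches, not per frame
        upload(batch)
        batch = []
```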

[1] Kern, D., Schmidt, A., Arnsmann, J., Appelmann, T., Pararasasegaran, N., and Piepiera, B. 2009. Writing to your car: handwritten text input while driving. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 – 09, 2009). CHI EA ’09. ACM, New York, NY, 4705-4710. DOI= http://doi.acm.org/10.1145/1520340.1520724

Impact of colors – hints for ambient design?

There is a study that looked at how performance in solving certain cognitive/creative tasks is influenced by the background color [1]. In short: to make people alert and to increase performance on detail-oriented tasks, use red; to get people into a creative mode, use blue. Lucky for us, our corporate desktop background is mainly blue! Perhaps this could be interesting for ambient colors, e.g. in the automotive context…

[1] Mehta, Ravi and Rui (Juliet) Zhu (2009), "Blue or Red? Exploring the Effect of Color on Cognitive Task Performances," Science, 27 February 2009, vol. 323, no. 5918, pp. 1226-1229. DOI: 10.1126/science.1169144
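If one wanted to play with this in an ambient setting, the mapping itself is the easy part – a toy sketch in Python (classifying the current task, which is the hard part, is assumed to happen elsewhere):

```python
# Toy mapping from the study's findings to ambient light colors (RGB).
AMBIENT_COLOR = {
    "detail_oriented": (255, 0, 0),      # red: alertness, detail tasks
    "creative":        (0, 0, 255),      # blue: creative mode
    "neutral":         (255, 255, 255),  # fallback
}

def ambient_for(task_type: str) -> tuple:
    """Pick an ambient color for the current (externally classified) task."""
    return AMBIENT_COLOR.get(task_type, AMBIENT_COLOR["neutral"])

print(ambient_for("creative"))  # -> (0, 0, 255)
```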

The next big thing – let’s look into the future

At the Nokia Research Center in Tampere I gave a talk with the title "Computing Beyond Ubicomp – Mobile Communication changed the world – what else do we need?". My main argument is that the next big thing is a device that allows us to predict the future – on a system level as well as on a personal level. This is obviously very tricky, as we have free will and hence the future is not completely predictable – but extrapolating from the technologies we see now, it does not seem far-fetched to create a device that enables predictions of the future in various contexts.

My argument goes as follows: these points are technologically feasible in the near future:

  1. each car, bus, train, truck, …, in fact every object, is tracked in real time
  2. each person is tracked (location, activity, …, food intake, eye gaze) in real time
  3. environmental conditions are continuously sensed – globally and locally
  4. we have a complete (3D) model of our world (e.g. buildings, street surfaces, …)

Having this information, we can use data mining, learning, statistics, and models (e.g. a physics engine) to predict the future. If you wonder whether I forgot to think about privacy – I did not (but it takes longer to explain; in short: the set of people who benefit or who do not care is large enough).
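To illustrate how little machinery the simplest version of this prediction layer needs: once objects are tracked in real time, even a naive constant-velocity model gives useful short-term forecasts. A toy sketch (real systems would of course use learned models and richer physics):

```python
from dataclasses import dataclass

@dataclass
class Track:
    """Last known state of a tracked object (e.g. a bus)."""
    x: float   # position in meters (east)
    y: float   # position in meters (north)
    vx: float  # velocity in m/s
    vy: float

def predict(track: Track, horizon_s: float) -> tuple:
    """Constant-velocity extrapolation: the simplest possible 'physics engine'."""
    return (track.x + track.vx * horizon_s,
            track.y + track.vy * horizon_s)

bus = Track(x=0.0, y=0.0, vx=8.0, vy=0.0)  # ~29 km/h, heading east
print(predict(bus, 60))  # where the bus will be in a minute: (480.0, 0.0)
```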

Considering this, it becomes very clear that in the medium term there is great potential in having control over the access terminal to the virtual world, e.g. a phone… Just think how rich your profile on Facebook/Xing/LinkedIn could be if it took into account all the information you implicitly generate on the phone.

Visit to Nokia Research Center Tampere, SMS, Physiological sensors

This trip was my first time in Tampere (it is nice to see a new place sometimes). After arriving the night before, I got a quick cultural refresher course. I even met a person who was giving a presentation to the president of Kazakhstan today (and someone made a copy using a phone – hope he got back to Helsinki OK after the great time in the bar).

In the morning I met a number of people in Jonna Hakkila's group at the Nokia Research Center. The team has a great mix of backgrounds and it was really interesting to discuss the projects, ranging from new UI concepts to new hardware platforms – just half a day is much too short… When Ari was recently visiting us in Essen, he and Ali started to implement a small piece of software that (hopefully) improves the experience when receiving an SMS (to Ali/Ari – the TODOs for the beta release we identified are: sound design, screen design with statistics and the exit button in the menu, recognizing "Ok" and "oK", autostart on reboot, volume level controllable and respecting silent mode). In case you have not helped us with our research yet, please fill in the questionnaire: http://www.pcuie.uni-due.de/uieub/index.php?sid=74887#

I gave a talk (see the separate post on the next big thing) and had the chance to meet Jari Kangas. We discovered some common interests in using physiological sensing in the user interface context. I think the next steps in integrating physiological sensors into devices are smaller than expected. My expectation is that, at least in the very near future, we will detect simple events like "surprise" rather than complex emotions. We will see where it goes – perhaps we should put some more students on the topic…
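To illustrate how simple such event detection can be, here is a toy sketch that flags sudden rises of a physiological signal (e.g. skin conductance) above its recent baseline as candidate "surprise" events. The window size and threshold are assumptions, and a real sensor would need proper filtering:

```python
from collections import deque

WINDOW = 50      # samples of recent baseline (e.g. ~5 s at 10 Hz)
THRESHOLD = 0.3  # assumed: jump (in microsiemens) that counts as an event

def detect_surprise(samples):
    """Yield indices where the signal jumps well above its recent baseline."""
    baseline = deque(maxlen=WINDOW)
    for i, value in enumerate(samples):
        if len(baseline) == WINDOW:
            mean = sum(baseline) / WINDOW
            if value - mean > THRESHOLD:
                yield i  # candidate 'surprise' event
        baseline.append(value)

# Toy data: a flat signal with one sudden jump at index 100
signal = [2.0] * 100 + [2.5] + [2.0] * 20
print(list(detect_surprise(signal)))  # -> [100]
```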