Girls' Day at Fraunhofer in Birlinghoven

In Germany there are still too few girls and women interested in studying technical subjects. There may be many reasons for this, but I think showing young girls that technology is really exciting is a good way of addressing the problem.

This morning we had 7 girls visiting our lab. Matching our current teaching at B-IT (and using the test implementation Dagmar made 😉), we offered the topic “The computer knows where I am – how does this work?”. First we played a mini geocaching game where they had to find a bag of jelly babies behind the castle.

After they had gained some experience with GPS and an electronic map, we explained how it works and even parsed an NMEA-0183 sentence together. We also discussed some application ideas, e.g. monitoring kids with GPS. The discussion, in particular the privacy issues that came up, was quite interesting.
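To give an idea of what parsing such a sentence involves, here is a minimal sketch in Python for a GGA (position fix) sentence. The field layout follows the NMEA-0183 standard; the helper names and the simplified error handling are my own, not the code used in the course.

```python
# Minimal sketch of parsing an NMEA-0183 GGA sentence (position fix data).
# Latitude/longitude arrive as ddmm.mmmm / dddmm.mmmm plus a hemisphere letter.

def nmea_to_degrees(value: str, hemisphere: str) -> float:
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) to signed decimal degrees."""
    point = value.index(".")
    degrees = float(value[: point - 2])   # digits before the minutes field
    minutes = float(value[point - 2:])    # mm.mmmm part
    result = degrees + minutes / 60.0
    return -result if hemisphere in ("S", "W") else result

def parse_gga(sentence: str) -> dict:
    """Parse a $GPGGA sentence into time, latitude, longitude, and altitude."""
    body, _, _checksum = sentence.strip().lstrip("$").partition("*")
    fields = body.split(",")
    assert fields[0] == "GPGGA", "not a GGA sentence"
    return {
        "time_utc": fields[1],
        "latitude": nmea_to_degrees(fields[2], fields[3]),
        "longitude": nmea_to_degrees(fields[4], fields[5]),
        "altitude_m": float(fields[9]),
    }

example = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(example))
```

The example sentence decodes to roughly 48.117° N, 11.517° E at 545.4 m altitude; a real implementation would also verify the checksum after the `*`.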

I can really see that for some projects running focus groups with kids could be fun for the kids and of great value for the projects.

Why shopping is fun – thoughts on intelligent user interfaces or why n=1 is not enough

Some weeks ago I saw one of the intelligent scales in the wild (= outside the lab) for the first time. At that time I was really impressed by how well it worked (sample size: n=1, product: banana, packaging: no bag, recognition performance: 100%). Last time I was too late, so there was no time to play or watch other people using it – but today I had some 5 minutes to invest.

The basic idea of the scale is simple and quite convincing. Customers put their purchase on the scale. A camera makes a guess what it is, and the selection menu is reduced to the candidates matching the camera’s guess. Additionally, there is always a button to show all options (as in the old version without the camera). It appears that this should make things easier.
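The interaction logic can be sketched in a few lines. This is purely my own hypothetical reconstruction of how such a camera-assisted menu might work; the product names, the confidence threshold, and the fallback behaviour are assumptions, not the scale’s actual implementation.

```python
# Hypothetical sketch of the scale's menu logic: a camera classifier returns
# ranked guesses with confidences, and the selection menu is reduced to the
# matching products. A "show all" fallback is always available.

ALL_PRODUCTS = ["banana", "apple", "tomato", "cucumber", "orange", "grapes"]

def build_menu(camera_guesses, confidence_threshold=0.3):
    """Return the reduced menu; fall back to the full product list
    when the classifier is not confident about anything."""
    candidates = [product for product, confidence in camera_guesses
                  if confidence >= confidence_threshold
                  and product in ALL_PRODUCTS]
    return candidates if candidates else list(ALL_PRODUCTS)

# Confident guess: the menu shrinks to the likely candidates.
print(build_menu([("banana", 0.9), ("cucumber", 0.4)]))  # ['banana', 'cucumber']
# Unconfident guess (e.g. fruit hidden in a bag): full menu as fallback.
print(build_menu([("apple", 0.1)]))
```

The fallback path matters for exactly the failure case observed below: when recognition fails (e.g. items in bags), the interface silently degrades to the old full-menu behaviour, which may be why shoppers never notice the intelligence at all.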

I observed people trying to weigh different fruits and vegetables, in bags and without bags (obviously I tried it myself, too). It did not work very often, but interestingly people did not care much. It looked as if most people did not really realise that this is meant to be an intelligent user interface. They probably just wondered why the display kept showing different things, but as they are intelligent themselves they found a way to deal with it.

Overall it seems that it does really well on bananas that are not wrapped in a bag (my initial test case) and not so well on many other things. I think the scale is an interesting example of an invisible interface.

Overall this is again a reminder that user tests with small sample sizes may be utterly misleading.

A magic lens for the mass market?

Making things visible that cannot be seen with the naked eye? Overlaying personalized information onto objects or images? Such concepts make good fiction, but are there interesting use cases? Michael Rohs from T-Labs in Berlin visited B-IT and Fraunhofer IAIS today, and in his talk and demos he showed us several such scenarios that no longer appear far in the future or fictional.

During his PhD at ETH Zürich, Michael developed the Visual Codes system (http://people.inf.ethz.ch/rohs/visualcodes/), which provides a basis for augmented reality interaction on mobile phones. Some of his current work, in particular overlaying information on large paper maps, shows impressively the potential of using personal mobile devices, such as phones, as interfaces that combine static and dynamic information. I think everyone who tries out information overlays on a phone can easily imagine that this could become commonplace pretty soon. The questions are rather what the first pervasive and convincing applications for mobile augmented reality will be and when we will find them in the wild. In our discussion a number of interesting application areas came up; in particular games and advertising seem very appealing.

In the morning Michael got a tour of B-IT and some demos. One of the tasks in our practical course on developing location- and context-aware systems is also related to a magic device from the Harry Potter books – a map with a moving dot 😉

Till Schäfers, a student at B-IT who is currently at T-Labs in Berlin for his master’s thesis (supervised by Michael and me), gave a presentation this morning on the work he has started. We had a longer discussion on issues of teleconferencing – in particular mobile teleconferencing – where many interesting ideas and issues came up. (Remark on research & reality: somehow it is sad that even after many years of research in teleconferencing, the tools we use for phone meetings in our daily work are still poor – but things are getting better.) Another issue we discussed in detail is the question of how to involve the user in the design process even more, while at the same time giving the user interface designer the freedom to design and define novel ways of interaction and exciting interfaces.

Besides the scientific exchange, having visitors is a great way to learn about new gadgets. Michael and Till showed the SHAKE SK6 sensor/actuator attached to a phone – quite an interesting tool for research.

Steffi Beckhaus showed us Virtual Reality beyond 3D Visual Displays

Steffi Beckhaus, professor for computer science at the University of Hamburg, visited our group at B-IT. Meeting her was another classic example of how small the scientific community in user interface research is. I met Tanja Döring, one of Steffi’s students, at TEI’07 in Baton Rouge. They had a very interesting paper on novel user interfaces for art historians – “The Card Box at Hand: Exploring the Potentials of a Paper-Based Tangible Interface for Education and Research in Art History”. Looking up Steffi Beckhaus’ details afterwards, I saw that some years ago she was at Fraunhofer IMK (which is now part of IAIS) in Birlinghoven, in the virtual environments group.

After lunch we had a few demos (as always on short notice, since I had forgotten to announce them beforehand). Florian Alt demoed the current stage of his annotation platform for the web, and Paul Holleis showed some examples of the work on modelling physical interaction and cross-device prototyping, which we will present at CHI in 2 weeks. We also showed one of the student projects from the last course on developing mobile applications (CardiViz) and the ongoing work of our current lab course on context and location awareness. We realized that we have very similar values and methods in teaching. In particular, pushing students to bring their own creativity into projects which they drive and for which we set a corridor seems a very efficient way to teach people how to create novel user experiences.

In her presentation Steffi showed us details of her lab in Hamburg (we were so impressed that we invited ourselves for a visit). In particular, the combination of “classical VR” with tangible and novel user interfaces is intriguing. Overall her work is highly interesting, as it looks into the whole-body experience (e.g. sound floor, chairIO) and connects much more than I expected to our research theme of embedded interaction.

In our discussion Steffi brought up a video of the Pain-Station (http://www.fursr.com/). It is basically a Pong game where you are penalized for low performance by an actuator that inflicts pain (I think with heat and a whip). To be successful you can either play well or take more pain than your opponent 😉 This led us to a discussion of how far one should go in designing novel user experiences.

Talking about tactile output, Steffi mentioned the project VRIB (for more info see VRIB at Fraunhofer IMK or at University of Ilmenau), which ran from 2000 to 2004 and worked on novel interaction devices and metaphors. This includes interesting issues that may also be relevant for our work on tactile output on mobile devices.

Wearable Activity Recognition – Talk by Kristof van Laerhoven

How many sensors do we need? That was one point in the discussion after Kristof’s talk. His approach, in contrast to many others, is to use a large number of sensors for activity recognition. This offers more freedom with regard to the placement and variation of sensors, and also provides redundancy, but makes the overall system more complex. His argument is that in the long term (when sensors are an integral part of garments) the multi-sensor approach is superior – let’s wait some 10 years and then discuss it again 😉

From a scientific perspective, and in particular for machine learning (where he sees the greatest challenge), the larger number of distributed sensors is more interesting. I find his Porcupine2 work (http://porcupine2.sourceforge.net/) quite interesting.

Kristof’s web page: www.mis.informatik.tu-darmstadt.de/People/kristof

Guests @ UIE

We are delighted to have a group of people visiting from Lancaster University in the UK. Prof. Nigel Davies and two of his PhD students, Oliver Storz and André Hesse, came to Bonn last week and will stay with us for the next 3 months. It is great that Nigel decided to spend his sabbatical at Fraunhofer IAIS and B-IT, University of Bonn.

Earlier this evening we already had a brainstorming session at the beer garden of the Bahnhöfchen in Beuel (www.bahnhoefchen.de). I am really looking forward to interesting joint projects in the next weeks and months.

GPS statistics – or 25 hours in the car not driving?

My navigation system records simple statistics. I was surprised to see that over the last 4 months I have been sitting in my car for about 25 hours without actually driving. Overall this means that 30% of my time in the car is spent not driving (usually waiting at a traffic light or a railroad crossing – I am not sure whether the 30% is a Bonn phenomenon).
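As a quick sanity check of these numbers: if 25 idle hours make up 30% of the total time in the car, the total and the actual driving time follow directly (the figures below are derived from the two numbers above, nothing more).

```python
# Sanity check: 25 idle hours = 30% of total time in the car over 4 months.
idle_hours = 25
idle_fraction = 0.30

total_hours = idle_hours / idle_fraction      # total time sitting in the car
driving_hours = total_hours - idle_hours      # time actually driving

print(round(total_hours, 1), round(driving_hours, 1))  # 83.3 58.3
```

So roughly 83 hours in the car in 4 months, of which about 58 were spent driving – which makes the 25 wasted hours feel even more substantial.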

This makes me wonder whether it would be useful to design technologies that provide entertainment or education during these forced waiting times. Could I have used these 25 hours to learn or improve a language? Or could I have watched some funny clips on YouTube? Or is listening to the radio all we need while driving (and not driving)?

Paul Holleis joined UIE in Bonn

The Embedded Interaction Research Group (www.hcilab.org) moved at the beginning of April from the University of Munich to the University of Bonn. Paul Holleis (www.paul-holleis.de), who has worked on the project for the last 3 years, has now joined us in Bonn. It is really great to have him and his experience here!

Matthias Kranz (www.ibr.cs.tu-bs.de/users/kranz/), who also worked on the project, is now in Braunschweig working with Michael Beigl. Braunschweig is too close not to work together – this term we are running a seminar on context-aware and ambient systems in parallel at TU Braunschweig and the University of Bonn.

User tests and final presentations of the lab course

This week the final presentations of the lab course were due. It was really interesting to see what motivated students can achieve in just 4 weeks! The projects explored the idea of contextual ECG, and the students implemented data acquisition, transmission over the network, and visualisation. The user studies showed that there is quite some potential in the idea (even though there is still some way to go before the system is perfect 😉). We plan to publish a paper on the results of the course.

Having a digital presence after life?

An event this week reminded me that life has an end. Receiving a link to a Google Maps page (satellite image) showing where someone found his last resting place demonstrates how far new technologies have penetrated our lives. This made me think about a demo I saw at Ubicomp last year (http://mastaba.digital-shrine.com/). The digital family shrine did not really relate to my cultural experience and felt somehow strange, but it was still very interesting and intriguing. I wonder what form of digital presence after life will become common in Germany.