Technologies of Globalization 2008 in Darmstadt

I chaired the stream „Aging as a Global Issue“ at the conference Technologies of Globalization 2008 in Darmstadt. It is always very surprising how different research is across different disciplines…

On-Kwok Lai from Kwansei Gakuin University gave a really interesting overview of the current situation in Asia, and in particular in Japan, with regard to the aging society. Learning more about aging, I find myself more often thinking that current “aging research” treats a symptom rather than looking at the real problem. And the real problem seems to be reduced reproduction in industrial states – basically we do not have enough children anymore. This leads to the obvious question: would researching solutions and technologies that make it easier to raise children while working or studying not be the more important challenge?

In another talk Birgit Kasper reported on a study of multi-modal travel in Köln („Patenticket“). In the trial, people who already hold a yearly ticket introduce older people to public transport; the newcomers receive a 3-month flat-rate ticket for public transport in the region. The benefits seem to come from two sides: (1) people do not worry whether they have the right ticket, and (2) having a person who acts as a patron supports learning the public transport system. Looking at the results, a radical suggestion would be to introduce a city congestion charge for cars (e.g. as in London) and in return give free public transport to everyone – would this simple solution not solve many of our problems (economic, ecological, …), or would it create a two-tier society?

The social event was at castle Frankenstein – but surprisingly everyone came back in the morning unharmed 😉

My Random Papers Selection from Ubicomp 2008

Over the last days a number of interesting papers were presented, so it is not easy to pick a selection… Here is my random paper selection from Ubicomp 2008 that links to our work (the conference papers link into the Ubicomp 2008 proceedings in the ACM DL; our references are below):

Don Patterson presented a survey on using IM. One of the findings surprised me: people seem to ignore „busy“ settings. In some work we did in 2000 on mobile availability and sharing context, users indicated that they would respect this setting, or at least explain themselves when interrupting someone who is busy [1,2] – perhaps it is a cultural difference, or people have changed. It may be interesting to run a similar study in Germany.

Woodman and Harle from Cambridge presented a pedestrian localization system for large indoor environments. Using an XSens device, they combine dead reckoning with knowledge gained from a 2.5D map. In the experiment they seem to get results similar to an Active Bat system – by only putting the device on the user, which for large buildings is much cheaper than putting up infrastructure.
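The details of combining the inertial data with the map are in the paper; as a rough sketch of the dead-reckoning core such systems build on (the function name, the constant stride length, and the step/heading inputs are my own simplification):

```python
import math

def dead_reckon(start, headings, step_length=0.7):
    """Integrate detected steps into a 2D position estimate.

    headings: one heading (radians) per detected step, e.g. taken
    from the inertial unit's orientation estimate.
    step_length: assumed constant stride length in meters.
    """
    x, y = start
    for heading in headings:
        x += step_length * math.cos(heading)
        y += step_length * math.sin(heading)
    return (x, y)

# Four steps heading east (0 rad), then two heading north (pi/2).
pos = dead_reckon((0.0, 0.0), [0.0] * 4 + [math.pi / 2] * 2)
```

The map knowledge then acts as a constraint: position hypotheses that would walk through walls can be discarded, which is what keeps the drift of such a naive integration in check.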
Andreas Bulling presented work exploring the use of EOG goggles for context awareness and interaction. The EOG approach is complementary to video-based systems. The use of gestures for context-awareness follows a similar idea as our work on eye gestures [3]. We had an interesting discussion about further ideas, and perhaps there is a chance in the future to directly compare the approaches and work together.
One paper, „On using existing time-use study data for ubiquitous computing applications“, gave links to interesting public data sets (e.g. the US time-use survey). The time-use survey data covers the US and gives detailed data on how people use their time.
The University of Salzburg presented initial work on an augmented shopping system that builds on the idea of implicit interaction [4]. In the note they report a study where they used 2 cameras to observe a shopping area and calculated the „busy spots“ in the area. Additionally, they used sales data to identify the best-selling products. Everything was displayed on a public screen; an interesting result was that people seemed not really interested in other shoppers’ behavior… (in contrast to what we observe in e-commerce systems).
Researchers from Hitachi presented a new idea for browsing and navigating content based on the metaphor of using a book. It is based on the concept of a bendable surface. It complements interestingly previous work in this domain, Gummi, presented at CHI 2004 by Schwesig et al.
[1] Schmidt, A., Takaluoma, A., and Mäntyjärvi, J. 2000. Context-Aware Telephony Over WAP. Personal Ubiquitous Comput. 4, 4 (Jan. 2000), 225-229. DOI= http://dx.doi.org/10.1007/s007790070008
[2] Albrecht Schmidt, Tanjev Stuhr, Hans Gellersen. Context-Phonebook – Extending Mobile Phone Applications with Context. Proceedings of Third Mobile HCI Workshop, September 2001, Lille, France.
[3] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
[4] Albrecht Schmidt. Implicit Human Computer Interaction Through Context. Personal Technologies, Vol 4(2), June 2000

Our Posters at Ubicomp 2008

In the poster session we showed 3 ideas of ongoing work from our lab and got some interesting comments and had good discussions. 
Ali presented initial findings from our experiments with multi-tactile output on a mobile phone [1]. He created a prototype with 6 vibration motors that can be individually controlled. In the studies we looked at which locations of the vibration elements and which patterns can be spotted by the user. In short, it seems that putting the vibration motors in the corners works best.
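As an illustration of what such patterns might look like in software (the motor layout, pattern names, and encoding are hypothetical, not taken from the prototype):

```python
# Each pattern is a sequence of (motor_index, duration_ms) pulses.
# Motors 0-3 sit in the corners of the device, 4-5 along the edges;
# the study suggests the corner positions are easiest to spot.
PATTERNS = {
    "incoming_call": [(0, 200), (3, 200), (0, 200), (3, 200)],
    "new_message":   [(1, 100), (2, 100)],
}

def pattern_length_ms(pattern):
    """Total duration of a pattern when pulses play back to back."""
    return sum(duration for _, duration in pattern)

call_length = pattern_length_ms(PATTERNS["incoming_call"])  # 800 ms
```

Encoding patterns as explicit (motor, duration) sequences keeps them easy to vary systematically in a study, e.g. swapping corner motors for edge motors while holding the timing constant.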

Florian showed ideas on counting page impressions on public advertising screens, which we sketched out with Antonio’s group in Münster [2]. The basic idea is to use sensors to estimate the number of people passing by. To calibrate such a system (as the sensor observations are only incomplete), we propose to use existing ways of counting people (e.g. access gates to public transport) and extrapolate based on this information.
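A minimal sketch of that calibration step, assuming a single linear scale factor between sensor observations and true counts (the numbers and function names are made up for illustration):

```python
def calibrate(sensor_counts, gate_counts):
    """Estimate a scale factor at locations where both a sensor and a
    ground-truth counter (e.g. an access gate) are available."""
    return sum(gate_counts) / sum(sensor_counts)

def estimate_passersby(sensor_count, scale):
    # Extrapolate the incomplete sensor observation to an absolute count.
    return sensor_count * scale

# Hypothetical calibration data: the sensors see only a fraction of
# the people the gates count.
scale = calibrate(sensor_counts=[120, 80], gate_counts=[300, 200])
estimate = estimate_passersby(90, scale)
```

A real deployment would likely need per-location factors and time-of-day effects, but the principle of anchoring noisy sensor data to a trusted counting infrastructure stays the same.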

I presented Dagmar’s poster on initial work on providing information on fuel economy in an engaging way to the user [3]. In a focus group study we observed that people are more aware of the price of a journey when using public transport than when using a car. For the car they know the price per liter well, but have to calculate the price for a typical trip (e.g. to work). We suggest ideas where one can compete with others (e.g. from one’s social network) in saving energy on a specific route.
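The underlying arithmetic is trivial, which is exactly why it is rarely done mentally; a sketch (prices and consumption figures are hypothetical):

```python
def trip_cost(distance_km, consumption_l_per_100km, price_per_liter):
    """Cost of a single trip - the figure drivers rarely compute,
    even though they know the price per liter by heart."""
    liters = distance_km * consumption_l_per_100km / 100.0
    return liters * price_per_liter

# Hypothetical commute: 25 km at 7.5 l/100km with fuel at 1.50 per liter.
cost = trip_cost(25, 7.5, 1.50)
```

Showing this per-trip figure, rather than the per-liter price, is what would make the car comparable to a public transport ticket for the same route.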

[1] Alireza Sahami Shirazi, Paul Holleis, Albrecht Schmidt. Rich Tactile Output for Notification on Mobile Phones (2-page paper, poster). Adjunct proceedings of Ubicomp 2008, Seoul, Korea, p26-27
[2] Albrecht Schmidt, Florian Alt, Paul Holleis, Jörg Müller, Antonio Krüger. Creating Log Files and Click Streams for Advertisements in Physical Space (2-page paper, poster). Adjunct proceedings of Ubicomp 2008, Seoul, Korea, p28-29
[3] Dagmar Kern, Paul Holleis, Albrecht Schmidt. Reducing Fuel Consumption by Providing In-situ Feedback on the Impact of Current Driving Actions (2-page paper, poster), Adjunct proceedings of Ubicomp 2008, Seoul, Korea, p18-19

Doctoral colloquium at Ubicomp 2008

In the doctoral colloquium at Ubicomp 2008 we saw an interesting mix of topics, including work on context-awareness, interaction in smart spaces, home infrastructures, and urban environments. Overall, again the observation: ubicomp topics are very broad (at least in the beginning) and it is not easy to narrow them down.

As Ali works on tactile feedback, it was very interesting to see the presentation of Kevin Li on eyes-free interaction. He has an upcoming paper at UIST 2008 that is worthwhile to check out [1]. Interestingly, some of the questions relating to „easily learnable“ or „intuitive“ connected to the discussion we had 2 weeks ago at the automotive UI workshop – what are tactile stimuli that are natural, that we associate meaning with, without explanations or learning?
There are many more papers to read if you are interested in tactile communication and output, here are two suggestions [2] and [3].
[1] Li, K. A., Baudisch, P., Griswold, W. G., Hollan, J. D. Tapping and rubbing: exploring new dimensions of tactile feedback with voice coil motors. To appear in Proc. UIST ’08.

[2] Chang, A. and O’Modhrain, S., Jacob, R., Gunther, E., and Ishii, H. ComTouch: design of a vibrotactile communication device. Proc. Of DIS’02, pp. 312-320.

[3] Malcolm Hall, Eve Hoggan, Stephen A. Brewster: T-Bars: towards tactile user interfaces for mobile touchscreens. Mobile HCI 2008: 411-414

PS: Just one remark on the term „framework“ (a favorite word in dissertation and paper titles) – it is not a clear term and expectations vary widely, hence it makes sense to think twice before using it 😉

Is there a Net Generation? Keynote by Rolf Schulmeister

This year the German HCI conference (Mensch und Computer) is co-located with the German e-learning conference. The opening keynote this morning by Rolf Schulmeister was an interesting analysis of how young people use media in the context of learning. Over the last years there have been plenty of popular science books that tell us how humankind changes with the internet, e.g. Digital Natives vs. Digital Immigrants by Marc Prensky (extract). The talk seriously questioned whether a “Net Generation” exists; it seems that many of the properties associated with it (e.g. short attention spans, use of the internet to socialize, revealing feelings through the internet, preference for graphics over text) are based on studies where people select themselves to participate in the studies/questionnaires.

The paper that accompanies the talk (Gibt es eine »Net Generation«? – “Is there a Net Generation?”, in German, over 130 pages) provides many interesting references and is worth a further look.

Keynote at MobileHCI2008: BJ Fogg – mobile miracle

BJ Fogg gave the opening keynote at mobile HCI 2008 in Amsterdam. The talk explained very well the concept of Captology (computers as persuasive technologies) and the newer projects are very inspiring. He put the following questions at the center: How can machines change people’s minds and hearts? How can you automate persuasion? His current focus is on behavior change.

He reported on a class he is teaching at Stanford on designing Facebook applications. The metric for success (and on this students are marked) is the uptake of the created application over the time of the course. He reported that the applications created in the course attracted 16 million users in total and about 1 million on a daily basis – that is quite impressive. This is also an example of the approach he advocates: “rather try than think”. The rationale is to try out a lot of things (in the real market with real users, alpha/beta culture) rather than optimize a single idea. The background is that nowadays implementation and distribution are really easy, and the market decides whether it is hot or not… His advice is to create a minimal, simple application and then push it forward. All big players (e.g. Google, Flickr) have done it this way…

With regard to distribution methods for persuasion he referred over and over to social networks (and in particular Facebook). His argument is that by these means one is able to reach many people in a trusted way. He compared this to the introduction of radio, but highlighted the additional qualities. Overall he feels that Web 2.0 is only a warm-up for all the applications to come on the mobile in the future.

At the center of the talk was the prediction that within 15 years mobile devices will be the main technology for persuasion. He argued that mobile phones are the greatest invention of humankind – more important than writing and transportation systems (e.g. planes, cars). He explained why mobile phones are so interesting based on three metaphors: heart, wrist watch, magic wand.

Heart – we love our mobile phones. He argued that if users do not have their phone with them they miss it and that this is true love. Users form a very close relationship with their phone and spend more time with the phone than with anything/anyone else. He used the image of “mobile marriage”…

Wrist watch – the phone is always by our side. It is part of the overall experience in the real world, providing 3 functions: Concierge (reactive, can be asked for advice, relationship based on trust), Coach (proactive, comes to me and tells me, pushing advice), and Court Jester (entertains us, we are amused by it, creates fun with content that persuades).

Magic wand – phones have amazing and magical capabilities. A phone provides humans with a lot of capabilities (remote communication, coordination, information access) that empower many things.

Given this very special relationship, the phone may become a supplement to our decision making (or, more generally, our brain). The phone will advise us what to do (e.g. navigation systems tell us where to go) and we love it. We may get this in other areas, too – being told what movie to see, what food to eat, when to do exercise… not fully convinced 😉

He gave a very interesting suggestion for how to design good mobile applications. Basically, to create a mobile application the steps are: (1) identify the essence of the application, (2) strip everything from the application that is not essential to providing this, and (3) you have a potentially compelling mobile application. I have heard this before; nevertheless, it seems features still sell, but that could change with the next generation.

He provided some background on the basics of persuasion. For achieving a certain target behavior you need 3 things – and all at the same time: 1. sufficient motivation (they need to want to do it), 2. the ability to do what they want (you either have to train them or make it very easy – making it easier is better), and 3. a trigger. After the session someone pointed out that this is similar to what you have in crime (means, motive, opportunity) 😉
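Fogg's model can be caricatured in a few lines of code; the multiplicative combination and the threshold value are my own simplification for illustration, not part of his formulation:

```python
def behavior_occurs(motivation, ability, trigger_present, threshold=1.0):
    """Fogg's model, sketched: a behavior happens when motivation and
    ability together cross an activation threshold AND a trigger fires.
    High motivation can compensate for low ability and vice versa,
    but without a trigger nothing happens."""
    return trigger_present and (motivation * ability) >= threshold

behavior_occurs(0.9, 0.5, True)   # motivated but hard to do -> False
behavior_occurs(0.9, 2.0, True)   # made easy enough -> True
behavior_occurs(0.9, 2.0, False)  # no trigger, no behavior -> False
```

The sketch also shows why he stresses that making things easier beats training: raising ability moves every future trigger above the threshold.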

For creating persuasive technologies there are 3 central pairs describing motivation:

  • Instant pleasure and gratification vs. instant pain
  • Anticipation of good or hope vs. anticipation of the bad or fear (it is noted that hope is the most important motivator)
  • Social acceptance vs. social rejection

When designing systems it is essential to go for simplicity. He named the following five factors that influence simplicity: (1) money, (2) physical effort, (3) brain cycles, (4) social deviation, and (5) non-routine. Antonio pointed out that this links to Gerd Gigerenzer’s work at the MPI on intuitive intelligence [1].

[1] Gigerenzer, G. 2007. Gut Feelings: The Intelligence of the Unconscious. New York: Viking Press.

ISUVR 2008, program day2

Norbert Streitz – Trade-off for creating smartness

Norbert gave an interesting overview of research in the domain of ubicomp based on his personal experience – from Xerox PARC to the Disappearing Computer. He motivated the transition from information design to experience design. Throughout the work we see a trade-off between providing “smart support” to the user and “privacy” (or control over privacy). One of the questions is whether we will re-invent privacy or whether it will become a commodity…
As one of the concrete examples Norbert introduced the Hello.Wall, done in the context of the Ambient Agoras project [1]. This again brought up the discussion of public vs. private with regard to the patterns that are displayed. (photos of some slides from Norbert’s talk)

[1] Prante, T., Stenzel, R., Röcker, C., Streitz, N., and Magerkurth, C. 2004. Ambient agoras: InfoRiver, SIAM, Hello.Wall. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 763-764. DOI= http://doi.acm.org/10.1145/985921.985924 (Video Hello.Wall)
Albrecht Schmidt – Magic Beyond the Screen
I gave a talk on “Human Interaction in Ubicomp – Magic beyond the screen”, highlighting work on user interfaces beyond the screen that we did over the last years. It is motivated by the fact that classical limitations in computer science (e.g. frame rate, processing, storage) are becoming less and less important to many application areas, and that human-computer interaction is in many areas becoming the critical part of the system.
In my talk I suggested using “user illusion” as a design tool for user interfaces beyond the desktop. This involves two steps: 1) describe precisely the user illusion the application will create, and 2) investigate which parameters influence the quality of the created user illusion for the application. (photos of some slides from Albrecht’s talk, slides in PDF)
Jonathan Gratch – Agents with Emotions

His talk was on virtual reality, with a focus on learning/training applications. One central thing I learned is that the timing of non-verbal cues (e.g. nodding) is crucial to producing engagement when speaking with an agent. This may also be interesting for other forms of computer-created feedback.
He gave a specific example of how assigning blame works. It was really interesting to see that there are solid theories in this domain that can be used concretely to design novel interfaces. He argues that appraisal theory can explain people’s emotional states, and this could improve context-awareness.

He showed an example of emotional dynamics, and it is amazing how fast emotions happen. One way of explaining this is to look at different dynamics: dynamics in the world, dynamics in the perceived world relationship, and dynamics through action. (photos of some slides from Jonathan’s talk)
Daijin Kim – Vision based human robot interaction
Motivated by the vision that after the personal computer we will see the “Personal Robot”, Daijin investigates natural ways to interact with robots. For vision-based interaction with robots he named a set of difficulties, in particular: people are moving, robots are moving, and illumination and distances are variable. The proposed approach is to generate a pose-, expression-, and illumination-specific active appearance model.
He argues that face detection is a basic requirement for vision-based human-robot interaction. The examples he showed in the demo movie were very robust with regard to movement, rotation, and expression, and it works at very variable distances. The talk contained further examples of fast face recognition and recognition of simple head gestures. Related to our research, it seems such algorithms could be really interesting for creating context-aware outdoor advertisement. (photos of some slides from Daijin’s talk)

Steven Feiner – AR for prototyping UIs

Steven showed some work on mobile projector and mobile device interaction, where they used augmented reality for prototyping different interaction methods. He introduced Spot-light (position-based interaction), orientation-based interaction, and widget-based interaction for an arm-mounted projector. Using the Synaptics touchpad and projection may also be an option for our car-UI related research. For interaction with a wrist device (e.g. a watch) he introduced string-based interaction, which is a simple but exciting idea: you pull a string out of a device, and the distance as well as the direction are the resulting input parameters [2].
In a further example Steven showed a project that supports field work on the identification of plants: capture an image of the real leaf, compare it with the database, and match against a subset that shares the features. Their prototype was done on a tablet, and he showed ideas for improving this with AR; it is very clear that this may also be an interesting application (for the general user) on the mobile phone.

New interfaces, and in particular gestures, are hard to explore if you have no idea what is supported by the system. In his example on visual hints for tangible gestures using AR [3], Steven showed interesting options in this domain. One approach follows a “preview style” of visualization – they call it ghosting. (photos of some slides from Steven’s talk)

[2] Blasko, G., Narayanaswami, C., and Feiner, S. 2006. Prototyping retractable string-based interaction techniques for dual-display mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI ’06. ACM, New York, NY, 369-372. DOI= http://doi.acm.org/10.1145/1124772.1124827
[3] White, S., Lister, L., and Feiner, S. Visual Hints for Tangible Gestures in Augmented Reality. Proc. ISMAR 2007, IEEE and ACM Int. Symp. on Mixed and Augmented Reality, Nara, Japan, November 13-16, 2007. (youtube video)

If you are curious about the best papers, please see the photos from the closing 🙂

Finally some random things to remember:

  • Richard W. DeVaul did some work on subliminal user interfaces – working towards the vision of zero-attention UIs [4]
  • Jacqueline Nadel (development psychologist) did studies on emotions between parents and infants using video conferencing
  • V2 – Toward a Universal Remote Console Standard http://myurc.org/whitepaper.php
  • iCat and Gaze [5]

[4] Richard W. DeVaul. The Memory Glasses: Wearable Computing for Just-in-Time Memory Support. PhD Thesis. MIT 2004. http://devaul.net/~rich/DeVaulDissertation.pdf

[5] Poel, M., Breemen, A.v., Nijholt, A., Heylen, D.K., & Meulemans, M. (2007). Gaze behavior, believability, likability and the iCat. Proceedings Sixth Workshop on Social Intelligence Design: CTIT Workshop Proceedings Series (pp. 109–124). http://www.vf.utwente.nl/~anijholt/artikelen/sid2007-1.pdf

Korean Dinner – too many dishes to count

In the evening we had a great Korean dinner. I enjoyed it very much – and I imagine we have seen everything people eat in Korea – at some point I lost count of the number of different dishes. The things I tasted were very delicious but completely different from what I typically eat.

Dongpyo Hong convinced me to try a traditional dish (pork, fish, and kimchi) and it was very different in taste. I was not adventurous enough to try a dish that still moved (even though the movement was marginal – can you spot the difference in the picture?) – but probably I missed something, as Dongpyo Hong enjoyed it.

I took some photos at the conference dinner.

ISUVR 2008, program day1

The first day of the symposium was exciting and we saw a wide range of contributions, from context-awareness to machine vision. In the following are a few random notes on some of the talks…

Thad Starner, new idea on BCI
Thad Starner gave a short history of his experience with wearable computing. He argued that common mobile keyboards (e.g. mini-QWERTY, multi-tap, T9) are fundamentally not suited for real mobile tasks. He showed the studies of typing with the Twiddler – the data is impressive. He argues for chording keyboards, and generally he suggests that “typing while walking is easier than reading while walking”. I buy the statement, but I still think the cognitive load created by the Twiddler does not make it generally suited. He also showed a very practical idea of how errors on mini-keyboards can be reduced using text prediction [1] – that relates to the last exercise we did in the UIE class. (photos of some slides from Thad’s talk)

He suggested a very interesting approach to “speech recognition” using EEG. The basic idea is that people use sign language (either really moving their hands or just imagining moving their hands) and that the signals of the motor cortex are measured using a brain interface. This is so far the most convincing idea for a human-computer brain interface that I have seen… I am really curious to see the results of Thad’s study! He also suggested an interesting idea for sensors – using a similar approach as in hair replacement technology (I have no idea about this so far, but I probably should read up on it).

[1] Clawson, J., Lyons, K., Rudnick, A., Iannucci, R. A., and Starner, T. 2008. Automatic whiteout++: correcting mini-QWERTY typing errors using keypress timing. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 573-582. DOI= http://doi.acm.org/10.1145/1357054.1357147

Anind Dey – intelligible context
Anind provided an introduction to context-awareness. He characterized context-aware applications as situationally appropriate applications that adapt to context and eventually increase the value to the user. Throughout the talk he made a number of convincing cases that context has to be intelligible to the users; otherwise problems arise when the system guesses wrong (and it will get it wrong sometimes).

He showed an interesting example of how data collected from a community of drivers (in this case cab drivers) is useful to predict the destination and the route. These examples are very interesting and show a great potential for learning and context prediction from community activity. I think sharing information beyond location may enable many new applications.
In one study they used a windscreen projection display (probably a HUD – I have to follow up on this). We should find out more about it, as we are looking into such displays ourselves in one of the ongoing master projects. (photos of some slides from Anind’s talk)

Vincent Lepetit – object recognition is the key for tracking
Currently most tracking work in computer vision uses physical sensors or visual markers. The vision, however, is really clear – just do the tracking based on natural features. In his talk he gave an overview of how close we are to this vision. He showed examples of markerless visual tracking based on natural features. One is a book – which really looks like a book with normal content and no markers – which has an animated overlay.
His take-away message was “object recognition is the key for tracking” and it is still difficult. (photos of some slides from Vincent’s talk)

Jun Park – bridge the tangibility gap
In his talk he discussed the tangibility gap in design – in different stages of the design and the design evaluation it is important to feel the product. He argues that rapid prototyping using 3D printing is not well suited, especially as it is comparably slow and it is very difficult to render material properties. His alternative approach is augmented foam: a visually non-realistic but tangible foam mock-up combined with augmented reality techniques. Basically, the CAD model is rendered on top of the foam.

The second part of the talk was concerned with e-commerce. The basic idea is that users can overlay a product into their own environment to experience its size and how well it matches the place. (photos of some slides from Jun’s talk)

Paper Session 1 & 2

For the paper sessions see the program and some photos from the slides.
photos of some slides from paper session 1
photos of some slides from paper session 2

GIST, Gwangju, Korea

Yesterday I arrived in Gwangju for the ISUVR-2008. It is my first time in Korea and it is an amazing place. Together with some of the other invited speakers and PhD students we went for a Korean style dinner (photos from the dinner). The campus (photos from the campus) is large and very new.

This morning we had the opportunity to see several demos from Woontack’s students in the U-VR lab. There is a lot of work on haptics and mobile augmented reality going on. See the pictures of the open lab demo for yourself…

In the afternoon we had some time for culture and sightseeing – the country side parks are very different from Europe. Here are some of the photos of the trip around Gwangju and see http://www.damyang.go.kr/

In 2005 Yoosoo Oh, a PhD student with Woontack Woo at GIST, was a visiting student in our lab in Munich. We worked together on issues related to context awareness and published a paper discussing the whole design cycle, and in particular the evaluation (based on a heuristic approach), of context-aware systems [1].

[1] Yoosoo Oh, Albrecht Schmidt, Woontack Woo: Designing, Developing, and Evaluating Context-Aware Systems. MUE 2007: 1158-1163

Photos – ISUVR2008 – GIST – Korea