Keynote by Pertti Huuskonen: Ten Views to Context Awareness

Pertti Huuskonen from Nokia presented his keynote at PerCom in Mannheim. I worked with Pertti in 1999 on the European project TEA – creating context-aware phones [1].

After telling us about CERN and some achievements in physics, he raised the point that being context-aware is an essential human skill. Basically, culture is context-awareness – learning how to behave appropriately in life is essential to being accepted. We do this by looking at other people, learning how they act and how others react. By “knowing how to behave” we become fit for social life – which questions the notion of intuitive use, as it seems that most of it is learned or copied from others.

He gave a nice overview of where context-awareness is useful. One very simple example he showed is that people typically establish context at the start of a phone call.

One example of a future to come may be ubiquitous spam – where context may be the enabler for spam, but also the enabler for blocking adverts. He also showed the potential of context in the large, see Nokoscope. His keynote was refreshing – and as was clearly visible, he has a good sense of humor 😉

[1] Schmidt, A., Aidoo, K. A., Takaluoma, A., Tuomela, U., Laerhoven, K. V., and Velde, W. V. 1999. Advanced Interaction in Context. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 89-101.

Sensor modules for acceleration, gyro, and magnetic field

I came across two sensor modules recently released by STMicroelectronics.

In the future there will probably be very few mobile devices without such sensors. When we worked on the TEA project in 1999 this seemed far away… What can you do with sensors on a mobile? There are a few papers to read: using them for context-awareness [1], for interaction [2], [3], and for creating smart devices [4].
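
To make the idea concrete, here is a minimal sketch in Python (my own illustration, not code from [1]): classify a short window of accelerometer samples into a coarse motion context. The variance thresholds are made-up values for illustration.

```python
import math

def classify_motion(samples):
    """Derive a coarse motion context from 3-axis accelerometer data.

    samples: list of (x, y, z) accelerations in m/s^2.
    The variance thresholds below are illustrative assumptions,
    not calibrated values.
    """
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in samples]
    mean = sum(mags) / len(mags)
    var = sum((m - mean) ** 2 for m in mags) / len(mags)
    if var < 0.05:      # hardly any movement energy
        return "stationary (e.g. lying on a table)"
    if var < 2.0:       # moderate, periodic movement
        return "carried / walking"
    return "shaken or vigorous activity"

# A device at rest reads roughly (0, 0, g):
print(classify_motion([(0.0, 0.1, 9.8)] * 50))  # -> stationary
```

Real systems (like TEA) fuse several such cues over time rather than thresholding a single window.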

Last week in Finland I met Antti Takaluoma (one of the co-authors of [1]); he now works for offcode.fi – I saw impressive Linux hardware – I expect cool stuff to come 🙂

[1] Schmidt, A., Aidoo, K. A., Takaluoma, A., Tuomela, U., Laerhoven, K. V., and Velde, W. V. 1999. Advanced Interaction in Context. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 89-101.

[2] Hinckley, K., Pierce, J., Sinclair, M., and Horvitz, E. 2000. Sensing techniques for mobile interaction. In Proceedings of the 13th Annual ACM Symposium on User interface Software and Technology (San Diego, California, United States, November 06 – 08, 2000). UIST ’00. ACM, New York, NY, 91-100. DOI= http://doi.acm.org/10.1145/354401.354417

[3] Albrecht Schmidt. Implicit human computer interaction through context. Personal and Ubiquitous Computing, 4(2):191-199, June 2000

[4] A. Schmidt and K. Van Laerhoven. How to Build Smart Appliances? IEEE Personal Communications, pp. 66-71, 2001.

Workshop at MobileHCI: Context-Aware Mobile Media and Mobile Social Networks

Together with colleagues from Nokia, VTT, and CMU we organized a workshop on Context-Aware Mobile Media and Mobile Social Networks at MobileHCI 2009.

The topic came up in discussions some time last year. It is very clear that social networks have moved towards mobile scenarios and that utilizing context and contextual media adds a new dimension. The workshop program is very diverse, ranging from studies of usage practices to novel technological solutions for contextual media and applications.

One topic that would be interesting to look at further is using (digital) social networks for health care. Looking back in history, it is evident that your direct social group was the set of people who helped you in case of illness or accident. For conditions and illnesses that cause a loss of mobility or memory, it could be interesting to build applications on top of digital social networks that provide help. This seems like a possible project topic.

In one discussion we explored what would happen if we changed our default communication behavior from closed/secret (e.g. email and SMS) to public (e.g. bulletin boards). I took the example of organizing this workshop: our communication has been largely over email and has not been public. Had it been open (e.g. on a public forum), we probably would have organized the workshop in the same way, but at the same time provided an example of how one can organize a workshop, and thereby perhaps useful information for future workshop chairs. In this case there are few privacy concerns – but imagine all communication were public? We would learn a lot about how the world works…

About 10 years ago we published a paper arguing that there is more to context than location [1]. However, looking at our workshop it seems location is still the dominant context people think of. Many of the presentations and discussions included the term context, but the examples focused on location. Perhaps location is all we need? Or perhaps we should look more closely to find the benefits of other contexts?

[1] A. Schmidt, M. Beigl, H.W. Gellersen (1999) There is more to context than location, Computers & Graphics, vol. 23, no. 6, pp. 893-901.

New project on ambient visualization – kick-off meeting in Munich

We met in Munich at Docomo Euro Labs to start a new project that is related to context and ambient visualizations. And everyone already got bunnies 😉

Related to this there is a large and very interesting project: IYOUIT. Among other things it can record and share your context – if you have a Nokia Series 60 phone you should try it out. As far as I remember it was voted best mobile experience at MobileHCI 2008.

My Random Papers Selection from Ubicomp 2008

Over the last days a number of interesting papers were presented, so it is not easy to pick a selection… Here is my random selection of Ubicomp 2008 papers that link to our work (the conference papers link into the Ubicomp 2008 proceedings in the ACM DL; our references are below):

Don Patterson presented a survey on using IM. One of the findings surprised me: people seem to ignore “busy” settings. In work we did in 2000 on mobile availability and context sharing, users indicated that they would respect such a setting, or at least explain themselves when interrupting someone who is busy [1,2] – perhaps it is a cultural difference, or people have changed. It may be interesting to run a similar study in Germany.

Woodman and Harle from Cambridge presented a pedestrian localization system for large indoor environments. Using an XSens device, they combine dead reckoning with knowledge gained from a 2.5D map. In the experiment they appear to get results similar to an Active Bat system – by only instrumenting the user (which, for large buildings, is much cheaper than installing infrastructure).
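
The dead-reckoning core of such a system fits in a few lines; here is a minimal sketch (my simplification – the actual paper additionally constrains the estimate using the 2.5D map): each detected step advances the position along the current heading.

```python
import math

def dead_reckon(start, headings, step_length=0.7):
    """Integrate per-step headings into a 2D path.

    start: (x, y) position in metres.
    headings: one heading (radians) per detected step.
    step_length: assumed stride length in metres.
    """
    x, y = start
    path = [(x, y)]
    for h in headings:
        x += step_length * math.cos(h)
        y += step_length * math.sin(h)
        path.append((x, y))
    return path

# Ten steps heading roughly north-east:
print(dead_reckon((0.0, 0.0), [math.pi / 4] * 10)[-1])
```

The interesting part is exactly what this sketch leaves out: drift grows with every step, which is why map constraints (walls you cannot walk through) are needed to keep the estimate bounded.
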
Andreas Bulling presented work in which he explored the use of EOG goggles for context-awareness and interaction. The EOG approach is complementary to video-based systems. The use of gestures for context-awareness follows a similar idea as our work on eye gestures [3]. We had an interesting discussion about further ideas, and perhaps there is a chance in the future to directly compare the approaches and work together.
One paper, “on using existing time-use study data for ubiquitous computing applications”, gave links to interesting public data sets (e.g. the US time-use survey). The time-use survey covers the US and gives detailed data on how people use their time.
The University of Salzburg presented initial work on an augmented shopping system that builds on the idea of implicit interaction [4]. In the note they report a study in which they used two cameras to observe a shopping area and calculated the “busy spots” in the area. Additionally they used sales data to determine the best-selling products. Everything was displayed on a public screen; an interesting result was that people were apparently not really interested in other shoppers’ behavior… (in contrast to what we observe in e-commerce systems).
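
Computing such “busy spots” is essentially a 2D histogram over observed positions; a minimal sketch, assuming the camera pipeline already delivers (x, y) floor positions of shoppers (the cell size is an arbitrary choice of mine):

```python
from collections import Counter

def busy_spots(positions, cell=1.0, top=3):
    """Bin observed floor positions into grid cells and return
    the most visited cells.

    positions: (x, y) tuples in metres.
    cell: grid resolution in metres (assumed value).
    """
    counts = Counter((int(x // cell), int(y // cell)) for x, y in positions)
    return counts.most_common(top)

observations = [(1.2, 3.4), (1.4, 3.1), (1.3, 3.6), (8.0, 2.0)]
print(busy_spots(observations))  # -> [((1, 3), 3), ((8, 2), 1)]
```
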
Researchers from Hitachi presented a new idea for browsing and navigating content based on the metaphor of using a book. It is based on the concept of a bendable surface. It complements, in an interesting way, previous work in this domain – Gummi, presented at CHI 2004 by Schwesig et al.
[1] Schmidt, A., Takaluoma, A., and Mäntyjärvi, J. 2000. Context-Aware Telephony Over WAP. Personal Ubiquitous Comput. 4, 4 (Jan. 2000), 225-229. DOI= http://dx.doi.org/10.1007/s007790070008
[2] Albrecht Schmidt, Tanjev Stuhr, Hans Gellersen. Context-Phonebook – Extending Mobile Phone Applications with Context. Proceedings of Third Mobile HCI Workshop, September 2001, Lille, France.
[3] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
[4] Albrecht Schmidt. Implicit Human Computer Interaction Through Context. Personal Technologies, Vol 4(2), June 2000

Which way did you fly to Korea?

We got a new USB GPS tracker (a Mobile Action GT100) and had to try it out on the trip to Korea. It worked very well compared to the other devices we have had so far. It got the bus trip at Düsseldorf airport right and the entire flight from Amsterdam to Seoul. Tracking worked well in the taxi from the airport to the hotel. While walking in downtown Seoul it still performed OK (given the urban canyons), with some outliers.

It did not get any signal while we were on the Fokker 50 from Düsseldorf to Amsterdam 🙁 I slept a few hours on the flight to Seoul, but I think someone took a photo (probably of me) over Mongolia… If you wonder whether you are allowed to use your GPS on the plane – you are – at least with KLM (according to a random website, http://gpsinformation.net/airgps/airgps.htm 🙂

Back in Korea, Adverts, Driving and Entertainment

On the way into town we got a really good price for the taxi (just make a mental note never to negotiate anything with Florian and Alireza at the same time 😉 It seems taxi driving is sort of boring – our driver, too, watched television while driving (like the taxi driver some weeks ago in Amsterdam). I think we should seriously think more about entertainment for micro-breaks, because I still believe it is for a good reason that watching TV while driving is not allowed.

Seoul is an amazing place. There are many digital signs and electronic adverts. Walking back to the hotel I saw a large digital display on a rooftop (I would guess about 10 meters by 6 meters). When working, it is probably nice. But right now it is malfunctioning, and the experience of walking down the road is worsened, as one inevitably looks at it. I wonder if in 10 years we will be used to broken large-screen displays…

Thermo-imaging camera at the border – useful for Context-Awareness?

When we re-entered South Korea I saw a guard looking at all arriving people with an infrared camera. It was very hot outside, so people’s heads showed up very red. My assumption is that this is used to spot people who have a fever – however, I could not verify this.

Looking at the images created while people moved around, I realized that this may be an interesting technology for many tasks in activity recognition, home health care, and wellness. For several tasks in context-awareness it seems straightforward to get the required information from an infrared camera. In the computer vision domain there seem to have been several papers on this problem in recent years.

We could think of an interesting project topic related to infrared-based activity recognition or interaction, to be integrated into our new lab… There are probably some fairly cheap thermo-sensing cameras around to use in research – for home-brew use you can find hints on the internet, e.g. how to turn a digital camera into an IR cam – pretty similar to what we did with the webcams for our multi-touch table.
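
As a back-of-the-envelope sketch of why this looks straightforward: given a thermal frame as a 2D array of temperatures, picking out skin-temperature regions is a simple threshold operation (the temperature band below is my assumption, not a calibrated value):

```python
import numpy as np

def warm_regions(frame, lo=30.0, hi=38.0):
    """Mask of pixels in a plausible skin-temperature band (deg C).

    frame: 2D numpy array of temperatures.
    The 30-38 C band is an illustrative assumption.
    """
    mask = (frame >= lo) & (frame <= hi)
    return mask, mask.mean()  # mask and fraction of 'warm' pixels

frame = np.full((120, 160), 22.0)   # room-temperature background
frame[40:80, 60:100] = 34.0         # a person-sized warm blob
mask, coverage = warm_regions(frame)
print(f"warm pixels: {coverage:.1%}")  # ~8% of the frame
```

Fever screening would then amount to comparing the peak temperature inside detected face regions against a threshold – though, as said, I could not verify that this is what the system at the border actually does.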

The photo is from http://en.wikipedia.org/wiki/Thermography

ISUVR 2008, program day 2

Norbert Streitz – Trade-off for creating smartness

Norbert gave an interesting overview of research in the ubicomp domain based on his personal experience – from Xerox PARC to the disappearing computer. He motivated the transition from Information Design to Experience Design. Throughout the work we see a trade-off between providing “smart support” to the user and “privacy” (or control over privacy). One of the questions is whether we will re-invent privacy or whether it will become a commodity…
As one of the concrete examples, Norbert introduced the Hello.Wall, done in the context of Ambient Agoras [1]. This again brought up the discussion of public vs. private with regard to the patterns that are displayed. (photos of some slides from Norbert’s talk)

[1] Prante, T., Stenzel, R., Röcker, C., Streitz, N., and Magerkurth, C. 2004. Ambient agoras: InfoRiver, SIAM, Hello.Wall. In CHI ’04 Extended Abstracts on Human Factors in Computing Systems (Vienna, Austria, April 24 – 29, 2004). CHI ’04. ACM, New York, NY, 763-764. DOI= http://doi.acm.org/10.1145/985921.985924 (Video Hello.Wall)
Albrecht Schmidt – Magic Beyond the Screen
I gave a talk on “Human Interaction in Ubicomp – Magic Beyond the Screen”, highlighting work on user interfaces beyond the screen that we have done over the last years. It is motivated by the facts that classical limitations in computer science (e.g. frame rate, processing, storage) are becoming less and less important in many application areas, and that human-computer interaction is in many areas becoming the critical part of the system.
In my talk I suggested using “user illusion” as a design tool for user interfaces beyond the desktop. This involves two steps: 1) describe precisely the user illusion the application will create, and 2) investigate which parameters influence the quality of the created user illusion for the application. (photos of some slides from Albrecht’s talk, Slides in PDF)
Jonathan Gratch – Agents with Emotions

His talk focused on the domain of virtual reality, with an emphasis on learning/training applications. One central thing I learned is that the timing of non-verbal cues (e.g. nodding) is crucial to creating engagement when speaking with an agent. This may also be interesting for other forms of computer-created feedback.
He gave a specific example of how assigning blame works. It was really interesting to see that there are solid theories in this domain that can be used concretely to design novel interfaces. He argues that appraisal theory can explain people’s emotional states, and that this could improve context-awareness.

He showed an example of emotional dynamics, and it is amazing how fast emotions happen. One way of explaining this is to look at different dynamics: dynamics in the world, dynamics in the perceived world relationship, and dynamics through action. (photos of some slides from Jonathan’s talk)
Daijin Kim – Vision-based human robot interaction
Motivated by the vision that after the personal computer we will see the “Personal Robot”, Daijin investigates natural ways to interact with robots. For vision-based interaction with robots he named a set of difficulties, in particular: people are moving, robots are moving, and illumination and distances are variable. The proposed approach is to generate a pose-, expression-, and illumination-specific active appearance model.
He argues that face detection is a basic requirement for vision-based human-robot interaction. The examples he showed in the demo movie were very robust with regard to movement, rotation, and expression, and the system works across very variable distances. The talk contained further examples of fast face recognition and recognition of simple head gestures. Related to our research, it seems such algorithms could be really interesting for creating context-aware outdoor advertisements. (photos of some slides from Daijin’s talk)

Steven Feiner – AR for prototyping UIs

Steven showed some work on mobile projector and mobile device interaction, where they used augmented reality for prototyping different interaction methods. He introduced Spotlight (position-based interaction), orientation-based interaction, and widget-based interaction for an arm-mounted projector. Using a Synaptics touchpad and projection may also be an option for our car-UI related research. For interaction with a wrist device (e.g. a watch) he introduced string-based interaction, which is a simple but exciting idea: you pull a string out of the device, and the distance as well as the direction are the resulting input parameters [2].
In a further example Steven showed a project that supports field work on the identification of plants: capture an image of a real leaf, compare it with a database, and match it against the subset with similar features. Their prototype was done on a tablet, and he showed ideas for improving this with AR; it is very clear that this may also be an interesting application (for the general user) on the mobile phone.

New interfaces, and in particular gestures, are hard to explore if you have no idea what the system supports. In his example of visual hints for tangible gestures using AR [3], Steven showed interesting options in this domain. One approach follows a “preview style” visualization – they call it ghosting. (photos of some slides from Steven’s talk)

[2] Blasko, G., Narayanaswami, C., and Feiner, S. 2006. Prototyping retractable string-based interaction techniques for dual-display mobile devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 – 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI ’06. ACM, New York, NY, 369-372. DOI= http://doi.acm.org/10.1145/1124772.1124827
[3] White, S., Lister, L., and Feiner, S. Visual Hints for Tangible Gestures in Augmented Reality. Proc. ISMAR 2007, IEEE and ACM Int. Symp. on Mixed and Augmented Reality, Nara, Japan, November 13-16, 2007. (YouTube video)

If you are curious about the best papers, please see the photos from the closing 🙂

Finally some random things to remember:

  • Richard W. DeVaul did some work on subliminal user interfaces – working towards the vision of zero-attention UIs [4]
  • Jacqueline Nadel (development psychologist) did studies on emotions between parents and infants using video conferencing
  • V2 – Toward a Universal Remote Console Standard http://myurc.org/whitepaper.php
  • iCat and Gaze [5]

[4] Richard W. DeVaul. The Memory Glasses: Wearable Computing for Just-in-Time Memory Support. PhD Thesis. MIT 2004. http://devaul.net/~rich/DeVaulDissertation.pdf

[5] Poel, M., Breemen, A.v., Nijholt, A., Heylen, D.K., & Meulemans, M. (2007). Gaze behavior, believability, likability and the iCat. Proceedings Sixth Workshop on Social Intelligence Design: CTIT Workshop Proceedings Series (pp. 109–124). http://www.vf.utwente.nl/~anijholt/artikelen/sid2007-1.pdf

ISUVR 2008, program day 1

The first day of the symposium was exciting, and we saw a wide range of contributions from context-awareness to machine vision. In the following are a few random notes on some of the talks…

Thad Starner, new idea on BCI
Thad Starner gave a short history of his experience with wearable computing. He argued that common mobile keyboards (e.g. mini-QWERTY, multi-tap, T9) are fundamentally not suited to real mobile tasks. He showed the studies of typing with the Twiddler – the data is impressive. He argues for chording keyboards, and generally he suggests that “typing while walking is easier than reading while walking”. I buy the statement, but I still think that the cognitive load created by the Twiddler keeps it from being generally suitable. He also showed a very practical idea of how errors on mini-keyboards can be reduced using text prediction [1] – which relates to the last exercise we did in the UIE class. (photos of some slides from Thad’s talk)
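
The timing intuition behind [1] can be shown with a toy rule (just the intuition, not the actual Automatic Whiteout classifier, which uses learned keypress-timing features): on a mini-QWERTY keyboard, an accidental press of a neighbouring key typically arrives within a few tens of milliseconds of the intended one.

```python
# Neighbour map for a few mini-QWERTY keys (illustrative subset).
ADJACENT = {
    "a": set("qwsz"),
    "s": set("awedxz"),
    "d": set("serfcx"),
}

def drop_accidental(presses, window_ms=40):
    """presses: list of (key, timestamp_ms) tuples.

    If an adjacent key arrives within window_ms of the previous
    press, treat it as accidental and drop it - a toy version of
    the timing idea in [1]; the 40 ms window is my assumption.
    """
    kept = []
    for key, t in presses:
        if kept:
            prev_key, prev_t = kept[-1]
            if t - prev_t < window_ms and key in ADJACENT.get(prev_key, set()):
                continue  # likely an accidental neighbouring press
        kept.append((key, t))
    return "".join(k for k, _ in kept)

print(drop_accidental([("a", 0), ("s", 25), ("d", 300)]))  # -> "ad"
```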

He also suggested a very interesting approach to “speech recognition” using EEG. The basic idea is that people use sign language (either actually moving their hands or just imagining the movement) and that the signals of the motor cortex are measured using a brain interface. This is so far the most convincing idea for a human-computer brain interface that I have seen… I am really curious to see the results of Thad’s study! He also suggested an interesting idea for sensors – using a similar approach as in hair-replacement technology (I know nothing about this so far, but I should probably read up on it).

[1] Clawson, J., Lyons, K., Rudnick, A., Iannucci, R. A., and Starner, T. 2008. Automatic whiteout++: correcting mini-QWERTY typing errors using keypress timing. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 573-582. DOI= http://doi.acm.org/10.1145/1357054.1357147

Anind Dey – intelligible context
Anind provided an introduction to context-awareness. He characterized context-aware systems as situationally appropriate applications that adapt to context and eventually increase the value to the user. Throughout the talk he made a number of convincing cases that context has to be intelligible to users, otherwise problems arise when the system guesses wrong (and it will get it wrong sometimes).

He showed an interesting example of how data collected from a community of drivers (in this case cab drivers) can be used to predict the destination and the route. These examples are very interesting and show a great potential for learning and context prediction from community activity. I think sharing information beyond location may enable many new applications.
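
A first-order sketch of such community-based prediction (my own simplification – the actual work is considerably more sophisticated): count road-segment transitions across all drivers and predict the most frequent successor of the current segment.

```python
from collections import Counter, defaultdict

def train(trips):
    """trips: lists of road-segment ids driven by the community."""
    successors = defaultdict(Counter)
    for trip in trips:
        for a, b in zip(trip, trip[1:]):
            successors[a][b] += 1
    return successors

def predict_next(successors, segment):
    """Most frequent successor of a segment, or None if unseen."""
    counts = successors.get(segment)
    return counts.most_common(1)[0][0] if counts else None

trips = [["A", "B", "C"], ["A", "B", "D"], ["E", "B", "C"]]
model = train(trips)
print(predict_next(model, "B"))  # -> "C" (seen twice vs. once)
```
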
In one study they used a windscreen-projected display (probably a HUD – I have to follow up on this). We should find out more about it, as we are looking into such displays ourselves in one of the ongoing master projects. (photos of some slides from Anind’s talk)

Vincent Lepetit – object recognition is the key for tracking
Currently most work in computer vision uses physical sensors or visual markers. The vision, however, is really clear – just do the tracking based on natural features. In his talk he gave an overview of how close we are to this vision. He showed examples of markerless visual tracking based on natural features. One is a book – which really looks like a book, with normal content and no markers – that has an animated overlay.
His take-away message was “object recognition is the key for tracking” and it is still difficult. (photos of some slides from Vincent’s talk)

Jun Park – bridge the tangibility gap
In his talk he discussed the tangibility gap in design – in different stages of design and design evaluation it is important to feel the product. He argues that rapid prototyping using 3D printing is not well suited, especially as it is comparatively slow and it is very difficult to render material properties. His alternative approach is augmented foam: a visually non-realistic but tangible foam mock-up combined with augmented reality techniques. Basically, the CAD model is rendered on top of the foam.

The second part of the talk was concerned with e-commerce. The basic idea is that users can overlay a product onto their own environment, to experience its size and how well it matches the place. (photos of some slides from Jun’s talk)

Paper Session 1 & 2

For the paper sessions see the program and some photos from the slides.
photos of some slides from paper session 1
photos of some slides from paper session 2