ISUVR 2008, program day 1

The first day of the symposium was exciting and we saw a wide range of contributions, from context-awareness to machine vision. In the following I have a few random notes on some of the talks…

Thad Starner, new idea on BCI
Thad Starner gave a short history of his experience with wearable computing. He argued that common mobile keyboards (e.g. mini-QWERTY, multi-tap, T9) are fundamentally not suited to real mobile tasks. He showed the studies of typing with the Twiddler – the data is impressive. He argues for chording keyboards and generally suggests that “typing while walking is easier than reading while walking”. I buy the statement, but I still think that the cognitive load created by the Twiddler does not make it generally suitable. He also showed a very practical idea of how errors on mini-keyboards can be reduced using text prediction [1] – that relates to the last exercise we did in the UIE class. (photos of some slides from Thad’s talk)
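To make the keypress-timing idea concrete, here is a toy sketch of how timing alone can flag likely errors on a mini-QWERTY keyboard – presses of neighbouring keys that follow each other implausibly fast. The threshold and the adjacency map are made-up assumptions for illustration; the actual Automatic Whiteout++ classifier in [1] is considerably more sophisticated.

```python
# Toy sketch of timing-based mini-QWERTY error detection (NOT the actual
# Automatic Whiteout++ classifier). Threshold and neighbour map are
# illustrative assumptions.
NEIGHBOURS = {  # hypothetical adjacency on a mini-QWERTY layout
    'q': 'wa', 'w': 'qes', 'e': 'wrd', 'r': 'etf',
}

FAST_MS = 60  # assumed: presses this close together are suspicious


def flag_likely_errors(keystrokes):
    """keystrokes: list of (char, timestamp_ms). Flags a press as a likely
    accidental 'roll-off' if it follows a neighbouring key too quickly."""
    flagged = []
    for (prev_c, prev_t), (c, t) in zip(keystrokes, keystrokes[1:]):
        if t - prev_t < FAST_MS and c in NEIGHBOURS.get(prev_c, ''):
            flagged.append((c, t))
    return flagged


# Example: 'w' pressed 30 ms after neighbouring 'q' gets flagged.
print(flag_likely_errors([('q', 0), ('w', 30), ('e', 200)]))
```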

He also suggested a very interesting approach to “speech recognition” using EEG. The basic idea is that people use sign language (either actually moving their hands or just imagining the movement) and that the signals of the motor cortex are measured using a brain interface. This is so far the most convincing idea for a human–computer brain interface that I have seen… I am really curious to see the results of Thad’s study! He also suggested an interesting idea for sensors – using a similar approach as in hair replacement technology (I know nothing about this so far, but I should probably read up on it).
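For readers wondering what a system would do with such signals: a common baseline in BCI work on motor imagery is band-power features in the mu/beta band plus a linear classifier. The sketch below illustrates only that generic baseline – it is not Thad’s method, and the sampling rate and the synthetic stand-in data are assumptions.

```python
# Generic motor-imagery EEG classification pipeline (band-power features
# plus a linear classifier) - a common BCI baseline, not Starner's approach.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.svm import SVC

FS = 250  # assumed EEG sampling rate in Hz


def band_power(trial, lo=8.0, hi=30.0):
    """trial: (channels, samples). Log band power per channel in the
    mu/beta band, where motor-cortex activity shows up."""
    b, a = butter(4, [lo / (FS / 2), hi / (FS / 2)], btype='band')
    filtered = filtfilt(b, a, trial, axis=1)
    return np.log(np.var(filtered, axis=1))


# Synthetic stand-in data: 40 trials, 8 channels, 2 s each, binary labels
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 8, 2 * FS))
labels = rng.integers(0, 2, 40)  # e.g. two imagined signs

features = np.array([band_power(t) for t in trials])
clf = SVC(kernel='linear').fit(features[:30], labels[:30])
print('held-out accuracy:', clf.score(features[30:], labels[30:]))
```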

[1] Clawson, J., Lyons, K., Rudnick, A., Iannucci, R. A., and Starner, T. 2008. Automatic Whiteout++: correcting mini-QWERTY typing errors using keypress timing. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05–10, 2008). CHI ’08. ACM, New York, NY, 573-582. DOI= http://doi.acm.org/10.1145/1357054.1357147

Anind Dey – intelligible context
Anind provided an introduction to context-awareness. He characterized context-aware applications as situationally appropriate applications that adapt to context and eventually increase the value to the user. Throughout the talk he made a number of convincing cases that context has to be intelligible to the users; otherwise problems arise when the systems guess wrong (and they will get it wrong sometimes).

He showed an interesting example of how data collected from a community of drivers (in this case cab drivers) is useful to predict the destination and the route. These examples are very interesting and show a great potential for learning and context prediction from community activity. I think sharing information beyond location may enable many new applications.
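As a minimal illustration of learning from community traces, one could model movement as a first-order Markov chain over discretized map zones and predict the most likely next zone from what the community did before. This is just the basic idea, not the system shown in the talk; the zone names below are invented.

```python
# Minimal sketch of community-based route/destination prediction: a
# first-order Markov model over discretized map zones, trained on shared
# traces. Not the system from the talk - just the basic idea.
from collections import Counter, defaultdict

transitions = defaultdict(Counter)  # zone -> Counter of next zones


def train(traces):
    """traces: lists of zone IDs visited by drivers in the community."""
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            transitions[a][b] += 1


def predict_next(zone):
    """Most likely next zone given the community's past behaviour."""
    nxt = transitions[zone]
    return nxt.most_common(1)[0][0] if nxt else None


train([['airport', 'highway', 'downtown'],
       ['airport', 'highway', 'station'],
       ['airport', 'highway', 'downtown']])
print(predict_next('highway'))  # -> 'downtown'
```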
In one study they used a windscreen projection display (probably a HUD – I have to follow up on this). We should find out more about it, as we are looking into such displays ourselves for one of the ongoing master projects. (photos of some slides from Anind’s talk)

Vincent Lepetit – object recognition is the key for tracking
Currently most tracking work in computer vision uses physical sensors or visual markers. The vision, however, is very clear – just do the tracking based on natural features. In his talk he gave an overview of how close we are to this vision. He showed examples of markerless visual tracking based on natural features. One is a book – which really looks like a book, with normal content and no markers – that has an animated overlay.
His take-away message was that “object recognition is the key for tracking” – and it is still difficult. (photos of some slides from Vincent’s talk)
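To make the “natural features” idea concrete, here is a minimal sketch of markerless detection with today’s OpenCV: match SIFT keypoints between a reference image (say, the book cover) and a camera frame, then estimate a homography that anchors the overlay. This is a generic baseline under stated assumptions, not Vincent’s actual method.

```python
# Sketch of markerless tracking from natural features with OpenCV: match
# keypoints between a reference image (e.g. the book cover) and a camera
# frame, then estimate a homography for the overlay. Assumes a modern
# OpenCV build; the talk's own method differs in detail.
import cv2
import numpy as np


def find_object_pose(reference, frame):
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(reference, None)
    kp2, des2 = sift.detectAndCompute(frame, None)
    if des1 is None or des2 is None:
        return None

    # Lowe's ratio test keeps only distinctive matches.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        if len(pair) == 2 and pair[0].distance < 0.7 * pair[1].distance:
            good.append(pair[0])
    if len(good) < 10:
        return None  # object not (reliably) in view

    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # The homography maps reference coordinates into the frame - the anchor
    # for rendering an animated overlay on the page.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```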

Jun Park – bridge the tangibility gap
In his talk he discussed the tangibility gap in design – in different stages of design and design evaluation it is important to feel the product. He argues that rapid prototyping using 3D printing is not well suited, especially as it is comparatively slow and it is very difficult to render material properties. His alternative approach is augmented foam: a visually non-realistic but tangible foam mock-up combined with augmented reality techniques. Basically, the CAD model is rendered on top of the foam.

The second part of the talk was concerned with e-commerce. The basic idea is that users can overlay a product onto a view of their own environment, to experience its size and how well it matches the place. (photos of some slides from Jun’s talk)

Paper Session 1 & 2

For the paper sessions see the program and some photos from the slides.
photos of some slides from paper session 1
photos of some slides from paper session 2

GIST, Gwangju, Korea

Yesterday I arrived in Gwangju for ISUVR-2008. It is my first time in Korea and it is an amazing place. Together with some of the other invited speakers and PhD students, we went for a Korean-style dinner (photos from the dinner). The campus (photos from the campus) is large and very new.

This morning we had the opportunity to see several demos from Woontack’s students in the U-VR lab. There is a lot of work on haptics and mobile augmented reality going on. See the pictures of the open lab demo for yourself…

In the afternoon we had some time for culture and sightseeing – the countryside parks are very different from those in Europe. Here are some photos of the trip around Gwangju; see also http://www.damyang.go.kr/

In 2005 Yoosoo Oh, a PhD student with Woontack Woo at GIST, was a visiting student in our lab in Munich. We worked together on issues related to context awareness and published a joint paper discussing the whole design cycle and in particular the evaluation (based on a heuristic approach) of context-aware systems [1].

[1] Yoosoo Oh, Albrecht Schmidt, Woontack Woo: Designing, Developing, and Evaluating Context-Aware Systems. MUE 2007: 1158-1163

Photos – ISUVR2008 – GIST – Korea

Embedded Information – Airport Seoul

When I arrived at the airport in Seoul I saw an interesting instance of embedded information. In Munich we wrote a workshop paper [1] about the concept of embedded information; the key criteria are:

  • Embedding information where and when it is useful
  • Embedding information in a most unobtrusive way
  • Providing information in a way that requires no interaction

Looking at an active computer display (OK, it was broken) that circled the luggage belt (it is designed to list the names of people who should contact the information desk) and a fixed display on a suitcase, I was reminded of this paper. With this set-up people become aware of the information without really making an effort. With active displays becoming more ubiquitous I expect more innovation in this domain. We are currently working on some ideas related to situated and embedded displays for advertising – if we find funding we will push further… the ideas are there.
[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop “Ubiquitous Display Environments”, September 2004

Visitors to our Lab

Christopher Lueg (professor at the School of Computing & Information Systems at the University of Tasmania) and Trevor Pering (senior researcher at Intel Research in Seattle) visited our lab this week. The timing is not perfect, but at least I am not the only interesting person in the lab 😉

Together with Roy Want and others, Trevor published an article in IEEE Pervasive Computing some time ago that is still worthwhile to read: “Disappearing Hardware” [1]. It clearly shows the trend that in the near future it will be feasible to include processing and wireless communication in any manufactured product, and it outlines the resulting challenges. One of those challenges, which we look into in our lab, is how to interact with such systems… In a 2002 paper, Christopher also raised some very fundamental questions about how far we will get with intelligent devices [2].

[1] Want, R., Borriello, G., Pering, T., and Farkas, K. I. 2002. Disappearing Hardware. IEEE Pervasive Computing 1, 1 (Jan. 2002), 36-47. DOI= http://dx.doi.org/10.1109/MPRV.2002.993143

[2] Lueg, C. 2002. On the Gap between Vision and Feasibility. In Proceedings of the First international Conference on Pervasive Computing (August 26 – 28, 2002). Lecture Notes In Computer Science, vol. 2414. Springer-Verlag, London, 45-57.

How to prove that ubicomp solutions are valid?

Over the last few years there have been many workshops and sessions in the ubicomp community that address the evaluation of systems. At Pervasive 2005 in Munich I co-organized a workshop on application-led research with George Coulouris and others. For me, one of the central outcomes was that we – as ubicomp researchers – need to team up with experts in the application domain when evaluating our technologies and solutions, and that we must stay involved in this part of the research. Just handing it over to the other domain for evaluation will not bring us the insights we need to move the field forward. There is a workshop report, which appeared in IEEE Pervasive Computing, that discusses the topic in more detail [1].

On Friday I met a very interesting expert in the domain of gerontology. Elisabeth Steinhagen-Thiessen is chief consultant and director of the Protestant Geriatric Centre of Berlin and professor of internal medicine/gerontology at the Charité in Berlin. We talked about opportunities for activity recognition in this domain and discussed potential set-ups for studies.

[1] Richard Sharp, Kasim Rehman. What Makes Good Application-led Research? IEEE Pervasive Computing, Volume 4, Number 3, July–September 2005.

Innovative in-car systems, Taking photos while driving

Wolfgang just sent me another picture (taken by a colleague of his) with more information in the head-up display. It shows a speed of 180 km/h and I wonder who took the picture – usually only the driver can see such a display 😉

For assistance, information and entertainment systems in cars (and I assume we could consider taking photos an entertainment task) there are guidelines [1, 2, 3] – an overview presentation in German can be found in [4]. Students in the Pervasive Computing class have to look at them and design a new context-aware information/assistance system – perhaps photography in the car could be a theme… I am already curious about the results of the exercise.

[1] The European Statement of Principles (ESoP) on Human Machine Interface in Automotive Systems
[2] AAM Guidelines
[3] JAMA Japanese Guidelines
[4] Andreas Weimper, Harman International Industries, Neue EU Regelungen für Safety und Driver Distraction (New EU Regulations for Safety and Driver Distraction)

(thanks to Wolfgang Spießl for sending the references to me)

Integration of Location into Photos, Tangible Interaction

Recently I came across a device that tracks the GPS position and additionally has a card reader (http://photofinder.atpinc.com/). If you plug in a card with photos, it will integrate location data into the JPEGs, using time as the common reference.

It is a further interesting example of software moving away from the generic computer/PC (where programs that combine a GPS track with photos are available, e.g. GPSPhotoLinker) into an appliance; hence the usage complexity can – in principle, I have not tried this specific device so far – be massively reduced and the usability increased. See the simple analysis below, and a sketch of the underlying time-matching idea after it:

Tangible Interaction using the appliance:

  • buying the device
  • plugging in a card
  • waiting till it is ready

vs.

GUI Interaction:

  • starting a PC
  • buying/downloading the application
  • installing the application
  • finding the application
  • locating the images in a folder
  • locating the GPS track in a folder
  • waiting till it is ready

…this could become one of my future examples where tangible UIs work 😉
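The core of what such an appliance does internally is a simple time-based join: take each photo’s capture time and attach the nearest GPS fix from the track. A minimal sketch of that matching step follows; writing the coordinates back into the JPEG’s EXIF fields would additionally need an EXIF library and is omitted here.

```python
# Sketch of the appliance's core job: match each photo's capture time
# against the recorded GPS track and attach the nearest fix.
from bisect import bisect_left


def geotag(photos, track):
    """photos: list of (filename, unix_time).
    track: list of (unix_time, lat, lon), sorted by time."""
    times = [t for t, _, _ in track]
    tagged = []
    for name, ptime in photos:
        i = bisect_left(times, ptime)
        # pick whichever neighbouring fix is closer in time
        candidates = [j for j in (i - 1, i) if 0 <= j < len(track)]
        j = min(candidates, key=lambda j: abs(times[j] - ptime))
        _, lat, lon = track[j]
        tagged.append((name, lat, lon))
    return tagged


track = [(1000, 48.15, 11.58), (1060, 48.16, 11.59)]
print(geotag([('img_01.jpg', 1050)], track))  # -> nearest fix at t=1060
```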

Wolfgang Spießl introduces context-aware car systems

Wolfgang visited us for three days and we talked a lot about context-awareness in the automotive domain. Given the sensors included in cars and some recent ideas on context fusion, it seems feasible that in the near future context-aware assistance and information systems will gain new functionality. Since I finished my PhD dissertation [1] there has been a move in two directions: context prediction and communities as a source of context. One example of a community-based approach is http://www.iyouit.eu, which evolved out of ContextWatcher / IST-MobiLife.

In his lecture he showed many examples of how pervasive computing already happens in the car. After the talk we had the chance to see and discuss user interface elements in current cars – in particular the head-up display. Wolfgang gave a demonstration of the CAN bus signals related to interaction with the car that are available to create context-aware applications. The head-up display (which appears to float just in front of the car) created discussions on interesting use cases for these types of displays – beyond navigation and essential driving information.
In the lecture, questions came up about how feasible/easy it is to do your own development using the UI elements in the car – basically: how can I run my applications in the car? This is not yet really supported 😉 However, in a previous post [2] I argued that this is probably to come… and I still see this trend… It is an interesting thought how one could provide third parties access to UI components in the car without giving away control…
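As a rough illustration of the kind of access Wolfgang demonstrated: with the python-can library one can listen to frames on a CAN interface and turn selected signals into context events. The frame ID and byte layout below are invented – real CAN signal definitions are manufacturer-specific and usually not public.

```python
# Hedged sketch of tapping vehicle signals for context-aware applications
# using python-can. The arbitration ID and byte layout are made up - real
# signal definitions are manufacturer-specific.
import can

SPEED_FRAME_ID = 0x123  # hypothetical ID of a "vehicle speed" frame


def watch_speed(channel='can0'):
    bus = can.interface.Bus(channel=channel, bustype='socketcan')
    while True:
        msg = bus.recv(timeout=1.0)
        if msg is None or msg.arbitration_id != SPEED_FRAME_ID:
            continue
        # assumed encoding: speed in 0.1 km/h, little-endian, bytes 0-1
        speed = int.from_bytes(msg.data[0:2], 'little') / 10.0
        print(f'speed: {speed:.1f} km/h')  # feed into a context model


if __name__ == '__main__':
    watch_speed()
```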

Invited Lecture at CDTM, how fast do you walk?

Today I was at CDTM in Munich (http://www.cdtm.de/) to give a lecture introducing Pervasive Computing. It was a great pleasure to be invited again after last year’s visit. We discussed nothing less than how new computing technologies are going to change our lives and how we as developers are going to shape parts of the future. As everyone is aware, there are significant challenges ahead – one is personal travel – and I invited students to join our summer factory (basically setting up a company/team to create a new mobility platform). If you are interested, too, drop me a mail.

Over lunch I met with Heiko to discuss the progress of his thesis and to fish for new topics, as they often come up when writing 😉 To motivate some parts of his work he looked at behavioral research that describes how people use their eyes in communication. In [1] interesting aspects of human behavior are described and explained. I liked the page (251) with the graphs on walking speed as a function of city size (the bigger the city, the faster people walk – including an interesting discussion of what this effect is based on) and on eye contact as a function of gender and town size. This can provide insight for some projects we are working on. Many of the results are not surprising – but it is often difficult to pinpoint the reference (at least for a computer science person), so this book may be helpful.
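To illustrate the kind of relation those graphs show – walking speed rising roughly with the logarithm of city population – here is a tiny sketch. The data points are invented for illustration, not the book’s values; only the log-linear form is the point.

```python
# Illustration of a log-linear walking-speed relation. The data points are
# invented, not Eibl-Eibesfeldt's measurements.
import numpy as np

population = np.array([2_000, 20_000, 200_000, 2_000_000])
speed_mps = np.array([1.0, 1.2, 1.4, 1.6])  # made-up walking speeds

# Fit speed = a * log10(population) + b
a, b = np.polyfit(np.log10(population), speed_mps, 1)
print(f'speed ~ {a:.2f} * log10(pop) + {b:.2f} m/s')
print('predicted for a city of 500,000:', a * np.log10(500_000) + b)
```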

[1] Irenäus Eibl-Eibesfeldt. Die Biologie des menschlichen Verhaltens: Grundriss der Humanethologie. 5th edition, December 2004.

Hans Visited our Group, Issues on sustainable energy / travel

Hans Gellersen, who was my supervisor while I was in Lancaster, visited our lab in Essen. We discussed options for future collaborations, ranging from student exchange to joint proposals. Among other topics we discussed sustainable energy, as this is more and more becoming a theme of great importance, and Pervasive Computing offers many building blocks for potential solutions. Hans pointed me to an interesting project going on at IBM Hursley: “The House That Twitters Its Energy Use”.

At the Ubicomp PC meeting we recently discussed the value of face-to-face meetings in the context of scientific work, and it seems there are two future directions for reducing resource consumption: (1) moving from physical travel to purely virtual meetings, or (2) making travel viable based on renewable energies. Personally I think we will see a mix – but I am sure real physical meetings will remain essential for certain tasks in the medium term. I am convinced that in the future we will still travel, and this will become viable as travel based on renewable energies becomes feasible. Inspiring example projects are Solar Impulse (its goal is to create a solar-powered airplane) and Helios (solar-powered atmospheric satellites). There are alternative future scenarios and an interesting discussion by John Urry (e.g. a recent article [1] and a book that is now on my personal reading list [2]). These analyses (from a sociology perspective) are informative to read and can help to create interesting technology interventions. However, I reject the dark scenarios, as I am too much of an optimist, trusting in people’s good will, common sense, and technology research and engineering – especially if the funding is available ;-).

[1] John Urry. Climate change, travel and complex futures. The British Journal of Sociology, Volume 59, Issue 2, pages 261-279, June 2008.

[2] John Urry. Mobilities. October 2007.