How to prove that Ubicomp solutions are valid?

Over the last years there have been many workshops and sessions in the ubicomp community that address the evaluation of systems. At Pervasive 2005 in Munich I co-organized a workshop on application-led research with George Coulouris and others. For me one of the central outcomes was that we – as ubicomp researchers – need to team up with experts in the application domain when evaluating our technologies and solutions, and that we need to stay involved in this part of the research. Just handing our systems over to the other domain for evaluation will not bring us the insights we need to move the field forward. A workshop report that discusses the topic in more detail appeared in IEEE Pervasive Computing [1].

On Friday I met a very interesting expert in the domain of gerontology. Elisabeth Steinhagen-Thiessen is chief consultant and director of the Protestant Geriatric Centre of Berlin and professor of internal medicine/gerontology at the Charité in Berlin. We talked about opportunities for activity recognition in this domain and discussed potential set-ups for studies.

[1] Richard Sharp, Kasim Rehman. What Makes Good Application-led Research? IEEE Pervasive Computing, Volume 4, Number 3, July-September 2005.

Innovative in-car systems, Taking photos while driving

Wolfgang just sent me another picture (taken by a colleague of his) with more information in the head-up display. It shows a speed of 180 km/h, and I wonder who took the picture. Usually only the driver can see such a display 😉

For assistance, information, and entertainment systems in cars (and I assume we could consider taking photos an entertainment task) there are guidelines [1, 2, 3] – an overview presentation in German can be found in [4]. Students in the Pervasive Computing class have to look at them and design a new context-aware information/assistance system – perhaps photography in the car could be a theme… I am already curious about the results of the exercise.

[1] The European Statement of Principles (ESoP) on Human Machine Interface in Automotive Systems
[2] AAM (Alliance of Automobile Manufacturers) Guidelines
[3] JAMA (Japan Automobile Manufacturers Association) Guidelines
[4] Andreas Weimper, Harman International Industries, Neue EU Regelungen für Safety und Driver Distraction (New EU Regulations for Safety and Driver Distraction)

(thanks to Wolfgang Spießl for sending the references to me)

Integration of Location into Photos, Tangible Interaction

Recently I came across a device that logs the GPS position and additionally has a card reader (http://photofinder.atpinc.com/). If you plug in a card with photos, it integrates location data into the JPEGs, using time as the common reference.
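The core of such time-based matching is simple: for each photo timestamp, find the GPS fix recorded closest in time, within some tolerance. Here is a minimal sketch in Python of that matching step – the function name, the five-minute tolerance, and the data layout are my own assumptions for illustration, not the device's actual logic:

```python
from bisect import bisect_left
from datetime import datetime, timedelta

# A GPS track is a time-ordered list of (timestamp, latitude, longitude)
# fixes; photo timestamps would come from the EXIF DateTimeOriginal field.

def nearest_fix(track, photo_time, tolerance=timedelta(minutes=5)):
    """Return the track fix closest in time to photo_time, or None
    if even the nearest fix is further away than the tolerance."""
    times = [t for t, _, _ in track]
    i = bisect_left(times, photo_time)
    # Candidates: the fix just before and just after the photo time.
    candidates = [track[j] for j in (i - 1, i) if 0 <= j < len(track)]
    if not candidates:
        return None
    best = min(candidates, key=lambda fix: abs(fix[0] - photo_time))
    return best if abs(best[0] - photo_time) <= tolerance else None

track = [
    (datetime(2008, 5, 10, 14, 0), 48.137, 11.575),
    (datetime(2008, 5, 10, 14, 5), 48.139, 11.580),
]
print(nearest_fix(track, datetime(2008, 5, 10, 14, 2)))  # -> the 14:00 fix
```

The matched coordinates would then be written back into each photo's EXIF GPS tags; the appliance hides all of these steps behind plugging in a card.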

It is a further interesting example of software moving away from the generic computer/PC (where programs that combine a GPS track with photos are available, e.g. GPSPhotoLinker) into an appliance. In principle (I have not tried out this specific device so far) the usage complexity can be massively reduced and the usability increased. See the simple analysis:

Tangible Interaction using the appliance:

  • buy the device
  • plug in a card
  • wait until it is ready

vs.

GUI Interaction:

  • start the PC
  • buy/download the application
  • install the application
  • find and launch the application
  • locate the images in a folder
  • locate the GPS track in a folder
  • wait until it is ready

… this could become one of my future examples of where tangible UIs work 😉

Wolfgang Spießl introduces context-aware car systems

Wolfgang visited us for 3 days and we talked a lot about context-awareness in the automotive domain. Given the sensors included in cars and some recent ideas on context fusion, it seems feasible that context-aware assistance and information systems will gain new functionality in the near future. Since I finished my PhD dissertation [1] there has been a move in two directions: context prediction and communities as a source of context. One example of a community-based approach is http://www.iyouit.eu, which evolved out of ContextWatcher / IST-Mobilife.
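To make the idea of context prediction concrete, a toy version can be as simple as counting observed transitions between context labels and predicting the most frequent successor. The sketch below is my own minimal illustration of that idea, not the approach of any of the systems mentioned above:

```python
from collections import Counter, defaultdict

# A minimal first-order context predictor: count observed transitions
# between context labels and predict the most frequent successor.
class ContextPredictor:
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def observe(self, history):
        for current, nxt in zip(history, history[1:]):
            self.transitions[current][nxt] += 1

    def predict(self, current):
        successors = self.transitions.get(current)
        if not successors:
            return None
        return successors.most_common(1)[0][0]

p = ContextPredictor()
p.observe(["home", "commute", "office", "commute", "home", "commute", "office"])
print(p.predict("commute"))  # -> "office" (the more frequent successor)
```

Real systems of course use richer models and sensor data, but the principle – learn from past context sequences, anticipate the next one – is the same.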

In his lecture he showed many examples of how pervasive computing already happens in the car. After the talk we had the chance to see and discuss user interface elements in current cars – in particular the head-up display. Wolfgang gave a demonstration of the CAN bus signals related to interaction with the car that are available for creating context-aware applications. The head-up display (whose image appears to float just in front of the car) sparked discussions on interesting use cases for these types of displays – beyond navigation and essential driving information.
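To give a flavour of what "available on the CAN bus" means for application developers: with a CAN adapter and the python-can library one could listen for messages and derive a simple context such as "driving fast". Note that the message ID and byte layout below are purely made up for illustration – real IDs and encodings are manufacturer-specific and usually undocumented:

```python
import can  # python-can, e.g. with a SocketCAN adapter on Linux

# Hypothetical message layout: ID 0x123 carries the vehicle speed in km/h
# in its first two bytes (big-endian). Real CAN IDs and encodings are
# manufacturer-specific; this is an assumption for the sketch only.
SPEED_MSG_ID = 0x123

def watch_speed_context(channel="can0"):
    bus = can.interface.Bus(channel=channel, bustype="socketcan")
    for msg in bus:  # blocks, yielding messages as they arrive
        if msg.arbitration_id != SPEED_MSG_ID:
            continue
        speed_kmh = int.from_bytes(msg.data[0:2], "big")
        # A trivial context rule: adapt the UI when driving fast.
        if speed_kmh > 130:
            print(f"context: fast driving ({speed_kmh} km/h) "
                  "- reduce information density")

if __name__ == "__main__":
    watch_speed_context()
```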
In the lecture, questions came up about how feasible/easy it is to do your own developments using the UI elements in the car – basically, how can I run my applications in the car? This is not yet really supported 😉 However, in a previous post [2] I argued that this is probably to come… and I still see this trend. It is an interesting thought how one could give third parties access to UI components in the car without giving away control…