Last month Alireza Sahami finished his master's thesis on multi-tactile interaction at B-IT Bonn and joined our group in Essen. Ali worked for me as a student research assistant at Fraunhofer IAIS. During his studies in Bonn we published an interesting workshop paper on mobile health [1] and gave a related demo at Ubicomp [2].
Tagging Kids, Add-on to make digital cameras wireless
Reading the new products section in the IEEE Pervasive Computing magazine (Vol. 7, No. 2, April–June 2008) I came across a child monitoring system: Kiddo Kidkeeper. In the Smart-Its project Henrik Jernström developed a similar system in 2001 in his master's thesis at PLAY, which was published as a poster at Ubicomp [1]. I remember very vividly the discussion about the validity of this application (basically people – including me – asking "Who would want such technology?"). However, it seems society and values are constantly changing – there is an interesting ongoing discussion related to that: Free Range Kids (this is the pro side 😉). The article in the IEEE magazine hinted that the fact that you can take off the device is a problem – I see a clear message ahead: implant the device. And this time I am more careful with arguing that we don't need it (even though I am sure we do not need it, I expect that in 5 to 10 years we will have it).
There were two further interesting links in the article: an SD card that includes WiFi and hence enables uploading photos to the internet from any camera with an SD slot (http://www.eye.fi/products/) – the idea is really simple but very powerful! And finally the UK has an educational laptop, too (http://www.elonexone.co.uk/). It seems the hardware is there (if not this year then next) – but where is the software? I think we should put some more effort into this domain in Germany…
Not to forget, the issue of the magazine also contains our TEI conference report [2].
[1] Henrik Jernström. SiSSy Smart-its child Surveillance System. Poster at Ubicomp 2002, Adjunct Proceedings of Ubicomp 2002. http://citeseer.ist.psu.edu/572976.html
Fight for attention – changing cover display of a magazine
Attention is precious and there is a clear fight for it. This is very easy to observe on advertising boards and in news shops. Coming back from Berlin I went into a newsagent in Augsburg to get a newspaper – and without really looking at the magazines I still discovered from the corner of my eye an issue of FHM with a changing cover page. Technically it is very simple: a lenticular lens presents an image depending on the viewing angle, alternating between three pictures – one of which is a full-page advert (for details on how it works see lenticular printing in Wikipedia). A similar approach has already been used in various poster advertising campaigns – showing different pictures as people walk by (http://youtube.com/watch?v=0dqigww4gM8, http://youtube.com/watch?v=iShPBmtajH8). One could also create a context-aware advert, showing different images to small and tall people 😉
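To illustrate the principle, here is a small sketch (my own toy example, assuming the Pillow imaging library and three equally sized input images) that interleaves three pictures column by column – a simplified version of the interlacing a lenticular print does; a real print would additionally resample each image to the lens pitch.

```python
# Toy sketch: interlace three images column-wise, as a simplified
# illustration of lenticular printing. Assumes Pillow is installed and
# the three (hypothetical) input files exist and have the same size.
from PIL import Image

def interlace(paths, strip_width=1):
    images = [Image.open(p).convert("RGB") for p in paths]
    width, height = images[0].size
    out = Image.new("RGB", (width, height))
    for x in range(0, width, strip_width):
        # Pick the source image for this strip in round-robin order;
        # under the lens, each viewing angle reveals one of the sources.
        src = images[(x // strip_width) % len(images)]
        strip = src.crop((x, 0, min(x + strip_width, width), height))
        out.paste(strip, (x, 0))
    return out

if __name__ == "__main__":
    interlace(["picture_a.jpg", "picture_b.jpg", "advert.jpg"]).save("interlaced.jpg")
```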
Ageing, Technology, Products, Services
Today and yesterday I am visiting a conference concerned with ageing – looking at the topic from different perspectives (computer science, psychology, medicine, economics) – run at the MPI in Berlin. The working group is associated with the German Academy of Sciences Leopoldina, and I was invited by Prof. Ulman Lindenberger, who is a director at the Max Planck Institute and works in Lifespan Psychology. The working group is called "Ageing in Germany" (its official name is in German).
Antonio Krüger and I represented the technology perspective with examples from the domain of ubiquitous computing. My talk "Ubiquitous Computing in Adulthood and Old Age" is a literature review in pictures of selected ubicomp systems, intended as an introduction to the domain for non-CS people. The discussions were really inspiring. In one talk Prof. Jin-Chern Chiou from National Chiao Tung University in Taiwan (brain research lab) presented interesting dry electrodes that can be used for EEG – but also for other applications where one needs electrodes.
Slides from my talk: Ubiquitous Computing in Adulthood and Old Age (PDF).
Impressions from Pervasive 2008
- Using electrodes to detect eye movement and to detect reading [1] – relates to Heiko's work but uses different sensing techniques. If the system can really be implemented in goggles this would be a great technology for eye gestures as suggested in [2] (a toy sketch of the idea follows after this list).
- Communicate while you sleep? Air pillow communication… Vivien loves the idea [4].
- A camera with additional sensors [5] – really interesting! In Munich we had a student project that looked at something similar [6].
- A cool vision video of the future is S-ROOM – everything gets a digital counterpart. It communicates the idea of ubicomp in a great and fun way [7] – not sure if the video is online; it is on the conference DVD.
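To make the first impression a bit more concrete, here is a toy sketch (my own illustration, not the method from [1]): saccades show up as sharp steps in a horizontal EOG channel, so thresholding the sample-to-sample difference gives a crude left/right eye-movement detector – a burst of small rightward steps followed by one large leftward jump is the typical pattern of reading a line and sweeping back to the next.

```python
# Toy sketch (not the method from [1]): detect saccades in a horizontal EOG
# channel by thresholding the sample-to-sample difference. The threshold and
# the synthetic signal below are illustrative assumptions.
import numpy as np

def detect_saccades(eog, threshold=50.0):
    """Return (index, direction) pairs; direction +1 = right, -1 = left."""
    events = []
    for i, d in enumerate(np.diff(eog)):
        if abs(d) > threshold:
            events.append((i, 1 if d > 0 else -1))
    return events

# Synthetic example: small rightward steps (reading a line) followed by a
# large leftward jump (return sweep to the next line).
signal = np.concatenate([np.full(20, v) for v in [0, 60, 120, 180, 240, 0]])
print(detect_saccades(signal))
```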
[1] Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography. Andreas Bulling, Jamie A. Ward, Hans-W. Gellersen and Gerhard Tröster. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 19-37, Sydney, Australia, May 2008. http://dx.doi.org/10.1007/978-3-540-79576-6_2
[2] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007. http://murx.medien.ifi.lmu.de/~albrecht/pdf/interact2007-gazegestures.pdf
[3] Shwetak N. Patel, Matthew S. Reynolds, Gregory D. Abowd: Detecting Human Movement by Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure Mediated Sensing. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 1-18, Sydney, Australia, May 2008. http://shwetak.com/papers/air_ims_pervasive2008.pdf
[4] Satoshi Iwaki et al. Air-pillow telephone: A pillow-shaped haptic device using a pneumatic actuator (Poster). Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/LBR/lbr11.pdf
[5] Katsuya Hashizume, Kazunori Takashio, Hideyuki Tokuda. exPhoto: a Novel Digital Photo Media for Conveying Experiences and Emotions. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Demo/d4.pdf
[6] P. Holleis, M. Kranz, M. Gall, A. Schmidt. Adding Context Information to Digital Photos. IWSAWC 2005. http://www.hcilab.org/documents/AddingContextInformationtoDigitalPhotos-HolleisKranzGallSchmidt-IWSAWC2005.pdf
[7] S-ROOM: Real-time content creation about the physical world using sensor network. Takeshi Okadome, Yasue Kishino, Takuya Maekawa, Kouji Kamei, Yutaka Yanagisawa, and Yasushi Sakurai. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Video/v2.pdf
Tutorial: From Sensor to Context and Activity at Pervasive 2008
Pervasive 2007 introduced a new form of tutorials – having a number of experts each talk for one hour about their special topic. Last year I attended as a participant and liked it a lot. This year Pervasive 2008 repeated this approach and I contributed a tutorial on how to get context and activity from sensors (tutorial slides in PDF).
Abstract. Intelligent environments, sensor networks and smart objects are inherently connected to building systems that sense phenomena in the real world and make the perceived information available to applications. The first part of the tutorial gives an overview of sensors and sensor systems commonly used in pervasive computing applications. In addition to sensor properties, means for connecting sensors to systems (e.g. ADC, PWM, I2C, serial line) are explained. The second part discusses how to create meaningful information in the application domain. Some basic features, calculated in the time and frequency domain, are introduced to provide basic means for processing and abstracting raw sensor data. This part is complemented by a brief overview of mechanisms and methods for relating (abstracted) sensor information to context, activity and situations. Additionally, general problems associated with sensing context and activity are addressed in the tutorial.
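As a small illustration of the second part (my own example, not code from the tutorial slides), this is roughly what basic time-domain features over a sliding window of raw accelerometer data look like; window size, step and the synthetic signal are assumptions.

```python
# Illustrative sketch: basic time-domain features over a sliding window of
# raw accelerometer magnitudes, as typically fed into a context/activity
# classifier. Window size, step and the synthetic signal are assumptions.
import numpy as np

def window_features(samples, window=50, step=25):
    """Yield (mean, std, zero_crossing_rate) per window of a 1-D signal."""
    for start in range(0, len(samples) - window + 1, step):
        w = np.asarray(samples[start:start + window], dtype=float)
        centered = w - w.mean()
        # Zero crossings of the mean-centered signal hint at periodic motion.
        zcr = np.count_nonzero(np.diff(np.sign(centered)) != 0) / len(w)
        yield w.mean(), w.std(), zcr

# Example with a synthetic signal; in practice the samples would come from
# an ADC- or I2C-connected accelerometer.
t = np.linspace(0, 10, 500)
signal = 1.0 + 0.3 * np.sin(2 * np.pi * 2 * t)
for features in window_features(signal):
    print(features)
```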
Gregor showed the potential of multi-tag interaction in a Demo
Gregor, a colleague from LMU Munich, presented work done in the context of the PERCI project, which started while I was in Munich. The demo showed several applications (e.g. buying tickets) that exploit the potential of interaction with multiple NFC tags. The basic idea is to include several NFC tags in a printed poster with which the user can interact using a phone; by touching the tags in a certain order the selection is made. For more details see the paper accompanying the demo [1].
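The interaction logic itself is simple; here is a minimal sketch of the idea (not the PERCI implementation – the tag IDs and the ticket example are made up): the phone collects the IDs of the touched tags and maps the ordered sequence to a selection.

```python
# Minimal sketch of the interaction logic (not the PERCI code): map an
# ordered sequence of touched NFC tag IDs on a poster to a selection.
# Tag IDs and the ticket example are hypothetical.
ACTIONS = {
    ("tag:dest:berlin", "tag:class:second", "tag:confirm"): "Buy 2nd-class ticket to Berlin",
    ("tag:dest:berlin", "tag:class:first", "tag:confirm"): "Buy 1st-class ticket to Berlin",
}

def handle_touch_sequence(touched_ids):
    """Return the action for the ordered tag sequence, or None if unknown."""
    return ACTIONS.get(tuple(touched_ids))

# Simulated touches in the order the user taps the poster with the phone.
print(handle_touch_sequence(["tag:dest:berlin", "tag:class:second", "tag:confirm"]))
```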
Paul presented our paper at Pervasive 2008
After lunch Paul presented our full paper on a development approach and environment for mobile applications that supports underlying user models [1]. In the paper he shows how you can create applications by programming by example, where the development environment automatically adds a KLM model. In this way the developer automatically becomes aware of estimated usage times for the application. The paper builds on our work on KLM for physical mobile interaction, which was presented last year at CHI [2]. The underlying technology is the embedded interaction toolkit [3] – have a look, perhaps it makes your applications easier, too.
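What such an automatically generated estimate boils down to can be sketched in a few lines (this is not Paul's toolchain): sum the KLM operator times over the recorded interaction sequence. The values below are the commonly cited desktop KLM estimates; [2] derives adapted operators for physical mobile interaction, which I do not reproduce here.

```python
# Rough sketch (not the toolchain from [1]): estimate task time by summing
# KLM operator times over a recorded interaction sequence. The values are
# the commonly cited desktop KLM estimates; [2] defines adapted operators
# for physical mobile interaction.
OPERATOR_TIMES = {
    "K": 0.20,   # keystroke / button press
    "P": 1.10,   # pointing
    "H": 0.40,   # homing between devices/controls
    "M": 1.35,   # mental preparation
}

def estimate_task_time(sequence):
    """sequence: iterable of operator codes, e.g. ['M', 'P', 'K', 'K']."""
    return sum(OPERATOR_TIMES[op] for op in sequence)

# Example: think, point at a field, type three characters.
print(f"{estimate_task_time(['M', 'P', 'K', 'K', 'K']):.2f} s")
```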
[2] Holleis, P.; Otto, F.; Hußmann, H.; Schmidt, A.: Keystroke-Level Model for Advanced Mobile Phone Interaction. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 – May 03, 2007). CHI '07. ACM Press, New York, NY, 1505-1514. 2007.
Keynote at Pervasive 2008: Mark Billinghurst
Mark Billinghurst presented an interesting history of augmented reality and showed clearly that camera phones are the platform to look out for. He reminded us that the current 3D performance of mobile phones is similar to that of the most powerful 3D graphics cards shown 15 years ago at SIGGRAPH. Looking back at Steven Feiner's backpack [1] – the first augmented reality system I saw – tells us that we should not be afraid to create prototypes that may be a bit clumsy, if they allow us to create a certain user experience and to explore technology challenges.
In an example video Mark showed how they have integrated sensor information (using particle computers) into an augmented reality application. Especially for sensor-network applications this seems to create interesting user interface options.
One reference to robust outdoor tracking done at Cambridge University [2] outlines how combining different methods (in this case GPS, inertial sensing, computer vision and models) can move location techniques forward. This example shows that high-precision tracking on mobile devices may not be far in the future. For our application-led research this is motivating and should push us to be more daring with what we assume from future location systems.
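To make the point about combining methods concrete with a deliberately simple example (nothing like the model-based tracker in [2]): a complementary filter that blends an absolute but noisy position fix (e.g. GPS) with a smooth but drifting relative estimate (e.g. inertial dead reckoning); the blending factor is a tuning assumption.

```python
# Deliberately simple illustration (not the tracker from [2]): a complementary
# filter blending an absolute but noisy fix (e.g. GPS) with a smooth but
# drifting relative estimate (e.g. inertial dead reckoning). alpha is a
# tuning assumption.
def fuse(previous_estimate, displacement, absolute_fix, alpha=0.9):
    """Higher alpha trusts the smooth relative estimate more, lower alpha
    the absolute fix."""
    dead_reckoned = previous_estimate + displacement
    return alpha * dead_reckoned + (1 - alpha) * absolute_fix

position = 0.0
for step, gps in [(1.0, 1.3), (1.0, 1.9), (1.0, 3.2)]:
    position = fuse(position, step, gps)
    print(round(position, 2))
```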
Mark argued for looking more at the value of experience – the idea is basically that selling a user experience is of higher value than selling a service or a technology. This view is quite common at the moment – we saw this argument a lot at CHI 2008, too. What I liked very much about Mark's argument is that he sees it in a layered approach: experience is at the top of a set of layers, but you cannot sell experience without having technology or services underneath, and it seems a lot of people forget this. In short – no experience design if you do not have a working technology. This is important to understand. He included an example of interactive advertisement (http://www.reactrix.com/) which is interesting as it relates to some of the work we do on interactive advertisement (there will be more as soon as we have published our Mensch und Computer 2008 paper).
His further example on experience – why you value a coffee at Starbucks at 3€ (because of the overall experience) – reminded me of a book I recently read. It is quite a good airline/park read (probably only if you are not an economist) and makes the world a bit more understandable [3].
Building enabling technologies and toolkits as a means to improve one's citation count was one of Mark's recommendations. Looking back at our own work as well as the work of the Pervasive/Ubicomp community there is a lot of room for improvement – but it is really hard to do it …
[1] S. Feiner, B. MacIntyre, T. Höllerer, and T. Webster, A touring machine: Prototyping 3D mobile augmented reality systems for exploring the urban environment. Proc. ISWC ’97 (First IEEE Int. Symp. on Wearable Computers), October 13-14, 1997, Cambridge, MA. Also in Personal Technologies, 1(4), 1997, pp. 208-217, http://www1.cs.columbia.edu/graphics/publications/iswc97.pdf, http://www1.cs.columbia.edu/graphics/projects/mars/touring.html
[2] Reitmayr, G., and Drummond, T. 2006. Going out: Robust model-based tracking for outdoor augmented reality. In Proceedings of IEEE ISMAR'06, 109–118. http://mi.eng.cam.ac.uk/~gr281/docs/ReitmayrIsmar06GoingOut.pdf, http://mi.eng.cam.ac.uk/~gr281/outdoortracking.html
[3] Book: Tim Harford. The Undercover Economist. 2007. (German Version: Ökonomics: Warum die Reichen reich sind und die Armen arm und Sie nie einen günstigen Gebrauchtwagen bekommen. 2006.)