Tactile interfaces, Visit from Gordon Bolduan

This afternoon Gordon Bolduan from Technology Review visited the lab. We talked about haptic and tactile interfaces and showed some demos (e.g. navigation with tactile cues).
When preparing for the visit I looked for good examples of tactile interaction – and interestingly there is more and more work out there that has the potential to change future interfaces and means of communication.
Recent work on connecting people [1, 2] at the boundary between computing and design shows new options for emotional communication.

In our work we used multiple vibration motors and explored their potential for mobile devices [3]. One obvious question is what to use for tactile interaction beyond vibration, and I find the paper by Kevin Li [4] a good starting point for more ideas.
When talking about human-computer interaction that includes stroking, tapping, and rubbing, an association with erotic and sexual interaction seems obvious – and there is more to it: if you are curious, just search for teledildonics and you will find interesting commercial products as well as a lot of DIY ideas.
[1] Eichhorn, E., Wettach, R., and Hornecker, E. 2008. A stroking device for spatially separated couples. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 303-306. DOI= http://doi.acm.org/10.1145/1409240.1409274 
[2] Werner, J., Wettach, R., and Hornecker, E. 2008. United-pulse: feeling your partner’s pulse. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 535-538. DOI= http://doi.acm.org/10.1145/1409240.1409338 
[3] Alireza Sahami, Paul Holleis, Albrecht Schmidt, Jonna Häkkilä: Rich Tactile Output on Mobile Devices. European Conference on Ambient Intelligence (AmI'08). Springer LNCS, Nürnberg 2008, pp. 210-221. DOI= http://dx.doi.org/10.1007/978-3-540-89617-3_14
[4] Li, K. A., Baudisch, P., Griswold, W. G., and Hollan, J. D. 2008. Tapping and rubbing: exploring new dimensions of tactile feedback with voice coil motors. In Proceedings of the 21st Annual ACM Symposium on User interface Software and Technology (Monterey, CA, USA, October 19 – 22, 2008). UIST ’08. ACM, New York, NY, 181-190. DOI= http://doi.acm.org/10.1145/1449715.1449744

Privacy – will our understanding change radically?

One issue we came across this morning was privacy. In particular it seems that social network analysis based on behavior in the real world (e.g. the reality mining project [1]) is creating serious interest beyond the technology people. Beyond measuring the frequency of encounters, qualifying the way people interact (dominance, emotion, …) will reveal even more about social networks…
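To make the encounter part concrete, here is a toy Python sketch (the names and scan data are made up for illustration and are not from the reality mining project itself) of how co-location logs turn into a weighted social graph simply by counting how often pairs of people are seen together:

from collections import Counter
from itertools import combinations

# Hypothetical proximity log: each entry is the set of people seen
# together in one scan (illustrative data only).
scans = [
    {"alice", "bob"},
    {"alice", "bob", "carol"},
    {"bob", "carol"},
    {"alice", "bob"},
]

# Count co-occurrences per pair; the counts act as edge weights
# in a simple social graph.
edge_weights = Counter()
for people in scans:
    for pair in combinations(sorted(people), 2):
        edge_weights[pair] += 1

print(edge_weights.most_common())  # strongest ties first, e.g. (('alice', 'bob'), 3)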

In our discussion I referred to a book: "The Transparent Society" by David Brin [2]. Even though it is now nearly 10 years since it was first published, I still think it is an interesting starting point for a privacy discussion.

[1] Eagle, N. and (Sandy) Pentland, A. 2006. Reality mining: sensing complex social systems. Personal Ubiquitous Comput. 10, 4 (Mar. 2006), 255-268. DOI= http://dx.doi.org/10.1007/s00779-005-0046-3 

[2] David Brin. The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom? Basic Books (June 1, 1999).

Trip to Dublin, Aaron’s Display Project

Visiting Dublin is always a pleasure – even if the weather is rainy. Most of the day I was at Trinity College reading master's theses (which is the second best part of being an external examiner; the best part is having lunch at the 1592 😉).
In the evening I met with Aaron Quigley and we talked about some ongoing display and advertising projects in our groups. He told me about one of their recent workshop papers [1] on public displays, in which they investigated what people take in and what they remember of the content on displays in an academic environment. It is available online in the workshop proceedings of AIS08 [2]. I found it worthwhile to browse the whole workshop proceedings.
[1] Rashid U. and Quigley A., "Ambient Displays in Academic Settings: Avoiding their Underutilization", Ambient Information Systems Workshop at UbiComp 2008, September 21, Seoul, South Korea (download [2], see page 26 ff)

My Random Papers Selection from Ubicomp 2008

Over the last days a number of interesting papers were presented, so it is not easy to pick a selection… Here is my random selection of Ubicomp 2008 papers that link to our work (the conference papers link into the Ubicomp 2008 proceedings in the ACM DL; our references are below):

Don Patterson presented a survey on using IM. One of the findings surprised me: people seem to ignore "busy" settings. In some work we did in 2000 on mobile availability and sharing context, users indicated that they would respect such a setting, or at least explain themselves when interrupting someone who is busy [1,2] – perhaps it is a cultural difference, or people have changed. It may be interesting to run a similar study in Germany.

Woodman and Harle from Cambridge presented a pedestrian localization system for large indoor environments. Using an XSens device they combine dead reckoning with knowledge gained from a 2.5D map. In the experiment they seem to get results similar to an active bat system – by only putting the device on the user (which, for large buildings, is much cheaper than installing infrastructure).
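As a rough illustration of the dead-reckoning part only (not the authors' actual algorithm, which fuses the inertial data with the 2.5D map), here is a minimal step-and-heading update in Python with invented step data:

import math

def dead_reckon(start, steps):
    """Integrate (step_length_m, heading_rad) pairs into 2D positions."""
    x, y = start
    path = [(x, y)]
    for length, heading in steps:
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        path.append((x, y))
    return path

# Hypothetical step events, e.g. detected from a foot-mounted IMU:
# two steps straight ahead, then two steps after a 90° turn.
steps = [(0.7, 0.0), (0.7, 0.0), (0.7, math.pi / 2), (0.7, math.pi / 2)]
print(dead_reckon((0.0, 0.0), steps))

The drift that accumulates in such an integration is exactly what the map knowledge is there to correct.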
Andreas Bulling presented work in which he explored the use of EOG goggles for context awareness and interaction. The EOG approach is complementary to video-based systems. The use of gestures for context awareness follows a similar idea as our work on eye gestures [3]. We had an interesting discussion about further ideas and perhaps there is a chance in the future to directly compare the approaches and work together.
One paper, "On using existing time-use study data for ubiquitous computing applications", gave links to interesting public data sets (e.g. the US time-use survey). The time-use survey covers the US and gives detailed data on how people spend their time.
The University of Salzburg presented initial work on an augmented shopping system that builds on the idea of implicit interaction [4]. In the note they report a study in which they used two cameras to observe a shopping area and calculated the "busy spots" in that area. Additionally, they used sales data to identify the best-selling products. Everything was displayed on a public screen; an interesting result was that people were apparently not really interested in other shoppers' behavior… (in contrast to what we observe in e-commerce systems).
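Just to illustrate the "busy spots" idea (the grid size, data and function name below are my own assumptions, not details from the note): bin the observed positions into a coarse grid and rank the cells by how often they were occupied.

from collections import Counter

def busy_spots(positions, cell_size=1.0, top_n=3):
    """Rank grid cells by how many (x, y) detections fall into them."""
    counts = Counter(
        (int(x // cell_size), int(y // cell_size)) for x, y in positions
    )
    return counts.most_common(top_n)

# Hypothetical shopper positions (in metres) extracted from the two cameras.
detections = [(0.2, 0.3), (0.4, 0.1), (0.3, 0.8), (5.1, 4.9), (5.3, 4.7)]
print(busy_spots(detections))  # e.g. [((0, 0), 3), ((5, 4), 2)]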
Researchers from Hitachi presented a new idea for browsing and navigating content based on the metaphor of using a book. It is based on the concept of a bendable surface. It interestingly complements previous work in this domain, Gummi, presented at CHI 2004 by Schwesig et al.
[1] Schmidt, A., Takaluoma, A., and Mäntyjärvi, J. 2000. Context-Aware Telephony Over WAP. Personal Ubiquitous Comput. 4, 4 (Jan. 2000), 225-229. DOI= http://dx.doi.org/10.1007/s007790070008
[2] Albrecht Schmidt, Tanjev Stuhr, Hans Gellersen. Context-Phonebook – Extending Mobile Phone Applications with Context. Proceedings of Third Mobile HCI Workshop, September 2001, Lille, France.
[3] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007.
[4] Albrecht Schmidt. Implicit Human Computer Interaction Through Context. Personal Technologies, Vol 4(2), June 2000

Some random papers from Mobile HCI 2008

During Mobile HCI I came across many interesting things (that is why one goes to conferences 😉). Here is a selection of papers to look at – if you have more time it is worthwhile to browse the whole proceedings of Mobile HCI 2008 in the ACM DL.

Gauntlet: a wearable interface for ubiquitous gaming – exploring a new gesture-based gaming UI.

Mobile phones as artifacts that children use in their games are discussed. It shows again how creative children are 😉

An investigation into round touch-screen wristwatch interaction – an interesting topic and a good example of how to do a small study. Ideas for creating a tactile rim, e.g. two moving parts to provide different tactile cues, were brought up in the discussion.

Programming with children – taking programming into the environment, away from the computer; relates to tangible user interfaces.

Projector phone: a study of using mobile phones with an integrated projector for interaction with maps.

Interaction based on speech seems possible – even in noisy environments – the paper reports interesting preliminary results in the context of a fishing boat. Interesting in-situ tests (e.g. on a platform in a wave tank).

Wearable computing user interfaces. Where should we put the controls and what functions do users expect?

Learning-oriented vehicle navigation systems: a preliminary investigation in a driving simulator

Enrico Rukzio followed up on the work from Munich, pushing the idea of touch interaction with NFC devices further.

Color matching using a mobile phone. The idea is to use a color chart: take a photo of the face together with the chart, send it by MMS to a server, the server processes it and looks up the color match, and the reply comes by SMS; no software installation is needed, only MMS and SMS. Applications in cosmetics are discussed.
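A minimal sketch of what the server-side matching step might look like (the chart handling, palette values and the MMS/SMS plumbing are my assumptions, not details from the paper): normalise the sampled face colour against the white patch of the chart in the same photo, then pick the nearest product shade.

# Reference value the white chart patch should have under ideal light.
CHART_WHITE_REF = (255, 255, 255)

# Hypothetical product shades (name -> RGB), purely illustrative.
PALETTE = {"ivory": (240, 224, 200), "beige": (215, 185, 150), "tan": (180, 140, 105)}

def normalise(sample_rgb, chart_white_rgb):
    """Correct the sampled colour using the observed white chart patch."""
    return tuple(
        min(255, s * ref / max(1, w))
        for s, w, ref in zip(sample_rgb, chart_white_rgb, CHART_WHITE_REF)
    )

def closest_shade(sample_rgb, chart_white_rgb):
    """Return the palette entry nearest to the corrected sample colour."""
    corrected = normalise(sample_rgb, chart_white_rgb)
    return min(
        PALETTE,
        key=lambda name: sum((c - p) ** 2 for c, p in zip(corrected, PALETTE[name])),
    )

# Pixels sampled from the face and from the chart's white patch in the MMS photo.
print(closest_shade((190, 160, 130), (230, 230, 230)))  # -> beige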

Using Second Life to demonstrate a concept automobile heads-up display (A-HUD).

Paul Holleis presented our paper on Wearable Controls

Last year Paul did an internship at Nokia in Finland. He worked there on the integration of capacitive sensors into phones and clothing. After Paul was back, we jointly followed up on the topic, which resulted in an interesting set of guidelines for placing wearable controls [1].

The paper gives a good overview of wearable computing and interaction with wearable computers. In the work we focused on integrating touch-sensitive controls into garments and accessories for operating the music player integrated in a phone. The study showed that there are prime locations for placing controls on the body: the right hip and above the right knee (for more details see the paper [1]). It furthermore showed that users do not have clear expectations of how functions (e.g. forward, backward, volume up/down) map to controls laid out on the clothes.

During his internship he also did research on integrating touch into buttons, which was published at Tangible and Embedded Interaction 2008 [2].

[1] Holleis, P., Schmidt, A., Paasovaara, S., Puikkonen, A., and Häkkilä, J. 2008. Evaluating capacitive touch input on clothes. In Proceedings of the 10th international Conference on Human Computer interaction with Mobile Devices and Services (Amsterdam, The Netherlands, September 02 – 05, 2008). MobileHCI ’08. ACM, New York, NY, 81-90. DOI= http://doi.acm.org/10.1145/1409240.1409250

[2] Paul Holleis, Jonna Häkkilä, Jussi Huhtala. Studying Applications for Touch-Enabled Mobile Phone Keypads. Proceedings of the 2nd Tangible and Embedded Interaction Conference TEI’08. February 2008.

Andrew Greaves presents a study on photo browsing using projector phones

Since Enrico Rukzio (my first PhD student) went to Lancaster, he has discovered and advanced a very exciting topic for mobile interaction: mobile projectors/projector phones. His group has a great presence at this year's Mobile HCI (3 demonstrations, 2 short papers, 2 full papers, a workshop). In time for the conference the first projector phone appeared on the market (Cking Epoq EGP-PP01) – as if to highlight the timeliness of the work.

The mobile projector study [1] revealed several interesting aspects: 1) it is faster to browse on the phone screen than using the projection, 2) users do a lot of context switches between projection and device – even when nothing is displayed on the screen, 3) users see great value in it (even if they may be slower). I am really looking forward to further results in this area. It may significantly change the way we use mobile phones!

PS: Seeing Enrico watch his student present, I remember how exciting it is for a supervisor to just watch…

[1] Andrew Greaves, Enrico Rukzio. Evaluation of Picture Browsing using a Projector Phone. 10th International Conference on Human-Computer Interaction with Mobile Devices and Services (Mobile HCI 2008). Amsterdam, Netherlands. 2-5 September 2008.

GIST, Gwangju, Korea

Yesterday I arrived in Gwangju for ISUVR-2008. It is my first time in Korea and it is an amazing place. Together with some of the other invited speakers and PhD students we went for a Korean-style dinner (photos from the dinner). The campus (photos from the campus) is large and very new.

This morning we had the opportunity to see several demos from Woontack’s students in the U-VR lab. There is a lot of work on haptics and mobile augmented reality going on. See the pictures of the open lab demo for yourself…

In the afternoon we had some time for culture and sightseeing – the countryside parks are very different from those in Europe. Here are some of the photos of the trip around Gwangju; also see http://www.damyang.go.kr/

In 2005 Yoosoo Oh, a PhD student with Woontack Woo at GIST, was a visiting student in our lab in Munich. We worked together on issues related to context awareness and published a paper together discussing the whole design cycle and in particular the evaluation (based on a heuristic approach) of context-aware systems [1].

[1] Yoosoo Oh, Albrecht Schmidt, Woontack Woo: Designing, Developing, and Evaluating Context-Aware Systems. MUE 2007: 1158-1163

Photos – ISUVR2008 – GIST – Korea

Visual aid for navigation – using human image processing

While browsing the Equator website I came across an interesting publication again – I had seen it two years ago at MobileHCI – in the domain of pedestrian navigation [1]. The basic idea is to use a collection of geo-tagged photos to provide visual cues about the direction in which people should go, e.g. "walk towards this building". This is an interesting application linking two concepts we discussed in the part on location in my lecture on pervasive computing. It follows the approach of augmenting the user such that the user does what humans do well (e.g. matching visual images) and the computer does what it does well (e.g. acquiring the GPS location, finding pictures related to a location in a DB).
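A minimal sketch of the "finding pictures related to a location in a DB" step (the photo data, coordinates and nearest-match criterion are illustrative assumptions, not from the paper): given the next waypoint on the route, return the geo-tagged photo taken closest to it.

import math

# Hypothetical photo database: (photo_id, lat, lon).
PHOTOS = [
    ("station_facade.jpg", 53.4790, -2.2430),
    ("red_brick_library.jpg", 53.4801, -2.2460),
    ("clock_tower.jpg", 53.4825, -2.2500),
]

def haversine_m(lat1, lon1, lat2, lon2):
    """Approximate great-circle distance in metres."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def photo_for_waypoint(waypoint_lat, waypoint_lon):
    """Return the photo taken closest to the next waypoint."""
    return min(
        PHOTOS,
        key=lambda p: haversine_m(p[1], p[2], waypoint_lat, waypoint_lon),
    )

print(photo_for_waypoint(53.4800, -2.2455)[0])  # red_brick_library.jpg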

[1] Beeharee, A. K. and Steed, A. 2006. A natural wayfinding exploiting photos in pedestrian navigation systems. In Proceedings of the 8th Conference on Human-Computer interaction with Mobile Devices and Services (Helsinki, Finland, September 12 – 15, 2006). MobileHCI ’06, vol. 159. ACM, New York, NY, 81-88. DOI= http://doi.acm.org/10.1145/1152215.1152233

Impressions from Pervasive 2008

Using electrodes to detect eye movement and to detect reading [1] – relates to Heiko's work but uses a different sensing technique. If the system can really be implemented in goggles, this would be a great technology for eye gestures as suggested in [2].

Utilizing infrastructure that is already in place for activity sensing – the example is a heating/air conditioning/ventilation system [3]. I wondered, and put forward the question, how well this would work in an active mode – where one actively creates an airflow (using the already installed system) to detect the state of the environment.

Further interesting ideas:

  • Communicate while you sleep? Air pillow communication… Vivien loves the idea [4].
  • A camera with additional sensors [5] – really interesting! We had a student project in Munich that looked at something similar [6].
  • A cool vision video of the future is S-ROOM – everything gets a digital counterpart. It communicates the idea of ubicomp in a great and fun way [7] – not sure if the video is online; it is on the conference DVD.

[1] Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography. Andreas Bulling, Jamie A. Ward, Hans-W. Gellersen and Gerhard Tröster. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 19-37, Sydney, Australia, May 2008. http://dx.doi.org/10.1007/978-3-540-79576-6_2

[2] Heiko Drewes, Albrecht Schmidt. Interacting with the Computer using Gaze Gestures. Proceedings of INTERACT 2007. http://murx.medien.ifi.lmu.de/~albrecht/pdf/interact2007-gazegestures.pdf

[3] Shwetak N. Patel, Matthew S. Reynolds, Gregory D. Abowd: Detecting Human Movement by Differential Air Pressure Sensing in HVAC System Ductwork: An Exploration in Infrastructure Mediated Sensing. Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 1-18, Sydney, Australia, May 2008. http://shwetak.com/papers/air_ims_pervasive2008.pdf

[4] Satoshi Iwaki et al. Air-pillow telephone: A pillow-shaped haptic device using a pneumatic actuator (Poster). Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/LBR/lbr11.pdf

[5] Katsuya Hashizume, Kazunori Takashio, Hideyuki Tokuda. exPhoto: a Novel Digital Photo Media for Conveying Experiences and Emotions. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Demo/d4.pdf

[6] P. Holleis, M. Kranz, M. Gall, A. Schmidt. Adding Context Information to Digital Photos. IWSAWC 2005. http://www.hcilab.org/documents/AddingContextInformationtoDigitalPhotos-HolleisKranzGallSchmidt-IWSAWC2005.pdf

[7] S-ROOM: Real-time content creation about the physical world using sensor network. Takeshi Okadome, Yasue Kishino, Takuya Maekawa, Kouji Kamei, Yutaka Yanagisawa, and Yasushi Sakurai. Advances in Pervasive Computing. Adjunct proceedings of the 6th International Conference on Pervasive Computing (Pervasive 2008). http://www.pervasive2008.org/Papers/Video/v2.pdf