Our Research at CHI2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI 2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Florian, together with the researchers from T-Labs, received a best paper award for [3].

Please have a look at the papers… I think they are really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user's interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry." [1]
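To illustrate the core idea of masking attention hotspots, here is a minimal Python sketch. It assumes a precomputed saliency map as input (the paper uses a computational model of visual attention); the percentile threshold and the gray replacement value are made up for illustration and are not the values from the paper.

```python
import numpy as np

def mask_salient_regions(image: np.ndarray, saliency: np.ndarray,
                         percentile: float = 80.0) -> np.ndarray:
    """Gray out the most salient image regions so that gaze passwords are
    less likely to fall on obvious attention hotspots (illustrative sketch).

    image:    H x W x 3 uint8 array
    saliency: H x W float array (higher = more likely to attract attention)
    """
    threshold = np.percentile(saliency, percentile)
    mask = saliency >= threshold          # pixels likely to attract attention
    masked = image.copy()
    masked[mask] = 128                    # replace hotspots with neutral gray
    return masked
```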

Assessing the vulnerability of magnetic gestural authentication [2]

Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real-world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th = 1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods. [2] There is also a YouTube video: http://www.youtube.com/watch?v=vhwURyTp_jY
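The reported threshold (th = 1.67) acts as a decision boundary on the distance between a login attempt and the enrolled magnetic signature. The following toy verification check only sketches that idea; the resampling, normalization, and distance measure are my assumptions for illustration, not the authors' implementation, so the threshold value here is purely nominal.

```python
import numpy as np

def resample(signal: np.ndarray, n: int = 64) -> np.ndarray:
    """Linearly resample a (T, 3) magnetic-field trace to n samples per axis."""
    t_old = np.linspace(0.0, 1.0, len(signal))
    t_new = np.linspace(0.0, 1.0, n)
    return np.column_stack([np.interp(t_new, t_old, signal[:, k]) for k in range(3)])

def verify(attempt: np.ndarray, template: np.ndarray, th: float = 1.67) -> bool:
    """Accept the login attempt if its distance to the enrolled template
    is below the threshold th (illustrative distance measure)."""
    a, b = resample(attempt), resample(template)
    # z-normalize each axis so the comparison is scale- and offset-invariant
    a = (a - a.mean(axis=0)) / (a.std(axis=0) + 1e-9)
    b = (b - b.mean(axis=0)) / (b.std(axis=0) + 1e-9)
    distance = np.sqrt(((a - b) ** 2).mean())
    return distance < th
```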

Looking glass: a field study on noticing interactivity of a shop window [3]
In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2 s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by. [3]

References
[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2208636.2208712 http://doi.acm.org/10.1145/2208636.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2208276.2208352 http://doi.acm.org/10.1145/2208276.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718

Paper and demo in Salzburg at Auto-UI-2011

At the automotive user interface conference in Salzburg we presented some of our research. Salzburg is a really nice place and Manfred and his team did a great job organizing the conference!

Based on the Bachelor's thesis of Stefan Schneegaß and some follow-up work, we published a full paper [1] that describes a KLM model for the car and a prototyping tool that makes use of the model. In the model we look at the specific needs in the car, model rotary controllers, and cater for the limited attention while driving. The prototyping tool provides means to quickly estimate interaction times. It supports visual prototyping using images of the UI and tangible prototyping using Nic Villar's VoodooIO. Looking forward to having Stefan on our team full-time 🙂
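As a rough illustration of what such a KLM-style estimate looks like, here is a minimal Python sketch. The operator names and times are hypothetical placeholders, not the values from the paper, which uses car-specific operators and accounts for the limited attention while driving.

```python
# Hypothetical operator times in seconds (placeholders, not the paper's values).
OPERATOR_TIMES = {
    "reach":  0.45,   # move hand from steering wheel to control
    "press":  0.20,   # press a button
    "rotate": 0.30,   # one detent of a rotary controller
    "glance": 0.50,   # visual attention shift to the display
}

def estimate_task_time(operators: list[str]) -> float:
    """Sum the operator times of a KLM-style task description."""
    return sum(OPERATOR_TIMES[op] for op in operators)

# e.g. selecting a radio station via a rotary controller
print(estimate_task_time(["reach", "glance", "rotate", "rotate", "press"]))
```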

We additionally had a demo on a recently completed thesis by Michael Kienast. Here we looked at how speech and gestures can be combined for controlling functions, such as mirror adjustments or windscreen wipers, in the car. This multimodal approach combines the strengths of gestural interaction and speech interaction [2].

The evening event of the conference was at Festung Hohensalzburg – with a magnificent view over the town!

[1] Stefan Schneegaß, Bastian Pfleging, Dagmar Kern, Albrecht Schmidt. Support for modeling interaction with in-vehicle interfaces. (PDF) Proceedings of 3rd international conference on Automotive User Interfaces and Vehicular Applications 2011 (http://auto-ui.org). Salzburg. 30.11-2.12.2011

[2] Bastian Pfleging, Michael Kienast, Albrecht Schmidt. DEMO: A Multimodal Interaction Style Combining Speech and Touch Interaction in Automotive Environments. Adjunct proceedings of 3rd international conference on Automotive User Interfaces and Vehicular Applications 2011 (http://auto-ui.org). Salzburg. 30.11-2.12.2011

Our Paper and Note at CHI 2010

Over the last year we looked more closely into the potential of eye gaze for implicit interaction. Gazemarks is an approach where the user's gaze is continuously monitored; when the user leaves a screen or display, the last active gaze area is determined and stored [1]. When the user looks back at this display, this region is highlighted. In our study, this reduced the time for attention switching between displays from about 2000 ms to about 700 ms. See the slides or paper for details. This could make the difference that enables people to safely read in the car… but before that, more studies are needed 🙂
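A minimal sketch of the Gazemarks idea in Python: remember the last gaze region per display and highlight it when the gaze returns. The class and method names are made up for illustration; the actual system works on eye-tracker data and renders the highlight on screen.

```python
import time

class Gazemarks:
    """Store the last gaze region per display and highlight it when the
    user's gaze returns to that display (illustrative sketch)."""

    def __init__(self):
        self.last_region = {}        # display id -> (x, y) of last gaze sample
        self.current_display = None

    def on_gaze_sample(self, display_id: str, x: float, y: float):
        if display_id != self.current_display:
            # gaze switched to another display: highlight where the user
            # last looked on the display they are returning to
            if display_id in self.last_region:
                self.highlight(display_id, *self.last_region[display_id])
            self.current_display = display_id
        self.last_region[display_id] = (x, y)

    def highlight(self, display_id: str, x: float, y: float):
        print(f"{time.time():.0f}: highlight ({x:.0f}, {y:.0f}) on {display_id}")
```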

Together with Nokia Research Center in Finland we looked at how we can convey the basic message of an incoming SMS already with the notification tone [2]. Try the Emodetector application for yourself or see the previous post.

[1] Kern, D., Marshall, P., and Schmidt, A. 2010. Gazemarks: gaze-based visual placeholders to ease attention switching. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 2093-2102. DOI= http://doi.acm.org/10.1145/1753326.1753646

[2] Sahami Shirazi, A., Sarjanoja, A., Alt, F., Schmidt, A., and Häkkilä, J. 2010. Understanding the impact of abstracted audio preview of SMS. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 – 15, 2010). CHI ’10. ACM, New York, NY, 1735-1738. DOI= http://doi.acm.org/10.1145/1753326.1753585

PS: the social event was at the aquarium in Atlanta – amazing creatures! I was again surprised how well the N95 camera works even under difficult light conditions…

MUM 2009 in Cambridge, no technical solution for privacy

The 8th International Conference on Mobile and Ubiquitous Multimedia (MUM 2009) was held in Cambridge, UK. The conference is fairly specific and had an acceptance rate of about 33% – have a look at the table of contents for an overview. Florian Michahelles presented our paper on a design space for ubiquitous product recommendation systems [1]. Our work contributes a comprehensive design space that outlines design options for product recommendation systems using mobile and ubiquitous technologies. We think that over the next years mobile recommendation systems have the potential to change the way we shop in the real world. It will probably be normal to have access to in-depth information and price comparisons while browsing in physical stores. The idea has been around for a while, e.g. the Pocket Bargain Finder presented at the first Ubicomp conference [2]. In Germany we also see a reaction from some electronics stores that have asked customers NOT to use a phone or camera in the shop.

The keynote on Tuesday morning was by Martin Rieser on the Art of Mobility. He blogs on this topic on http://mobileaudience.blogspot.com/.
The examples he presented in his keynote concentrated on locative and pervasive media. He characterized locative media as media created through social interaction that are linked to a specific place. He raised the awareness that mapping is very important for our perception of the world, using several different subjective maps – I particularly liked the map encoding travel times to London. A further interesting example was a project by Christian Nold: Bio Mapping – emotional mapping of journeys. QR or other bar code markers on cloth (large and on the outside) have potential … I see this now.

In the afternoon there was a panel on "Security and Privacy: Is it only a matter of time before a massive loss of personal data or identity theft happens on a smart mobile platform?" with David Cleevely, Tim Kindberg, and Derek McAuley. I found the discussion very inspiring, but in the end I doubt more and more that technical solutions alone will solve the problem. I think it is essential to consider the technological, social, and legal framework in which we live. If I had to live in a house that provides absolute safety (without a social and legal framework), it would probably not be a very nice place… hence I think we really need interdisciplinary research in this domain.

[1] von Reischach, F., Michahelles, F., and Schmidt, A. 2009. The design space of ubiquitous product recommendation systems. In Proceedings of the 8th international Conference on Mobile and Ubiquitous Multimedia (Cambridge, United Kingdom, November 22 – 25, 2009). MUM ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1658550.1658552

[2] Brody, A. B. and Gottsman, E. J. 1999. Pocket Bargain Finder: A Handheld Device for Augmented Commerce. In Proceedings of the 1st international Symposium on Handheld and Ubiquitous Computing (Karlsruhe, Germany, September 27 – 29, 1999). H. Gellersen, Ed. Lecture Notes In Computer Science, vol. 1707. Springer-Verlag, London, 44-51.
http://www.springerlink.com/content/jxtd2ybejypr2kfr/

Tangible, Embedded, and Reality-Based Interaction

Together with Antonio’s group we looked at new forms of interaction beyond the desktop. The journal paper Tangible, Embedded, and Reality-Based Interaction [1] gives an overview and examples of recent trends in human-computer interaction and is a good starting point to learn about these topics.

Abstract: Tangible, embedded, and reality-based interaction are among novel concepts of interaction design that will change our usage of computers and be part of our daily life in coming years. In this article, we present an overview of the research area of tangible, embedded, and reality-based interaction as an area of media informatics. Potentials and challenges are demonstrated with four selected case studies from our research work.

[1] Tanja Döring, Antonio Krüger, Albrecht Schmidt, Johannes Schöning: Tangible, Embedded, and Reality-Based Interaction. it – Information Technology 51 (2009) 6, pp. 319-324. (pdf)
http://www.it-information-technology.de/

Our PERCI Article in IEEE Internet Computing

Based on work we did together with DoCoMo Eurolabs in Munich, we have published the article "Perci: Pervasive Service Interaction with the Internet of Things" in the IEEE Internet Computing special issue on the Internet of Things, edited by Frédéric Thiesse and Florian Michahelles.

The paper discusses the linking of digital resources to the real world. We investigated how to augment everyday objects with RFID and Near Field Communication (NFC) tags to enable simpler ways for users to interact with services. We aim at creating digital identities of real-world objects, thereby integrating them into the Internet of Things and associating them with digital information and services. In our experiments we explore how these objects can facilitate access to digital resources and support interaction with them – for example, through mobile devices that feature technologies for discovering, capturing, and using information from tagged objects. See [1] for the full article.
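The essence of the interaction is resolving a tag read on a physical object to the services associated with its digital identity. The toy Python sketch below uses a made-up tag ID, registry, and example URL purely for illustration; the actual Perci system relies on semantic service descriptions and its own infrastructure.

```python
# Illustrative registry mapping an NFC/RFID tag identifier to a tagged object's
# digital identity and associated services (all values are made up).
SERVICE_REGISTRY = {
    "04:A2:24:5B:11:80:01": {
        "object":  "movie poster",
        "service": "https://example.org/services/ticketing",
        "actions": ["buy ticket", "watch trailer"],
    },
}

def resolve_tag(tag_id: str):
    """Look up the digital identity and services of a tagged object."""
    return SERVICE_REGISTRY.get(tag_id)

if __name__ == "__main__":
    entry = resolve_tag("04:A2:24:5B:11:80:01")
    if entry:
        print(f"Tagged object: {entry['object']}")
        print(f"Available actions: {', '.join(entry['actions'])}")
```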

[1] Gregor Broll, Massimo Paolucci, Matthias Wagner, Enrico Rukzio, Albrecht Schmidt, and Heinrich Hußmann. Perci: Pervasive Service Interaction with the Internet of Things. IEEE Internet Computing. November/December 2009 (vol. 13 no. 6). pp. 74-81
http://doi.ieeecomputersociety.org/10.1109/MIC.2009.120

Workshop on Pervasive Advertising at Informatik 2009 in Lübeck

Following our first workshop on this topic in Nara during Pervasive 2009 earlier this year, we held the 2nd Pervasive Advertising Workshop on Friday in Lübeck as part of the German computer science conference Informatik 2009.

The program was interesting and very diverse. Daniel Michelis discussed in his talk how we move from an attention economy towards an engagement economy. He argued that marketing has to move beyond the AIDA(S) model and consider engagement as a central issue. In this context he introduced the notion of Calm Advertising, an interesting analogy to Calm Computing [1]. Peter van Waart talked about meaningful advertising and introduced the concept of meaningful experience – to stay with the economic terminology, consider advertising in an experience economy. For more details see the workshop webpage – the proceedings will be online soon.

Jörg Müller talked about contextual advertising and had a nice picture of the steaming-manhole coffee ad – apparently from NY – but it is not clear whether it was actually deployed.

If you are interested in getting sensor data on the web – and having them also geo-referenced – you should have a look at http://www.52north.org. This is an interesting open source software system that appears quite powerful.

Florian Alt presented our work on interactive and context-aware advertising inside a taxi [2].

[1] Weiser, M., Brown, J.S.: The coming age of calm technology. (1996)

[2] Florian Alt, Alireza Sahami Shirazi, Max Pfeiffer, Paul Holleis, Albrecht Schmidt. TaxiMedia: An Interactive Context-Aware Entertainment and Advertising System (Workshop Paper). 2nd Pervasive Advertising Workshop @ Informatik 2009. Lübeck, Germany 2009.

Best papers at MobileHCI 2009

At the evening event of MobileHCI 2009 the best paper awards were presented. The best short paper was "User expectations and user experience with different modalities in a mobile phone controlled home entertainment system" [1]. There were two full papers that received a best paper award: "Sweep-Shake: finding digital resources in physical environments" [2] and "PhotoMap: using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices" [3]. We often look at the best papers of a conference to better understand what makes a good paper for this community. All three papers above are really well done and worth reading.

PhotoMap [3] is a simple but very cool idea. Many of you have probably taken photos of public maps with your mobile phone (e.g. a park or city map), and PhotoMap explores how to link them to real-time location data from the GPS on the device. The goal is that you can move around in the real space and a dot marks where you are on the photo you took. The implementation, however, does not seem completely simple… There is a YouTube movie on PhotoMap (there would be more movies from the evening event – but I do not link them here – the photo above gives you an idea…)
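The basic georeferencing step can be sketched as follows: given a few reference points whose GPS coordinates and pixel positions on the photo are known, fit an affine mapping and project the current GPS fix onto the image. The coordinates below are made up for illustration; PhotoMap's actual registration of spontaneously taken map photos is more involved than this.

```python
import numpy as np

def fit_affine(gps_points: np.ndarray, pixel_points: np.ndarray) -> np.ndarray:
    """Least-squares affine transform mapping (lat, lon) -> (px, py).
    Needs at least three non-collinear reference points."""
    A = np.hstack([gps_points, np.ones((len(gps_points), 1))])   # N x 3
    coeffs, *_ = np.linalg.lstsq(A, pixel_points, rcond=None)    # 3 x 2
    return coeffs

def gps_to_pixel(coeffs: np.ndarray, lat: float, lon: float):
    px, py = np.array([lat, lon, 1.0]) @ coeffs
    return px, py

# three reference points: known GPS position and where they appear in the photo
gps = np.array([[48.137, 11.575], [48.139, 11.580], [48.135, 11.578]])
pix = np.array([[120.0, 340.0], [610.0, 310.0], [300.0, 720.0]])
coeffs = fit_affine(gps, pix)
print(gps_to_pixel(coeffs, 48.138, 11.577))   # current GPS fix -> dot on the photo
```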

Since last year there is also a history best paper award (for the most influential paper from 10 years ago). Being at the beginning of a new field sometimes pays off… I received this award for the paper on implicit interaction [4], which I presented in Edinburgh at MobileHCI 1999.

[1] Turunen, M., Melto, A., Hella, J., Heimonen, T., Hakulinen, J., Mäkinen, E., Laivo, T., and Soronen, H. 2009. User expectations and user experience with different modalities in a mobile phone controlled home entertainment system. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-4. DOI= http://doi.acm.org/10.1145/1613858.1613898

[2] Robinson, S., Eslambolchilar, P., and Jones, M. 2009. Sweep-Shake: finding digital resources in physical environments. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1613858.1613874

[3] Schöning, J., Krüger, A., Cheverst, K., Rohs, M., Löchtefeld, M., and Taher, F. 2009. PhotoMap: using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1613858.1613876

[4] Albrecht Schmidt. Implicit human computer interaction through context. Personal and Ubiquitous Computing Journal, Springer Verlag London, ISSN:1617-4909, Volume 4, Numbers 2-3 / June 2000. DOI:10.1007/BF01324126, pp. 191-199 (initial version presented at MobileHCI 1999). http://www.springerlink.com/content/u3q14156h6r648h8/

Papers are all similar – Where are the tools to make writing more effective?

Yesterday we discussed (again during the evening event of MobileHCI 2009) how hard it would be to support the process of writing a high-quality research paper. In many conferences there is a well-defined style you need to follow, specific things to include, and certain ways of presenting information. This obviously depends on the type of contribution, but within one contribution type a lot of help could probably be provided to create the skeleton of the paper… Sounds like another project idea 😉


Ethics as material for innovation – German HCI conference – Mensch und Computer

On Tuesday I was at the German human-computer interaction conference Mensch und Computer. The keynote by Alex Kirlik was on Ethical Design (slides from his talk), and he showed how ethics extends beyond action to technology, leading to the central question: why should we build certain systems? His examples and the following discussion made me wonder whether ethics becomes the next material for innovation. Take his example of 9/11, where old technology (airplanes) and a different view on ethics were used to strike; this contrasts with previous/typical warfare, where new technologies (e.g. gunpowder, the nuclear bomb) changed the way wars are conducted.

Considering ethics as a material for innovation is obviously risky, but looking at successful businesses of the last decade such a trend can be argued for (e.g. Google collecting information about users to provide new services, or YouTube allowing users to share content with limited assurance that it is not copyrighted). It would be interesting to have a workshop on this topic sometime in the future…

Grace, who left our group after finishing her Master’s degree (to work in the real world outside of university 😉), presented her paper on how to aid communication between driver and passenger in the car [1].

In the afternoon the working group on tangible interaction in mixed realities (in German Be-greifbare Interaktion in Gemischten Wirklichkeiten) had a workshop and a meeting. We will host the next workshop of the working group in Essen early next year (probably late February or early March).

PS: the next Mensch & Computer conference is at the University of Duisburg-Essen 🙂

[1] Grace Tai, Dagmar Kern, Albrecht Schmidt. Bridging the Communication Gap: A Driver-Passenger Video Link. Mensch und Computer 2009. Berlin.