Our Research at CHI2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Together with researchers from T-Labs, Florian received a best paper award for [3].

Please have a look at the papers… I think they are really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
“With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user’s interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry.” [1]
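The masking idea can be illustrated with a toy sketch: estimate a per-pixel saliency value, then paint over the most salient fraction of the image. To be clear, this is only a minimal illustration – the deviation-from-mean "saliency" below is a crude stand-in for the computational visual-attention model actually used in [1], and the function name and `mask_fraction` parameter are my own.

```python
import numpy as np

def saliency_mask(image, mask_fraction=0.05):
    """Toy saliency masking: hide the most salient fraction of pixels.

    'Saliency' here is just deviation from the mean intensity --
    a crude stand-in for the visual-attention model used in [1].
    """
    saliency = np.abs(image - image.mean())
    # cutoff so that roughly `mask_fraction` of the pixels count as salient
    cutoff = np.quantile(saliency, 1.0 - mask_fraction)
    masked = image.copy()
    masked[saliency > cutoff] = image.mean()  # paint over salient areas
    return masked

# tiny grayscale "image": one bright, attention-grabbing spot
img = np.full((8, 8), 0.5)
img[2, 3] = 1.0
out = saliency_mask(img)
print(out[2, 3])  # the bright spot is replaced by the mean intensity
```

The effect is that the regions an attacker (or a lazy user) would look at first are no longer available as password points.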

Assessing the vulnerability of magnetic gestural authentication [2]

“Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real-world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.” [2] There is also a YouTube video: http://www.youtube.com/watch?v=vhwURyTp_jY
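The accept/reject logic of such a threshold-based scheme can be sketched as follows. Note that the actual similarity measure behind th=1.67 is defined in [2], not here – the resampling, the distance function, and the synthetic traces below are all my own hypothetical illustration of the general idea: accept a login attempt only if its distance to the enrolled signature falls below a fixed threshold.

```python
import numpy as np

def gesture_distance(a, b, n=32):
    """Resample two 3-axis gesture traces to n points and return their
    mean Euclidean distance -- a hypothetical stand-in for the
    similarity measure behind the th=1.67 threshold in [2]."""
    def resample(t):
        t = np.asarray(t, dtype=float)
        idx = np.linspace(0, len(t) - 1, n)
        return np.stack([np.interp(idx, np.arange(len(t)), t[:, k])
                         for k in range(t.shape[1])], axis=1)
    return float(np.mean(np.linalg.norm(resample(a) - resample(b), axis=1)))

def authenticate(template, attempt, threshold=1.67):
    # accept only if the attempt is close enough to the enrolled signature
    return gesture_distance(template, attempt) < threshold

# enrolled signature vs. a close retry and an independent forgery (synthetic)
rng = np.random.default_rng(0)
sig = np.cumsum(rng.normal(size=(50, 3)), axis=0)
retry = sig + rng.normal(scale=0.1, size=sig.shape)
forgery = np.cumsum(rng.normal(size=(50, 3)), axis=0)
print(authenticate(sig, retry), authenticate(sig, forgery))
```

Raising the threshold makes logins more forgiving but easier to forge; the study's point is that at th=1.67 both error rates were zero for their data.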

Looking glass: a field study on noticing interactivity of a shop window [3]
“In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.” [3]

References
[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2207676.2208712 http://doi.acm.org/10.1145/2207676.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2207676.2208352 http://doi.acm.org/10.1145/2207676.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718

Best Paper at AmI 2009, Visions

Florian Alt presented our work on pervasive advertising, in particular on creating dynamic user profiles in the real world [1]. This research was carried out together with colleagues in marketing and software systems and implemented in one of our courses. We are proud that it was named best paper at AmI 2009! Another paper to look at in this context was presented by Jörg Müller; it looked at pervasive advertising utilizing screens on public phone boxes. The study is impressive in size (20 displays all over Münster, 17 participating shops) and duration (1 year, 24/7) [2]. Even though the results are not completely conclusive, it is very interesting to read about the experience of such a large real-world deployment in a research context.

This year's AmI included a vision panel with Juan Carlos Augusto, Florian Michahelles, Jörg Müller, and Donald Patterson, which I chaired with Norbert Streitz. The main questions for us were: Is there a need for a vision? What are the drivers for a new vision? And what is the value of having a technology vision? The discussion was very diverse, touching on various topics. One interesting observation is that many people – including me – expect that a mobile personal device (what is now the mobile phone) will stay at the center of a new vision. A further very insightful comment was that we as a community should try to develop more specific visions (e.g. what education will look like in 2020, or what public transport will look like in 2030) rather than create a Version 2.0 of the overall vision. I think such more specific visions could be valuable for guiding research in Ambient Intelligence in the coming years.

[1] Florian Alt, Moritz Balz, Stefanie Kristes, Alireza Sahami Shirazi, Julian Mennenöh, Albrecht Schmidt, Hendrik Schröder and Michael Goedicke: Adaptive User Profiles in Pervasive Advertising Environments. Proceedings of the 3rd European Conference on Ambient Intelligence (AmI ’09). Springer Berlin / Heidelberg. Salzburg, Austria 2009.

[2] Jörg Müller, Antonio Krüger: MobiDiC: Context Adaptive Digital Signage with Coupons. Proceedings of the 3rd European Conference on Ambient Intelligence (AmI ’09). Springer Berlin / Heidelberg. Salzburg, Austria 2009.

Best papers at MobileHCI 2009

At the evening event of MobileHCI 2009 the best paper awards were presented. The best short paper was “User expectations and user experience with different modalities in a mobile phone controlled home entertainment system” [1]. There were two full papers that received a best paper award: “Sweep-Shake: finding digital resources in physical environments” [2] and “PhotoMap: using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices” [3]. We often look at the best papers of a conference to better understand what makes a good paper for this community. All three papers above are really well done and worthwhile to read.

PhotoMap [3] is a simple but very cool idea. Many of you have probably taken photos of public maps with your mobile phone (e.g. a park or city map), and PhotoMap explores how to link them to real-time location data from the GPS on the device. The goal is that, as you move around in the real space, a dot marks your current position on the photo you took. The implementation, however, seems not entirely trivial… There is a YouTube video on PhotoMap (there would be more videos from the evening event – but I do not link them here – the photo above gives you an idea…).
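The core linking step – turning a GPS fix into a dot on the photo – can be sketched with linear interpolation between two tagged reference points, assuming a roughly north-up, undistorted map. This is my own hypothetical sketch of the general idea; the actual georeferencing approach used in PhotoMap is described in [3].

```python
def gps_to_pixel(lat, lon, ref1, ref2):
    """Map a GPS fix to pixel coordinates on a photographed map,
    given two reference points of the form ((lat, lon), (px, py)).
    Assumes the map is north-up and not perspective-distorted --
    a hypothetical sketch, not PhotoMap's actual method [3]."""
    (lat1, lon1), (x1, y1) = ref1
    (lat2, lon2), (x2, y2) = ref2
    # linear interpolation along each axis (longitude -> x, latitude -> y)
    x = x1 + (lon - lon1) * (x2 - x1) / (lon2 - lon1)
    y = y1 + (lat - lat1) * (y2 - y1) / (lat2 - lat1)
    return x, y

# two tagged corners of the photographed map (coordinates made up)
top_left = ((48.776, 9.170), (0, 0))
bot_right = ((48.770, 9.182), (800, 600))
print(gps_to_pixel(48.773, 9.176, top_left, bot_right))  # ~ (400.0, 300.0)
```

In practice the hard parts are exactly what this sketch assumes away: photos of maps are taken at an angle, maps are not always north-up, and the reference points must come from somewhere – which is presumably why the implementation is not entirely trivial.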

Since last year there has also been a history best paper award (for the most influential paper from 10 years ago). Being at the beginning of a new field sometimes pays off… I received this award for the paper on implicit interaction [4], which I presented in Edinburgh at MobileHCI 1999.

[1] Turunen, M., Melto, A., Hella, J., Heimonen, T., Hakulinen, J., Mäkinen, E., Laivo, T., and Soronen, H. 2009. User expectations and user experience with different modalities in a mobile phone controlled home entertainment system. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-4. DOI= http://doi.acm.org/10.1145/1613858.1613898

[2] Robinson, S., Eslambolchilar, P., and Jones, M. 2009. Sweep-Shake: finding digital resources in physical environments. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1613858.1613874

[3] Schöning, J., Krüger, A., Cheverst, K., Rohs, M., Löchtefeld, M., and Taher, F. 2009. PhotoMap: using spontaneously taken images of public maps for pedestrian navigation tasks on mobile devices. In Proceedings of the 11th international Conference on Human-Computer interaction with Mobile Devices and Services (Bonn, Germany, September 15 – 18, 2009). MobileHCI ’09. ACM, New York, NY, 1-10. DOI= http://doi.acm.org/10.1145/1613858.1613876

[4] Albrecht Schmidt. Implicit human computer interaction through context. Personal and Ubiquitous Computing Journal, Springer Verlag London, ISSN:1617-4909, Volume 4, Numbers 2-3 / June 2000. DOI:10.1007/BF01324126, pp. 191-199 (initial version presented at MobileHCI 1999). http://www.springerlink.com/content/u3q14156h6r648h8/