Karin Bee has defended her dissertation.

Karin Bee (née Leichtenstern) has defended her dissertation at the University of Augsburg. In her dissertation she worked on methods and tools to support a user-centered design process for mobile applications that use a variety of modalities. There are some papers that describe her work, e.g. [1] and [2]. To me it was particularly interesting that she revisited the experiment from her master's thesis in a smart home in Essex [3] and reproduced parts of it in her hybrid evaluation environment.

It is great to see that most of our students (HiWis and project students) who worked with us in Munich on the Embedded Interaction Project have now finished their PhDs. (There are some who still need to hand in – Florian? Raphael? Gregor? You have enough papers – finish it! 😉)

In the afternoon I got to see some demos. Elisabeth André has a great team of students working on various topics in human-computer interaction, including public display interaction, physiological sensing and emotion detection, and gesture interaction. I am looking forward to a joint workshop of both groups. Elisabeth has an impressive set of publications, which is always a good starting point for affective user interface technologies.

[1] Karin Leichtenstern, Elisabeth André, and Matthias Rehm. Tool-Supported User-Centred Prototyping of Mobile Applications. IJHCR, 2011, 1-21.

[2] Karin Leichtenstern and Elisabeth André. 2010. MoPeDT: features and evaluation of a user-centred prototyping tool. In Proceedings of the 2nd ACM SIGCHI symposium on Engineering interactive computing systems (EICS ’10). ACM, New York, NY, USA, 93-102. DOI=10.1145/1822018.1822033 http://doi.acm.org/10.1145/1822018.1822033

[3] Enrico Rukzio, Karin Leichtenstern, Vic Callaghan, Paul Holleis, Albrecht Schmidt, and Jeannette Chin. 2006. An experimental comparison of physical mobile interaction techniques: touching, pointing and scanning. In Proceedings of the 8th international conference on Ubiquitous Computing (UbiComp’06), Paul Dourish and Adrian Friday (Eds.). Springer-Verlag, Berlin, Heidelberg, 87-104. DOI=10.1007/11853565_6 http://dx.doi.org/10.1007/11853565_6

MobiSys 2012, Keynote by Paul Jones on Mobile Health Challenges

This year’s ACM MobiSys conference is in the Lake District in the UK. I really love this region. Already 15 years ago, when I studied in Manchester, I often came up over the weekend to hike in the mountains here. The setting of the conference hotel, overlooking Lake Windermere, is brilliant.

The opening keynote of MobiSys 2012 was presented by Dr. Paul Jones, the NHS Chief Technology Officer, who talked about “Mobile Challenges in Health”. Health is very dear to people, and approaches to health care differ greatly around the world.

The NHS is a unique institution, providing health care to everyone in the UK. It is funded through taxation, and with its budget of 110 billion pounds per year it is one of the cheaper (and yet efficient) health care systems in the world. The UK spends about 7% of its gross domestic product on health care, whereas the US and Germany spend nearly double that percentage. Besides its economic size, the NHS is also one of the biggest employers in the world, similar in size to the US Department of Defense and the Chinese People’s Liberation Army. The major difference from other large employers is that a large part of the NHS staff is highly educated (e.g. doctors) and does not easily take orders.

Paul started out with the statement that technology is critical to providing health care in the future: carrying on as we do now will not work, as the cost would not be payable by society. In general, information technology in the health sector is helping to create more efficient systems. He gave several examples showing that often very simple systems make a difference. In one case, changing a hospital’s scheduling practice from paper-based diaries to a computer-based system reduced waiting times massively (from several months to weeks, without additional personnel). In another case, laptops were provided to community nurses. This saved six hours per week – nearly an extra day of work – as it reduced their need to travel back to the office. Paul argued that this is only a starting point and not the best we can do: mobile computing has the potential to create solutions that fit the real working environments of users and patients better than a laptop. A further example dealt with a patient’s vital signs. Traditionally these are measured, and when they degrade a nurse calls a junior doctor, who has to respond within a certain time. In reality, nurses have to call repeatedly and doctors may be delayed. Here they introduced a system and mobile device to page/call the doctors and document the call (instead of nurses phoning the doctors). It improved the doctors’ response times – mainly because actions are tracked and performance is measured (and in the medical field nobody wants to be the worst).

Paul shared a set of challenges and problems with the audience – in the hope that researchers take inspiration and solve some of them 😉

One major challenge is the fragmented nature of the way health care is provided. Each hospital has established processes, and doctors have their own ways of performing certain procedures. These processes differ from each other – not a lot in many cases, but enough that the same software is not going to work. It is not easy to streamline this, as doctors usually know best, and many of them make a case for why their solution is the only one that does the job properly. Hence general solutions are unlikely to work, and solutions need to be customizable to specific needs.

Another interesting point was about records and paper. Paul argued that the amount of paper records in hospitals is massive, and that they are less reliable and safe than many think. It is common that a significant portion of the paper documentation is lost or misplaced. Here a digital solution (even if imperfect) is most certainly better. From our own experience I agree with this observation, but I think it is really hard to convince people of it.

The common thread through the talk was that it is key to create systems that fit the requirements. To achieve this, having multidisciplinary teams that understand user and patient needs seems indispensable. Paul’s examples were based on his experience of seeing users and patients in context. He observed firsthand that real-world environments often do not permit the use of certain technologies, or lead to sub-optimal solutions. It is crucial that these needs are understood by the people who design and implement the systems. It may even be useful to go beyond the multidisciplinary team and have each developer spend one day in the environment they design for.

Some further problems he discussed are:

  • How do we move data around to the places where it is needed? Patients are transferred (e.g. ambulance to ER, ER to surgeons, etc.) and hence data needs to be handed over. This handover has to work across time (from one visit to the next) and across departments and institutions.
  • Personal mobile devices (“bring your own device”) are a major issue. It seems easy for an individual to use them (e.g. a personal tablet to make notes), but on a system level they create huge problems, from back-up to security. In the medical field a further issue arises: the validity of the data is not guaranteed, and hence the data gathered is not useful in the overall process.

A final and very interesting point was: if you are not seriously ill, being in a hospital is a bad idea. Paul argued that the care you get at home or in the community is likely to be better, and you are less likely to be exposed to additional risks. From this the main challenge for the MobiSys community arises: it will be crucial to provide mobile and distributed information systems that work in the context of home care and within the community.

PS: I liked one of the side comments: can we imagine doing a double-blind study on jumbo jet safety? This hinted that some of the approaches to research in the medical field are not always the most efficient way to prove the validity of an approach.

If you do not research it – it will not happen?

Over the last days, plans to do research on the use of public data from social networks to calculate someone’s credit risk made big news (e.g. DW). The public (as voiced by journalists) and politicians showed strong opposition and declared that something like this should not be done – or, more specifically, that such research should not be done.

I am astonished and a bit surprised by the reaction. Do people really think that if there is no research within universities, this will not (or does not) happen? If you look at the value of Facebook (even after the last few weeks), it must be very obvious that there is value in social network data which people hope to extract over time…

Personal credit risk assessment (in Germany: Schufa) is widely used – from selling you a phone contract to lending you money when buying a house. If you believe that we need personal credit risk assessment, why would you argue that it should work on very incomplete data? Will that make it better? I think the logical consequence of the discussion would be to prohibit pricing based on personal credit risk ratings – but this, too, would be very unfair (at least to the majority). Hence the consequence we see now (the research is not done in universities) is probably not doing much good… it just pushes the work into a place where the public sees little of it (and the companies will not publish it in a few years…).

Keynote at the Pervasive Displays Symposium: Kenton O’Hara

Kenton O’Hara, a senior researcher in the Socio-Digital Systems group at Microsoft Research in Cambridge, presented the keynote at the Pervasive Displays Symposium in Porto on the topic “Social context and interaction proxemics in pervasive displays”. He highlighted the importance of the spatial relationship between users and interactive displays and the different opportunities for interaction that become available when looking at the interaction context.

Using examples from the medical field (the operating theater), he showed the issues that arise from the need for sterile interaction, hence avoiding touch and moving towards touchless interaction. A prototype that uses a Microsoft Kinect sensor allows the surgeon to interact with information (e.g. an X-ray image) while working on the patient. It was interesting to see that gestural interaction in this context is not straightforward, as surgeons use tools (and hence do not have their hands free) or gesture as part of the communication in the team.

Another example is a public space game: there are many balls on a screen and a camera looking at the audience. Users can move the balls by body movement, based on a simple edge-detection video tracking mechanism, and when two balls touch they form a bigger ball. Kenton argues that “body-based interaction becomes a public spectacle”: the interactions of an individual are clearly visible to others. This visibility can lead to inhibition and may reduce users’ motivation to interact. One major factor in the success of this game is the design of the deliberately simplistic tracking algorithm. By tracking edges/blobs, users can play together (e.g. holding hands, parents with kids in their arms), and hence a wide range of interaction proxemics is supported. He presented some further examples of public display games on BBC large screens, also showing that the concept of interaction proxemics can be used to explain interaction.
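
As a technical aside: this kind of coarse edge/blob tracking can be prototyped with off-the-shelf tools. Below is a minimal sketch, assuming OpenCV 4.x and a webcam at index 0; the installation’s actual algorithm was not presented in detail, so treat this as an illustration of the idea, not their implementation.

```python
import cv2

cap = cv2.VideoCapture(0)  # assumed: a webcam facing the audience
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)                   # mirror so feedback matches movement
    mask = subtractor.apply(frame)               # foreground = moving people
    mask = cv2.dilate(mask, None, iterations=4)  # merge blobs, e.g. people holding hands
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) < 1000:            # ignore small noise blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        # A game loop would test the on-screen balls against this
        # rectangle and push any ball that overlaps it.
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("blob tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Note how the simplicity is a feature: a parent with a child on their arm forms a single blob and thus a single “player” – exactly the property Kenton highlighted.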

TVs have changed eating behavior. More recent research on displays in the context of food consumption has, in contrast, been mainly pragmatic (corrective, problem-solving). Kenton argued that we should look at the cultural values of meals and see shared eating as a social practice. Using the example of eating in front of the television (even as a family), he discussed the implications for communication and interaction (basically, the communication is not happening). Looking at more recent technologies such as phones, laptops, and tablets and their impact on social dynamics, many of us have probably realized that this already affects our daily lives (who does not take their phone to the table?). It is very obvious that social relationships and culture change with these technologies. He showed “4Photos” [1], a piece of technology designed to be put at the center of the table, showing four photographs. Users can interact with it from all sides. It is designed to stimulate rather than inhibit communication and to provide opportunities for conversation. It introduces interaction with technology as a social gesture.

Interested in more? Kenton published a book on public displays in 2003 [2] and has a set of relevant publications in the space of the symposium.

References

[1] Martijn ten Bhömer, John Helmes, Kenton O’Hara, and Elise van den Hoven. 2010. 4Photos: a collaborative photo sharing experience. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries (NordiCHI ’10). ACM, New York, NY, USA, 52-61. DOI=10.1145/1868914.1868925 http://doi.acm.org/10.1145/1868914.1868925

[2] Kenton O’Hara, Mark Perry, Elizabeth Churchill, and Dan Russell. Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies. Kluwer Academic, 2003.

Visiting the Culture Lab in Newcastle

While in the north of England I stopped by the Culture Lab in Newcastle. If the CHI conference is a measure of research quality in human-computer interaction, the Culture Lab is currently one of the places to be – if you are not convinced, have a look at Patrick Olivier’s publications. The lab is one of the few places where I think a real ubicomp spirit is left – people are developing new hardware and devices (e.g. mini data acquisition boards, specific wireless sensors, embedded actuators), and interdisciplinary research plays a central role. This is very refreshing to see, especially as so many others in ubicomp have moved to mainly creating software on phones and tablets…

Diana, one of our former students from Duisburg-Essen, is currently working on her master’s thesis in Newcastle. She looks into new tangible forms of interaction on tabletop UIs; the actuation of controls in particular is a central question. The approach she uses for moving things is, compared to other approaches, e.g. [1], very simple but effective – I am looking forward to reading the paper on the technical details (I promised not to tell any details here). The example application she has developed is in chemistry education.

Some years back, at a visit to the Culture Lab, I had already seen some of the concepts and ideas for the kitchen. Over the last years this has progressed, and the current state is very appealing. I really think the screens behind glass in the black design make a huge difference. Using a set of small sensors, they have implemented a set of aware kitchen utensils [2]. Matthias Kranz (back in our group in Munich) worked on a similar idea and created a knife that knows what it cuts [3]. It seems worthwhile to explore the aware-artifacts vision further…

References
[1] Gian Pangaro, Dan Maynes-Aminzade, and Hiroshi Ishii. 2002. The actuated workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST ’02). ACM, New York, NY, USA, 181-190. DOI=10.1145/571985.572011 http://doi.acm.org/10.1145/571985.572011 

[2] Wagner, J., Ploetz, T., van Halteren, A., Hoonhout, J., Moynihan, P., Jackson, D., Ladha, C., et al. 2011. Towards a Pervasive Kitchen Infrastructure for Measuring Cooking Competence. In Proceedings of the International Conference on Pervasive Computing Technologies for Healthcare, 107-114. (PDF)

[3] Matthias Kranz, Albrecht Schmidt, Alexis Maldonado, Radu Bogdan Rusu, Michael Beetz, Benedikt Hörnler, and Gerhard Rigoll. 2007. Context-aware kitchen utilities. In Proceedings of the 1st international conference on Tangible and embedded interaction (TEI ’07). ACM, New York, NY, USA, 213-214. DOI=10.1145/1226969.1227013 http://doi.acm.org/10.1145/1226969.1227013 (PDF)

Media art, VIS Excursion to ZKM in Karlsruhe

This afternoon we (over 40 people from VIS and VISUS at the University of Stuttgart) went to Karlsruhe to visit the ZKM. We got guided tours of the panorama laboratory, the historic video laboratory, the SoundArt exhibition, and parts of the regular exhibition. Additionally, Prof. Gunzenhäuser gave a short introduction to the Zuse Z22 that is on show there, too.

The ZKM is a leading center for digital and media art that includes a museum for media art and modern art, several research institutes, and an art and design school. The approach is to bring media artists, works of art, research in media art, and teaching in this field close together (within a single large building). The exhibitions include major media art works from the last 40 years.

The panorama laboratory is a 360-degree (minus a door) projection. Even though the resolution of the powerwall at VISUS [1] is higher and its presentation is in 3D, the 360-degree 10-megapixel panorama screen results in an exciting immersion. Even without 3D, being surrounded by media creates a feeling of being in the middle of something that happens around you. Vivien described the sensation of movement as similar to sitting in a train: the moment another train pulls out of the station, you have a hard time telling who is moving. I think such immersive environments could become very common once we have digital display wallpaper.

The historic video laboratory is concerned with “rescuing” old artistic video material. We sometimes complain about the variety of video codecs, but looking at the many different formats for tapes and cassettes, this problem has a long tradition. Looking at historic split-screen videos that were created using analog technologies, one appreciates the virtues of digital video editing… There are two amazing films by Zbigniew Rybczyński: Nowa Książka (New Book): http://www.youtube.com/watch?v=46Kt0HmXfr4 and Tango: http://vodpod.com/watch/3791700-zbigniew-rybczynski-tango-1983

The current SoundArt exhibition is worthwhile. There are several indoor and outdoor installations on sound. In the yard there is a monument built of speakers (in analogy to the oracle of Delphi) that you can call from anywhere (+49 721 81001818) to get 3 minutes of time to talk to whoever is in the vicinity of the installation. Another exhibit sonified electromagnetic fields from different environments in an installation called The Cloud.

[1] Powerwall at VISUS at the University of Stuttgart (6 m by 2.20 m, 88 million pixels, 44 million pixels per eye for 3D). http://www.visus.uni-stuttgart.de/institut/visualisierungslabor/technischer-aufbau.html

Golden Doctorate – 50 years since Prof. Gunzenhäuser completed his PhD

It is now 50 years since Prof. Rul Gunzenhäuser, my predecessor on the chair for human-computer interaction and interactive systems at the University of Stuttgart, defended his PhD. Some months back I came across his PhD thesis “Ästhetisches Maß und ästhetische Information“ (aesthetic measure and aesthetic information) [1], supervised by Prof. Max Bense, and I was seriously impressed.
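
For readers who do not know the topic: the thesis takes up George D. Birkhoff’s classic aesthetic measure, which Bense’s school reinterpreted in information-theoretic terms. In Birkhoff’s original formulation (my summary, not a quote from the thesis), the measure relates order and complexity:

```latex
% Birkhoff's aesthetic measure (1933): the aesthetic value M of an
% object grows with its order O and shrinks with its complexity C.
M = \frac{O}{C}
```

As far as I understand the thesis, Gunzenhäuser’s contribution was to recast order and complexity in terms of Shannon’s information theory, which makes the measure computable for simple classes of objects.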

He is one of the few truly interdisciplinary people I know. And in contrast to modern interpretations of interdisciplinarity (people from different disciplines working together), he is interdisciplinary in his own education and work. He studied math, physics, and philosophy; worked during his studies in a company making (radio) tubes; completed a teacher training; did his PhD in philosophy, but thematically very close to the then-emerging field of computer science; and later became a post-doc in the computing center. He taught didactics of mathematics at a teacher-training university, was a visiting professor at the State University of New York, and in 1973 finally became professor for computer science at the University of Stuttgart, starting the department for dialog systems. This unique educational path shaped his research and, I would expect, his whole person. Seeing this career path, I have even more trouble accepting the streamlining of our educational system and find it easier to relate to a renaissance educational ideal.

Yesterday evening we had a small seminar and gathering to mark the 50th anniversary of his PhD. Our colleague Prof. Catrin Misselhorn, a successor on the chair of philosophy held by Max Bense, talked about “Aesthetic as Science?” (with a question mark) and started with the statement that what people did in this area 50 years ago is completely dated, if not largely wrong. I found the analysis very interesting and enlightening, as it highlights that scientific results do not need to be non-transient to be relevant. For a mathematician this may be hard to grasp, but for someone in computing, and especially in human-computer interaction, this is a relief. It shows that scientific endeavors have to be relevant in their time, but that their lasting value may lie specifically in the fact that they move the field a single step forward. Looking back at human-computer interaction, a lot of the research from the 70s, 80s, and 90s now looks really dated, but we should not be fooled: without this work, interactive systems would not be where they are now.


Prof. Frieder Nake, one of the pioneers of generative art and a friend and colleague of Prof. Gunzenhäuser, reflected on the early work on computers and aesthetics and on computer-generated art. He, too, argued that the original approach is ‘dead’, but that the spirit of computer-generated art is stronger now than ever, with many new tools available. He described early and heated discussions between philosophers, artists, and people who made computer-generated art. One interesting approach to solving the dispute is to see computer-generated art as “artificial art” (künstliche Kunst).

The short take away message from the event is:
If you do research in HCI, do something that is fundamentally new. Question existing approaches and create new ideas and concepts. Don’t worry about whether it will last forever; accept that your research will likely be ‘only’ one step along the way. It has to be relevant when it is done; it matters less that it may have little relevance some 20 or 50 years later.

[1] Rul Gunzenhäuser. Ästhetisches Maß und ästhetische Information. 1962.

Share your digital activities on Android – AppTicker

If you share an apartment with a friend, you know what they do. There is no need to communicate “I am watching TV” or “I am cooking”, as this is pretty obvious. In the digital space this is much more difficult. Sharing what we engage with and peripherally perceiving what others do is not yet trivial.

Niels Henze and Alireza Sahami in our group have made a new attempt to research how to bridge this gap. With AppTicker for Android they have released software that offers a means to share the usage of applications on your phone with your friends on Facebook. You can choose that, whenever you start a certain app (e.g. the web browser, the camera, or the public transport app), this is shared in your activities on Facebook. The middle screenshot shows the sharing controls.

Additionally, the app provides a personal log (left screenshot) of all the apps that were used. I found that feature quite interesting, and when looking at it I really started to reflect on my app usage patterns. If you are curious, have an Android phone, and use Facebook, please have a go and try it out.

The App homepage on our server: http://projects.hcilab.org/appticker/
Get it directly from Google Play or search for AppTicker in Google Play.

Our Research at CHI2012 – usable security and public displays

This year we have the chance to share some of our research with the community at CHI2012. The work focuses on usable security ([1] and [2]) and public display systems [3]. Florian, together with the researchers from T-Labs, received a best paper award for [3].

Please have a look at the papers… I think they are really worthwhile.

Increasing the security of gaze-based graphical passwords [1]
“With computers being used ever more ubiquitously in situations where privacy is important, secure user authentication is a central requirement. Gaze-based graphical passwords are a particularly promising means for shoulder-surfing-resistant authentication, but selecting secure passwords remains challenging. In this paper, we present a novel gaze-based authentication scheme that makes use of cued-recall graphical passwords on a single image. In order to increase password security, our approach uses a computational model of visual attention to mask those areas of the image that are most likely to attract visual attention. We create a realistic threat model for attacks that may occur in public settings, such as filming the user’s interaction while drawing money from an ATM. Based on a 12-participant user study, we show that our approach is significantly more secure than a standard image-based authentication and gaze-based 4-digit PIN entry.“ [1]
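
To illustrate the core idea – masking the most attention-grabbing regions of the image so that gaze passwords become harder to guess – here is a minimal sketch. It uses OpenCV’s spectral-residual saliency as a stand-in for the paper’s computational model of visual attention (an assumption on my part; the file name and the 20% threshold are made up for illustration):

```python
import cv2
import numpy as np

# Hypothetical input image for the graphical password.
img = cv2.imread("password_image.png")

# Spectral-residual saliency as a stand-in for the paper's visual
# attention model (requires opencv-contrib-python).
saliency = cv2.saliency.StaticSaliencySpectralResidual_create()
ok, saliency_map = saliency.computeSaliency(img)

# Mask the most attention-grabbing 20% of pixels (threshold chosen
# for illustration only) so they cannot become gaze-password points.
threshold = np.quantile(saliency_map, 0.8)
mask = saliency_map >= threshold
masked = img.copy()
masked[mask] = 0  # black out the salient regions

cv2.imwrite("masked_password_image.png", masked)
```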

Assessing the vulnerability of magnetic gestural authentication [2]

“Secure user authentication on mobile phones is crucial, as they store highly sensitive information. Common approaches to authenticate a user on a mobile phone are based either on entering a PIN, a password, or drawing a pattern. However, these authentication methods are vulnerable to the shoulder surfing attack. The risk of this attack has increased since means for recording high-resolution videos are cheaply and widely accessible. If the attacker can videotape the authentication process, PINs, passwords, and patterns do not even provide the most basic level of security. In this project, we assessed the vulnerability of a magnetic gestural authentication method to the video-based shoulder surfing attack. We chose a scenario that is favourable to the attacker. In a real world environment, we videotaped the interactions of four users performing magnetic signatures on a phone, in the presence of HD cameras from four different angles. We then recruited 22 participants and asked them to watch the videos and try to forge the signatures. The results revealed that with a certain threshold, i.e., th=1.67, none of the forging attacks was successful, whereas at this level all eligible login attempts were successfully recognized. The qualitative feedback also indicated that users found the magnetic gestural signature authentication method to be more secure than PIN-based and 2D signature methods.“ [2] There is also a YouTube video: http://www.youtube.com/watch?v=vhwURyTp_jY
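
The abstract does not spell out the matching algorithm behind the threshold th=1.67. Purely as a sketch of how threshold-based signature verification works in principle (the distance metric, trace length, and all names are my assumptions, not the authors’ method):

```python
import numpy as np

def signature_distance(attempt: np.ndarray, template: np.ndarray) -> float:
    """Mean Euclidean distance between two 3-axis magnetometer traces,
    both resampled to the same length (illustrative metric only)."""
    return float(np.linalg.norm(attempt - template, axis=1).mean())

def authenticate(attempt, templates, threshold=1.67):
    # Accept the login if the attempt is close enough to any enrolled
    # template of the legitimate user.
    return any(signature_distance(attempt, t) < threshold for t in templates)

# Usage with synthetic traces (100 samples x 3 axes):
rng = np.random.default_rng(0)
enrolled = [rng.normal(size=(100, 3)) for _ in range(3)]
print(authenticate(enrolled[0] + 0.01, enrolled))         # genuine-like: True
print(authenticate(rng.normal(size=(100, 3)), enrolled))  # forgery-like: False
```

The design tension is visible even in this toy: lowering the threshold rejects more forgeries but also more eligible logins, which is why the reported operating point (no successful forgeries, all genuine logins accepted) is a strong result.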

Looking glass: a field study on noticing interactivity of a shop window [3]
“In this paper we present our findings from a lab and a field study investigating how passers-by notice the interactivity of public displays. We designed an interactive installation that uses visual feedback to the incidental movements of passers-by to communicate its interactivity. The lab study reveals: (1) Mirrored user silhouettes and images are more effective than avatar-like representations. (2) It takes time to notice the interactivity (approx. 1.2s). In the field study, three displays were installed during three weeks in shop windows, and data about 502 interaction sessions were collected. Our observations show: (1) Significantly more passers-by interact when immediately showing the mirrored user image (+90%) or silhouette (+47%) compared to a traditional attract sequence with call-to-action. (2) Passers-by often notice interactivity late and have to walk back to interact (the landing effect). (3) If somebody is already interacting, others begin interaction behind the ones already interacting, forming multiple rows (the honeypot effect). Our findings can be used to design public display applications and shop windows that more effectively communicate interactivity to passers-by.“ [3]
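
To make the idea of the mirrored silhouette concrete, here is a minimal sketch of that kind of visual feedback, assuming OpenCV and a webcam; this is my illustration, not the implementation from the paper:

```python
import cv2

cap = cv2.VideoCapture(0)
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.flip(frame, 1)  # mirror so the feedback matches body movement
    mask = bg.apply(frame)
    # Show passers-by as a silhouette cut out of the live camera image.
    silhouette = cv2.bitwise_and(frame, frame, mask=mask)
    cv2.imshow("shop window", silhouette)
    if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```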

References
[1] Andreas Bulling, Florian Alt, and Albrecht Schmidt. 2012. Increasing the security of gaze-based cued-recall graphical passwords using saliency masks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 3011-3020. DOI=10.1145/2208636.2208712 http://doi.acm.org/10.1145/2208636.2208712
[2] Alireza Sahami Shirazi, Peyman Moghadam, Hamed Ketabdar, and Albrecht Schmidt. 2012. Assessing the vulnerability of magnetic gestural authentication to video-based shoulder surfing attacks. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 2045-2048. DOI=10.1145/2208276.2208352 http://doi.acm.org/10.1145/2208276.2208352
[3] Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking glass: a field study on noticing interactivity of a shop window. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI ’12). ACM, New York, NY, USA, 297-306. DOI=10.1145/2207676.2207718 http://doi.acm.org/10.1145/2207676.2207718

Introduction to the special issue on interaction beyond the desktop

After coming back from CHI2012 in Austin, I found my paper copy of the April 2012 issue of IEEE Computer magazine in my letter box. This is our special issue on interaction beyond the desktop. Having the physical copy is always nice (probably because I grew up with paper magazines ;-).

The guest editors’ introduction [1] is an experiment, as we include photos from all papers on the theme. The rationale is that most people will probably not have the paper copy in their hands, and in the digital version getting an overview of the papers is harder. That is why we think including the photos helps to make readers curious to look at the papers in the issue. Please let us know if you think this is a good idea…

[1] Albrecht Schmidt and Elizabeth Churchill. Interaction Beyond the Keyboard. IEEE Computer, April 2012, pp. 21–24. (PDF). Link to the article in Computing Now.