Karin Bee has defended her dissertation.

Karin Bee (née Leichtenstern) has defended her dissertation at the University of Augsburg. In her dissertation she worked on methods and tools to support a user-centered design process for mobile applications that use a variety of modalities. Some papers describe her work, e.g. [1] and [2]. To me it was particularly interesting that she revisited the experiment from her master's thesis in a smart home in Essex [3] and reproduced some of it in her hybrid evaluation environment.

It is great to see that by now most of our students (HiWis and project students) who worked with us in Munich on the Embedded Interaction Project have finished their PhDs (there are some who still need to hand in – Florian? Raphael? Gregor? You have enough papers – finish it 😉).

In the afternoon I got to see some demos. Elisabeth André has a great team of students. They work on various topics in human-computer interaction, including public display interaction, physiological sensing and emotion detection, and gesture interaction. I am looking forward to a joint workshop of both groups. Elisabeth has an impressive set of publications, which is always a good starting point for affective user interface technologies.

[1] Karin Leichtenstern, Elisabeth André, and Matthias Rehm. Tool-Supported User-Centred Prototyping of Mobile Applications. IJHCR. 2011, 1-21.

[2] Karin Leichtenstern and Elisabeth André. 2010. MoPeDT: features and evaluation of a user-centred prototyping tool. In Proceedings of the 2nd ACM SIGCHI symposium on Engineering interactive computing systems (EICS ’10). ACM, New York, NY, USA, 93-102. DOI=10.1145/1822018.1822033 http://doi.acm.org/10.1145/1822018.1822033

[3] Enrico Rukzio, Karin Leichtenstern, Vic Callaghan, Paul Holleis, Albrecht Schmidt, and Jeannette Chin. 2006. An experimental comparison of physical mobile interaction techniques: touching, pointing and scanning. In Proceedings of the 8th international conference on Ubiquitous Computing (UbiComp’06), Paul Dourish and Adrian Friday (Eds.). Springer-Verlag, Berlin, Heidelberg, 87-104. DOI=10.1007/11853565_6 http://dx.doi.org/10.1007/11853565_6

MobiSys 2012, Keynote by Paul Jones on Mobile Health Challenges

This year’s ACM MobiSys conference is in the Lake District in the UK. I really love this region. When I studied in Manchester 15 years ago, I often came up over the weekend to hike in the mountains here. The setting of the conference hotel, overlooking Lake Windermere, is brilliant.
The opening keynote of MobiSys 2012 was presented by Dr. Paul Jones, the NHS Chief Technology Officer, who talked about “Mobile Challenges in Health”. Health is very dear to people, and approaches to health care differ greatly around the world.

The NHS is a unique institution that provides healthcare to everyone in the UK. It is funded through taxation, and with its budget of 110 billion pounds per year it is one of the cheaper (and yet efficient) health care systems in the world. The UK spends about 7% of its gross domestic product on health care, whereas the US and Germany spend nearly double that percentage. Besides its economic size, the NHS is also one of the biggest employers in the world, similar in size to the US Department of Defense and the Chinese People’s Liberation Army. The major difference to other large employers is that most of the staff in the NHS is highly educated (e.g. doctors) and does not easily take orders.

Paul started out with the statement that technology is critical to providing health care in the future. Doing healthcare as it is currently done will not work much longer: carrying on as before would create costs that society cannot pay. In general, information technology in the health sector is helping to create more efficient systems. He had some examples showing that often very simple systems make a difference. In one case, changing a hospital’s scheduling practice from paper-based diaries to a computer-based system reduced waiting times massively (from several months to weeks, without additional personnel). In another case, laptops were provided to community nurses. This saved 6 hours per week – nearly an extra day of work – as it reduced their need to travel back to the office. Paul argued that this is only a starting point and not the best we can do. Mobile computing has the potential to create better solutions than a laptop, solutions that better fit the real working environment of the users and patients. A further example dealt with the vital signs of a patient. Traditionally these are measured, and when they degrade a nurse calls a junior doctor, who has to respond within a certain time. In reality nurses have to ask more often, and doctors may be delayed. Here they introduced a system with a mobile device that pages/calls the doctors and documents the call (instead of nurses calling the doctors directly). It improved the response times of doctors – mainly because actions are tracked and performance is measured (and in the medical field nobody wants to be the worst).

Paul shared a set of challenges and problems with the audience – in the hope that researchers take inspiration and solve some of the problems 😉

One major challenge is the fragmented nature of the way health care is provided. Each hospital has established processes, and doctors have a way they want to do certain procedures. These processes differ from each other – not a lot in many cases, but different enough that the same software is not going to work. It is not easy to streamline this, as doctors usually know best and many of them make a case why their solution is the only one that does the job properly. Hence general solutions are unlikely to work, and solutions need to be customizable to specific needs.

Another interesting point was about records and paper. Paul argued that the amount of paper records in hospitals is massive, and they are less reliable and safe than many think. It is common that a significant portion of the paper documentation is lost or misplaced. Here a digital solution (even if imperfect) is most certainly better. From our own experience I agree with the observation, but I would think it is really hard to convince people of it.

The common element throughout the talk was that it is key to create systems that fit the requirements. To achieve this, having multidisciplinary teams that understand the user and patient needs seems inevitable. Paul’s examples were based on his experience of seeing users and patients in context. He observed firsthand that real-world environments often do not permit the use of certain technologies or lead to sub-optimal solutions. It is crucial that the needs are understood by the people who design and implement the systems. It may be useful to go beyond the multidisciplinary team and have each developer spend one day in the environment they design for.

Some further problems he discussed are:

  • How to move the data around to the places where it is needed? Patients are transferred (e.g. ambulance to ER, ER to surgeons, etc.) and hence data needs to be handed over. This handover has to work across time (from one visit to the next) and across departments and institutions.
  • Personal mobile devices (“bring your own device”) are a major issue. It seems easy for an individual to use them (e.g. a personal tablet to make notes), but on a system level they create huge problems, from back-up to security. In the medical field another issue arises: the validity of the data is not guaranteed, and hence the data gathered is not useful in the overall process.

A final and very interesting point was: if you are not seriously ill, being in a hospital is a bad idea. Paul argued that the care you get at home or in the community is likely to be better, and you are less likely to be exposed to additional risks. From this the main challenge for the MobiSys community arises: it will be crucial to provide mobile and distributed information systems that work in the context of home care and within the community.

PS: I liked one of the side comments: can we imagine doing a double-blind study on jumbo jet safety? This argument hinted that some of the approaches to research in the medical field are not always the most efficient way to prove the validity of an approach.

If you do not research it – it will not happen?

Over the last days, plans to do research on the use of public data from social networks to calculate someone’s credit risk made big news (e.g. DW). The public (as voiced by journalists) and politicians showed strong opposition and declared that something like this should not be done – or more specifically, that such research should not be done.

I am astonished and a bit surprised by the reaction. Do people really think that if there is no research within universities this will not (does not) happen? If you look at the value of Facebook (even after the last few weeks) it must be very obvious that there is value in social network data which people hope to extract over time…

Personal credit risk assessment (in Germany, by the Schufa) is widely used – from selling you a phone contract to lending you money when buying a house. If you believe that we need personal credit risk assessment, why would you argue that it should work on very incomplete data? Will that make it better? I think the logical consequence of the discussion would be to prohibit pricing based on personal credit risk ratings – but this, too, would be very unfair (at least to the majority). Hence the consequence we see now (the research is not done in universities) is probably not doing much good… it just pushes the work into a place where the public sees little of it (and the companies will not publish it in a few years…).

Keynote at the Pervasive Displays Symposium: Kenton O’Hara

Kenton O’Hara, a senior researcher in the Socio-Digital-Systems group at Microsoft Research in Cambridge, presented the keynote at the pervasive displays symposium in Porto on the topic “Social context and interaction proxemics in pervasive displays“. He highlighted the importance of the spatial relationship between the users and the interactive displays and the different opportunities for interaction that are available when looking at the interaction context.

Using examples from the medical field (the operating theater), he showed the issues that arise from the need for sterile interaction, and hence from avoiding touch interaction and moving towards a touchless interaction mode. A prototype that uses a Microsoft Kinect sensor allows the surgeon to interact with information (e.g. an x-ray image) while working on the patient. It was interesting to see that gestural interaction in this context is not straightforward, as surgeons use tools (and hence do not have their hands free) or gesture as part of the communication in the team.

Another example is a public space game: there are many balls on a screen and a camera looking at the audience. Users can move the balls by body movement, based on a simple edge-detection video tracking mechanism, and when two balls touch they form a bigger ball. Kenton argues that “body-based interaction becomes a public spectacle”, and the interactions of an individual are clearly visible to others. This visibility can lead to inhibition and may reduce the motivation of users to interact. For the success of this game, the design of the simplistic tracking algorithm is one major factor. By tracking edges/blobs, users can play together (e.g. holding hands, parents with their kids in their arms), and hence a wide range of interaction proxemics is supported. He presented some further examples of public display games on BBC large screens, also showing that the concept of interaction proxemics can be used to explain interaction.
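To make the design choice concrete: because the game reacts to any edge in the camera image rather than identifying individual users, a joined silhouette (two people holding hands) pushes balls just like a single body. The following is a minimal illustrative sketch of that idea, not the actual system; the frame representation, thresholds, and function names are my own assumptions, and a real installation would of course process live camera frames.

```python
# Sketch (assumption, not the deployed system): a crude edge detector over a
# grayscale frame, plus a rule that pushes a "ball" away from nearby edges.

def detect_edges(frame, threshold=40):
    """Return the set of (row, col) positions where the intensity changes
    strongly to the right or downward neighbour - a stand-in for edge
    detection on a camera frame (here a plain 2D list of 0-255 values)."""
    edges = set()
    rows, cols = len(frame), len(frame[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            if (abs(frame[r][c] - frame[r][c + 1]) > threshold or
                    abs(frame[r][c] - frame[r + 1][c]) > threshold):
                edges.add((r, c))
    return edges

def push_ball(ball, edges, radius=2):
    """Move a ball one step away from the nearest edge pixel within `radius`
    (Manhattan distance). Any edge - a hand, or two people holding hands -
    pushes balls, without the system knowing who produced it."""
    br, bc = ball
    near = [(er, ec) for (er, ec) in edges
            if abs(er - br) <= radius and abs(ec - bc) <= radius]
    if not near:
        return ball  # no edge nearby: the ball stays put
    # nearest edge pixel, with a deterministic tie-break
    er, ec = min(near, key=lambda e: (abs(e[0] - br) + abs(e[1] - bc), e))
    dr = 0 if er == br else (1 if br > er else -1)
    dc = 0 if ec == bc else (1 if bc > ec else -1)
    return (br + dr, bc + dc)

# Tiny demo: a dark frame with one bright vertical stripe next to a ball.
frame = [[0] * 10 for _ in range(10)]
for r in range(10):
    frame[r][4] = 255  # the stripe produces edges in columns 3 and 4

edges = detect_edges(frame)
print(push_ball((5, 5), edges))  # → (5, 6): the ball is pushed away
```

The point of the sketch is the robustness the keynote highlighted: the game never segments or identifies players, so merged silhouettes are handled for free.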

TVs have changed eating behavior. More recent research on displays in the context of food consumption has, in contrast, been mainly pragmatic (corrective, problem-solving). Kenton argued that we should look at the cultural values of meals and see shared eating as a social practice. Using the example of eating in front of the television (even as a family), he discussed the implications for communication and interaction (basically, the communication is not happening). Looking at more recent technologies such as phones, laptops and tablets and their impact on social dynamics, probably many of us have realized that this is already affecting our daily lives (or who is not taking their phone to the table?). It is very obvious that social relationships and culture change with these technologies. He showed “4Photos” [1], a designed piece of technology to be put in the center of the table, showing four photographs. Users can interact with it from all sides. It is designed to stimulate rather than inhibit communication and to provide opportunities for conversation. It introduces interaction with technologies as a social gesture.

Interested in more? Kenton published a book on public displays in 2003 [2] and has a set of relevant publications in the space of the symposium.

References

[1] Martijn ten Bhömer, John Helmes, Kenton O’Hara, and Elise van den Hoven. 2010. 4Photos: a collaborative photo sharing experience. In Proceedings of the 6th Nordic Conference on Human-Computer Interaction: Extending Boundaries (NordiCHI ’10). ACM, New York, NY, USA, 52-61. DOI=10.1145/1868914.1868925 http://doi.acm.org/10.1145/1868914.1868925

[2] Kenton O’Hara, Mark Perry, Elizabeth Churchill, and Dan Russell. Public and Situated Displays: Social and Interactional Aspects of Shared Display Technologies. Kluwer Academic, 2003.