Session on Tactile UIs

Tampere University presented a study in which a rotating element is used to create tactile output, and the emotional perception of the stimuli was assessed (http://mobilehaptics.cs.uta.fi [1]). One application scenario is to use haptic feedback to create applications that allow us to “be in touch”. From Stephen Brewster’s group a project was presented that looks into how the performance of a touchscreen keyboard can be enhanced by tactile feedback [2]. In one condition they use two actuators. Both papers are interesting and provide insights for two of our current projects on multi-tactile output.

[1] Salminen, K., Surakka, V., Lylykangas, J., Raisamo, J., Saarinen, R., Raisamo, R., Rantala, J., and Evreinov, G. 2008. Emotional and behavioral responses to haptic stimulation. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1555-1562. DOI= http://doi.acm.org/10.1145/1357054.1357298

[2] Hoggan, E., Brewster, S. A., and Johnston, J. 2008. Investigating the effectiveness of tactile feedback for mobile touchscreens. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 – 10, 2008). CHI ’08. ACM, New York, NY, 1573-1582. DOI= http://doi.acm.org/10.1145/1357054.1357300

CHI Conference in Florence

On Sunday afternoon I flew to Florence, and in the evening we met up with former colleagues – CHI always feels like a school reunion 😉 and it is great to get first-hand reports on what everyone is currently working on. On the plane I met Peter Thomas (editor of the Ubiquitous Computing Journal) and we talked about the option of a special issue on automotive…

We have rented a house in the Tuscan mountains together with Antonio’s group and collaborators from BMW research and T-Labs. Even though we have to commute into Florence every day, it is just great that we have our “own” house – and it is much cheaper (but we have to do our own dishes).

The conference is massive – 2,300 people. There is a lot of interesting work, and hence it is not feasible to cover it in a few sentences. Nevertheless, here are some random pointers:

In the keynote a reference to an old reading machine by Athanasius Kircher was mentioned.

Mouse Mischief – educational software – 30 mice connected to 1 PC – cool!

Reality-based interaction – conceptual paper – arguing that things should behave as in the real world – an interesting concept bringing together many new UI ideas

Inflatable mouse – cool technology from Korea – interesting use cases – we could integrate this in some of our projects (not inflating the mouse but inflating other things)

Multiple Maps – Synthesizing many maps – could be interesting for new navigation functions

Rub the Stane – interactive surfaces – detection of scratching noises only using a microphone

Usability evaluation considered harmful – the annual discussion on how to make CHI more interesting continues

It seems there is currently some work going on looking at technologies in religious practice. Over lunch we developed interesting ideas towards remote access to multimedia information (e.g. services of one’s local church) and sharing awareness. This domain is intriguing because churches often form tight communities and content is regularly produced and available. Perhaps we should follow up on this with a project…

Diary study on mobile information needs – good base literature on what information people need/use when they are mobile

K-Sketch – cool sketching technique.

Crowdsourcing user studies – reminded me of my visit at http://humangrid.eu

Lean and Zoom – simple idea – you come close it gets bigger – nicely done

Work on our new lab space started – ideas for intelligent building material

This week work on our new lab space started 🙂 With all the drilling and hammering, leaving for CHI in Florence seemed like perfect timing. Our rooms are located in a listed historical building and hence planning is always a little more complicated, but we are compensated by getting to work in a really nice building.

As I was involved in planning the space for the lab, we had the opportunity to integrate an area dedicated to large interactive surfaces where we can explore different options for interaction.

Seeing the process of planning and carrying out indoor building work, ideas related to smart building materials inevitably spring to mind. Much work goes into communication between the different people involved in the process and into establishing and communicating the current status (structure, power routing, ventilation shafts, insulation, etc.) of the building. If we imagine that every brick, fixture, panel, screw and cable used could provide information about its position and status, we could create valuable applications. Obviously this is always based on the assumption that computing and communication get cheaper… I think it could be an interesting student project to systematically assess which building materials would benefit most from sensing (or self-awareness) and processing, and what applications this would enable; and in a second step to create and validate a prototype.
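To make the idea a bit more concrete, here is a minimal sketch of how self-describing building components could be queried, e.g. to locate power routing behind a finished wall. The data model and all names are my own invention for illustration, not an existing system:

```python
from dataclasses import dataclass

@dataclass
class BuildingComponent:
    """A building element that can report its own identity and state."""
    kind: str        # e.g. "cable", "panel", "ventilation shaft"
    position: tuple  # (x, y, z) in metres within the building, assumed known
    status: str      # e.g. "installed", "live", "insulated"

def find(components, kind):
    """Return all components of a given kind, e.g. to locate power routing."""
    return [c for c in components if c.kind == kind]

# A wall section as it might describe itself during construction:
wall = [
    BuildingComponent("cable", (2.0, 0.3, 1.1), "live"),
    BuildingComponent("panel", (2.0, 0.0, 1.0), "installed"),
    BuildingComponent("ventilation shaft", (2.5, 0.3, 2.4), "open"),
]

for cable in find(wall, "cable"):
    print(cable.position, cable.status)  # where does the power routing run?
```

The interesting research question is less the data model than which materials would justify the cost of such self-awareness in the first place.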

Humangrid – are humans easier to program than systems?

In the afternoon I visited humangrid, a startup company in Dortmund. Their basic idea is to create a platform that offers opportunities for crowdsourcing – basically outsourcing small tasks that are easy for humans to perform to a large number of clickworkers. One example of such a scenario is the tagging and classification of media. It is interesting that they aim to create a platform that offers real contracts and provides guarantees – which in my eyes makes it more ambitious than Amazon’s Mechanical Turk.

One interesting argument is that programming humans (as intelligent processors) to do a certain task that involves intelligence is easier and cheaper than creating software that does it fully automatically. Obviously, with software there is nearly zero cost for performing the tasks once the software is completed; however, if the development costs are extremely high, paying a small amount to the human processor for each task may still be cheaper. The idea is a bit like creating a prototype using Wizard of Oz – and not replacing the wizard in the final version.
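As a back-of-the-envelope illustration of this trade-off (the numbers below are entirely made up), one can compute the task volume at which full automation overtakes paying clickworkers per task:

```python
def break_even_tasks(dev_cost, run_cost_per_task, pay_per_task):
    """Number of tasks above which automation becomes cheaper than
    paying a human worker per task.

    dev_cost: one-off cost of developing the automated software
    run_cost_per_task: near-zero marginal cost of running the software
    pay_per_task: fee paid to a clickworker per task
    """
    saving_per_task = pay_per_task - run_cost_per_task
    if saving_per_task <= 0:
        return float("inf")  # automation never pays off
    return dev_cost / saving_per_task

# Illustrative: developing an automatic image classifier for 100,000 EUR
# vs. paying 0.05 EUR per tagged image.
print(break_even_tasks(100_000, 0.001, 0.05))  # ≈ 2 million tasks
```

Below that volume, the "programmed humans" are the cheaper processor – exactly the regime humangrid is betting on.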

In our discussion we developed some ideas about how pervasive computing and mobile technologies can link to the overall concept of the human grid and crowdsourcing, creating opportunities for new services that are currently not possible. One of our students will start a master’s thesis on this idea next month – I am already curious whether we can get it working.

Have Not Changed Profession – Hospitals are complex

This morning we had the great opportunity to observe and discuss workflows and work practice in the operating area of the Elisabeth hospital in Essen. It was amazing how much time the (really busy) personnel gave us, and this provided us with many new insights.

The complexity of scheduling patients, operations, equipment and consumables in a very dynamic environment poses great challenges, and it was interesting to see how well it works with current technologies. However, looking at the systems used and considering upcoming pervasive computing technologies, a great potential for easing tasks and processes is apparent. Keeping track of things and people, as well as documenting actions, are central areas that could benefit.

From a user interface perspective it is very clear that paper and phone communication play an important role, even in such a high-tech environment. We should look a bit more into the Anoto pen technology – perhaps this could be an enabler for some ideas we discussed. Several ideas that relate to implicit interaction and context awareness (already partly discussed in the context of a project in Munich [1]) re-surfaced. Similarly, questions related to data access and search tools seem to play an interesting role. With all the need for documentation, it is relevant to re-think in what ways data is stored and when to analyze data (at storage time or at retrieval time).

One general message from such a visit is to appreciate people’s insight into these processes, which clearly indicates that a user-centered design process is the only suitable way to move innovation in such environments forward and thereby create ownership and acceptance.

[1] A. Schmidt, F. Alt, D. Wilhelm, J. Niggemann, H. Feussner. Experimenting with ubiquitous computing technologies in productive environments. e & i Elektrotechnik und Informationstechnik, Springer Verlag. Volume 123, Number 4 / April, 2006. pages 135-139

DIY automotive UI design – or how hard is it to design for older people

The picture does not show a research prototype – it shows the actual interior of a 5-series BMW (fairly recent model). The driver (an elderly lady) adapted the UI to suit her needs. This modification includes labeling the controls which are important, writing some instructions for more complicated controls close to them (thereby implementing one of the key ideas of embedded information [1]), and covering some controls that are “useless” to the user.

At first I assumed this was a prank* – but it seems to be genuine, and that makes it really interesting and carries important lessons with regard to designing for drivers aged 80 and older. Having different skins (not just for GUIs but in a more physical approach) as well as UI components that can be composed (e.g. based on user needs) in the embedded and tangible domain seems challenging but may open new opportunities for customized UIs. Perhaps investigating ideas for personalizing physical user interfaces – and in particular car UIs – may be an interesting project.

[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop ‘Ubiquitous Display Environments’, September 2004. http://www.hcilab.org/documents/EmbeddedInformationWorkshopUbiComp2004.pdf

* will try to get more evidence that it is real 🙂

Application Workshop of KDUbiq in Porto

After frost and snow yesterday morning in Germany, being in Porto (Portugal) is quite a treat. The KDUbiq application workshop runs in parallel to the summer school, and yesterday evening it was interesting to meet up with some of the people teaching there.

The more I learn about data mining and machine learning, the more I see even greater potential in many ubicomp application domains. In my talk “Ubicomp Applications and Beyond – Research Challenges and Visions” I looked back at selected applications and systems that we have developed over the last 10 years (have a look at the slides – I, too, was surprised at the variety of projects we did over the years ;-). So far we have often used only basic machine learning methods in our implementations – in many cases, creating a version 2 of these systems, where machine learning research is brought together with ubicomp research and new technology platforms, could make a real difference.

Alessandro Donati from ESA gave a talk “Technology for challenging future space missions” which introduced several challenges. He explained their approach to introducing technology into mission control. The basic idea is that the technology providers create a new application or tool together with the users. He strongly argued for a user-centred design and development process. It is interesting to see that the concept of user-centred development processes is becoming more widespread and goes beyond classical user interfaces into complex system development.

User-generated tutorials – implicit interaction as basis for learning

After inspiring discussions during the workshop and in the evening, I reconsidered some ideas for tutorials generated automatically from user interaction. The basic idea is to capture application usage (e.g. using UsaProxy and doing screen capture) continuously – hard disks are nowadays big enough 😉 Using query mechanisms and data mining, a user can ask for a topic and will then get samples of use (related to this situation). It raises some privacy questions, but I think this approach could enable a new way of creating e-learning content…. maybe a project topic?
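A toy sketch of the query step might look like this. The event format and the keyword matching are stand-ins I made up; a real system would mine the captured interaction logs and attach the corresponding screen recordings:

```python
from dataclasses import dataclass

@dataclass
class UsageEvent:
    """One captured interaction step (hypothetical log format)."""
    timestamp: float
    app: str
    action: str  # free-text description of what the user did

def query_tutorial(log, topic):
    """Return captured usage samples whose action mentions the topic.
    A stand-in for the data-mining step."""
    topic = topic.lower()
    return [e for e in log if topic in e.action.lower()]

# A continuously recorded log of real use:
log = [
    UsageEvent(10.0, "editor", "open style dialog"),
    UsageEvent(12.5, "editor", "set paragraph style to Heading 1"),
    UsageEvent(40.2, "browser", "search for train tickets"),
]

# A learner asks "how do I work with styles?" and gets real usage samples:
for event in query_tutorial(log, "style"):
    print(event.timestamp, event.app, event.action)
```

The privacy question is then essentially which parts of such a log may leave the user's machine at all.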

Visiting the inHaus in Duisburg

This morning we visited the inHaus innovation center in Duisburg (run by Fraunhofer, located on the University campus). The inHaus is a prototype of a smart environment and a pretty unique research, development and experimentation facility in Germany. We got a tour of the house and Torsten Stevens from Fraunhofer IMS showed us some current developments and several demos. Some of the demos reminded me of work we started in Lancaster, but never pushed forward beyond a research prototype, e.g. the load sensing experiments [1], [2].

The inHaus impressively demonstrates the technical feasibility of home automation and the potential of intelligent living spaces. However, beyond that I strongly believe that intelligent environments have to move towards the user – embracing more the way people live their lives and providing support for user needs. Together with colleagues from Microsoft Research and Georgia Tech we are organizing the workshop Pervasive Computing at Home, which is held as part of Pervasive 2008 in Sydney and focuses on this topic.

Currently the market for smart homes is still small. But looking at technological advances, it is not hard to imagine that some technologies and services will soon move from “a luxury gadget” to “a common tool”. Perhaps wellness, ambient assisted living and home health care are initial areas. In this field we will jointly supervise a thesis project of one of our students over the next months.

Currently most products for smart homes are high quality, premium, high priced, and designed for a long lifetime (typically 10 to 20 years). Looking at what happened in other markets (e.g. navigation systems, now sold at €150 retail including a GPS unit, maps, touch screen and video player), it seems to me there is definitely an interesting space for non-premium products in the domain of intelligent environments.

[1] Schmidt, A., Strohbach, M., Laerhoven, K. v., Friday, A., and Gellersen, H. 2002. Context Acquisition Based on Load Sensing. In Proceedings of the 4th international Conference on Ubiquitous Computing (Göteborg, Sweden, September 29 – October 01, 2002). G. Borriello and L. E. Holmquist, Eds. Lecture Notes In Computer Science, vol. 2498. Springer-Verlag, London, 333-350.

[2] Albrecht Schmidt, Martin Strohbach, Kristof Van Laerhoven, Hans-Werner Gellersen: Ubiquitous Interaction – Using Surfaces in Everyday Environments as Pointing Devices. User Interfaces for All 2002. Springer LNCS.

OLPC – cute and interesting – but what type of computer is it?

After the conference I finally had some time to try out my new XO laptop (OLPC). It is fairly small, has a rubber keyboard and a very good screen. It can be used in laptop and e-book mode. A colleague described it as somewhere between a mobile phone and a notebook computer – at first I did not get it, but after using it I fully understand.

There is good documentation available – the getting-started manual at laptop.org provides a very good entry point. Getting it up and running was really easy (finding the key for my WiFi access point at home was the most difficult part 😉

There are two interesting wikis with material online at olpcaustria.org and laptop.org. I am looking forward to trying the development environments supplied with the standard distribution (Pippy and Etoys).

I would expect that when Vivien gets up in the morning and sees it, I will be second in line for exploring the XO further. It is really designed in a way that makes it attractive for children. To say more about the usability (in particular of the software) I need to explore it more…

I do not understand why it is so difficult to get them in Europe. I think the buy-one-donate-one approach was very good (but again, this was only available in the US)…