DIY automotive UI design – or how hard is it to design for older people

The picture does not show a research prototype – it shows the actual interior of a fairly recent 5-series BMW. The driver (an elderly lady) adapted the UI to suit her needs. This modification includes labeling the controls that are important to her, writing instructions for more complicated controls next to them (thereby implementing one of the key ideas of embedded information [1]), and covering controls that are useless to her.

At first I assumed this was a prank* – but it seems to be genuine, which makes it really interesting and carries important lessons with regard to designing for drivers aged 80 and older. Having different skins (not just for GUIs, but in a physical form) as well as UI components that can be composed (e.g. based on user needs) seems challenging in the embedded and tangible domain, but may open new opportunities for customized UIs. Perhaps investigating ideas for personalizing physical user interfaces – and in particular car UIs – may be an interesting project.

[1] Albrecht Schmidt, Matthias Kranz, Paul Holleis. Embedded Information. UbiComp 2004, Workshop "Ubiquitous Display Environments", September 2004. http://www.hcilab.org/documents/EmbeddedInformationWorkshopUbiComp2004.pdf

* will try to get more evidence that it is real 🙂

Application Workshop of KDUbiq in Porto

After having frost and snow yesterday morning in Germany, being in Porto (Portugal) is quite a treat. The KDubiq application workshop runs in parallel to the summer school, and yesterday evening it was interesting to meet up with some of the people teaching there.

The more I learn about data mining and machine learning, the more I see even greater potential in many ubicomp application domains. In my talk "Ubicomp Applications and Beyond – Research Challenges and Visions" I looked back at selected applications and systems that we have developed over the last 10 years (have a look at the slides – I, too, was surprised by the variety of projects we did in recent years ;-). So far we have often used only basic machine learning methods in our implementations – in many cases, creating a version 2 of these systems, where machine learning research is brought together with ubicomp research and new technology platforms, could make a real difference.

Alessandro Donati from ESA gave a talk "Technology for challenging future space missions" which introduced several challenges. He explained their approach to introducing technology into mission control: the basic idea is that the technology providers create a new application or tool together with the users. He strongly argued for a user-centred design and development process. It is interesting to see that user-centred development processes are becoming more widespread and are moving beyond classical user interfaces into complex system development.

User-generated tutorials – implicit interaction as basis for learning

After inspiring discussions during the workshop and in the evening I reconsidered some ideas for tutorials generated automatically from user interaction. The basic idea is to capture application usage continuously (e.g. using usa-proxy and screen capture) – hard disks are nowadays big enough 😉 Using query mechanisms and data mining, a user can then ask for a topic and gets back samples of actual use related to this situation. This raises some privacy questions, but I think it could lead to a new way of creating e-learning content… maybe a project topic?
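A minimal sketch of what this could look like (all names and the storage scheme are hypothetical, and the actual capture via usa-proxy or a screen recorder is assumed to happen elsewhere): interaction events are logged continuously and can later be queried by topic.

```python
# Hypothetical sketch: continuously log captured interaction events and later
# retrieve examples of use for a given topic. The capture itself (usa-proxy,
# screen recording) is assumed to run separately; this only stores and queries.
import sqlite3
import time

db = sqlite3.connect("usage_log.db")
db.execute("""CREATE TABLE IF NOT EXISTS events
              (ts REAL, app TEXT, action TEXT, screenshot TEXT)""")

def log_event(app, action, screenshot_path):
    """Store one captured interaction event."""
    db.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
               (time.time(), app, action, screenshot_path))
    db.commit()

def find_examples(topic, limit=5):
    """Very naive 'mining': return recent events whose action mentions the topic."""
    cur = db.execute(
        "SELECT ts, app, action, screenshot FROM events "
        "WHERE action LIKE ? ORDER BY ts DESC LIMIT ?",
        ("%" + topic + "%", limit))
    return cur.fetchall()

# Example: ask for usage samples related to inserting a table
log_event("writer", "insert table 3x4 via Table menu", "shots/0001.png")
for row in find_examples("insert table"):
    print(row)
```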

Advertising 2.0 – Presentation at CeBIT

This morning I presented a short talk on new ways of outdoor advertisement at CeBIT. Based on results from interviews we know that shop windows and billboards are a well-received medium. Similar to the measures we know from web log files, I argue that it would be interesting to have information on visitors, views, return visits, and viewing durations for real-world adverts.

The general approach we suggest is to use sensors to measure activity – and as such measurements are incomplete, there is a need for data analysis and models. In two examples I show how this can be done using Bluetooth as a sensing device. The slides (in German) are online.
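To give an idea of the sensing part (this is only an illustrative sketch, not our actual implementation, and assumes a machine with a Bluetooth adapter and the PyBluez library): repeated inquiry scans can give a rough estimate of how many discoverable devices – and thus, very roughly, how many passers-by – are near an advert, and for how long.

```python
# Illustrative sketch: estimate audience presence near an advert by repeatedly
# scanning for discoverable Bluetooth devices (assumes PyBluez and an adapter).
import time
import bluetooth

seen = {}  # device address -> (first_seen, last_seen)

for _ in range(10):  # e.g. ten scan rounds per measurement window
    now = time.time()
    for addr in bluetooth.discover_devices(duration=8, lookup_names=False):
        first, _ = seen.get(addr, (now, now))
        seen[addr] = (first, now)

# Crude measures analogous to web log metrics: unique visitors and dwell time.
print("unique discoverable devices:", len(seen))
for addr, (first, last) in seen.items():
    print(addr, "dwell ~%.0f s" % (last - first))
```

Only a fraction of passers-by carry discoverable devices, which is exactly why such raw counts need the data analysis and models mentioned above to extrapolate to actual audience numbers.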

TEI08 Proceedings in the ACM DL online, mandatory reading

The proceedings of the 2nd International Conference on Tangible and Embedded Interaction are online in the ACM Digital Library. They are not yet in the search index and there are still a few corrections to make…

Hiroshi Ishii contributed a paper on "Tangible bits: beyond pixels" – the first paper in the proceedings [1] and a great overview of and introduction to the topic. If you are a student starting out on tangible interaction, or if you have students who are, this paper is mandatory reading!

[1] Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 – 20, 2008). TEI ’08. ACM, New York, NY, xv-xxv. DOI= http://doi.acm.org/10.1145/1347390.1347392

CeBIT Demo – always last minute…

Yesterday afternoon I was in Hannover at CeBIT to set up our part of a demo at the Fraunhofer stand (Hall 9, Stand B36). The overall topic of the Fraunhofer presence is "Researching for the people".

After some difficulties with our implementation on the phone, the server and the network (wired and wireless – my laptop showed more than 30 WiFi access points and a Bluetooth scan showed 12 devices) we got the demo going. The demo is related to outdoor advertisement; together with Fraunhofer IAIS we provide an approach to estimate the number of viewers/visitors. On Wednesday I will give a talk at CeBIT to explain some more details.

It seems demos are always finished at the last minute…

Visiting the inHaus in Duisburg

This morning we visited the inHaus innovation center in Duisburg (run by Fraunhofer, located on the University campus). The inHaus is a prototype of a smart environment and a pretty unique research, development and experimentation facility in Germany. We got a tour of the house and Torsten Stevens from Fraunhofer IMS showed us some current developments and several demos. Some of the demos reminded me of work we started in Lancaster, but never pushed forward beyond a research prototype, e.g. the load sensing experiments [1], [2].

The inHaus impressively demonstrates the technical feasibility of home automation and the potential of intelligent living spaces. However, beyond that I strongly believe that intelligent environments have to move towards the user – embracing more the way people live their lives and providing support for user needs. Together with colleagues from Microsoft Research and Georgia Tech we are organizing the workshop Pervasive Computing at Home, held as part of Pervasive 2008 in Sydney, which focuses on this topic.

Currently the market for smart homes is still small. But looking at technological advances, it is not hard to imagine that some technologies and services will soon move from "a luxury gadget" to "a common tool". Perhaps wellness, ambient assisted living and home health care are initial areas. In this field we will jointly supervise a thesis project of one of our students over the next months.

Currently most products for smart homes are high quality, premium-priced, and designed for a long lifetime (typically 10 to 20 years). Looking at what happened in other markets (e.g. navigation systems, now sold at 150€ retail including a GPS unit, maps, touch screen and video player), it seems to me there is definitely an interesting space for non-premium products in the domain of intelligent environments.

[1] Schmidt, A., Strohbach, M., Laerhoven, K. v., Friday, A., and Gellersen, H. 2002. Context Acquisition Based on Load Sensing. In Proceedings of the 4th international Conference on Ubiquitous Computing (Göteborg, Sweden, September 29 – October 01, 2002). G. Borriello and L. E. Holmquist, Eds. Lecture Notes In Computer Science, vol. 2498. Springer-Verlag, London, 333-350.

[2] Albrecht Schmidt, Martin Strohbach, Kristof Van Laerhoven, Hans-Werner Gellersen: Ubiquitous Interaction – Using Surfaces in Everyday Environments as Pointing Devices. User Interfaces for All 2002. Springer LNCS.

OLPC – new interface guidelines – no file menu

We have tried several of the applications (called activities) and the basic functions seem OK. Vivien liked it and was quite curious to explore it further. The photos you can take with the built-in camera are similar in quality to a good web cam.

After discussing the Microsoft Vista interface guidelines in the last week of our course on User Interface Engineering, it was really interesting to see the OLPC/Sugar user interface guidelines. Especially the shift away from save/open towards keep and the journal is an enormous change (and hence probably quite hard for people who have used computers before – but obviously it is not really designed for them).

The Measure activity provides basic tools for electronics measurements. The microphone input can be used as a simple oscilloscope, and the USB ports provide 1 A – this makes it really interesting for experimenting; see the hardware reference.
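Just to illustrate the idea of using an audio input as a poor man's oscilloscope (this is a generic desktop sketch, not the code of the Measure activity, and assumes the sounddevice and matplotlib Python packages):

```python
# Generic sketch (not the XO's Measure activity): capture a short window of
# audio samples from the microphone input and plot it like an oscilloscope sweep.
import sounddevice as sd
import matplotlib.pyplot as plt

FS = 48000        # sample rate in Hz
WINDOW = 0.02     # capture 20 ms, roughly one "sweep"

samples = sd.rec(int(FS * WINDOW), samplerate=FS, channels=1)
sd.wait()  # block until the recording is finished

plt.plot([n / FS * 1000.0 for n in range(len(samples))], samples[:, 0])
plt.xlabel("time [ms]")
plt.ylabel("amplitude")
plt.title("microphone input as a simple oscilloscope trace")
plt.show()
```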

OLPC – cute and interesting – but what type of computer is it?

After the conference I finally had some time to try out my new XO laptop (OLPC). It is fairly small, has a rubber keyboard and a very good screen, and can be used in laptop and e-book mode. A colleague described it as somewhere between a mobile phone and a notebook computer – at first I did not get it, but after using it I fully understand.

There is good documentation available – the getting-started manual at laptop.org provides a very good entry point. Getting it up and running was really easy (finding the key for my WiFi access point at home was the most difficult part 😉).

There are two interesting wikis with material online at olpcaustria.org and laptop.org. I am looking forward to trying the development environments supplied with the standard distribution (Pippy and Etoys).

I expect that when Vivien gets up in the morning and sees it, I will be second in line for exploring the XO further. It is really designed in a way that makes it attractive for children. To say more about the usability (in particular of the software) I need to explore it more…

I do not understand why it is so difficult to get them in Europe. I think the buy-one-donate-one approach was very good (but again, it was only available in the US)…

Thought on Keys

Many keys (to rooms and buildings) are still tangible objects, where the tangible properties and affordances imply certain ways of usage. Who has not gotten a hotel key that has to be handed in at reception because it is too big to carry in a pocket? As keys move to the digital domain, many of them lack craft and unique affordances, as they are just plastic cards or RFID tags of a specific form. Moving towards biometric authentication, the key becomes intangible (so we lose options in the design space) but embedded into us (which opens up new possibilities).

The major drawback of physical and tangible keys is that if you do not have the key with you when you are standing in front of the door, it cannot help you – even if you know where the key is and can communicate with the person who has it.
… but thinking back a few days to the visions in Hiroshi Ishii’s keynote, it seems that this is a very short-term problem. With atoms that can be controlled (tangible bits), we could just get the data for the key from a remote location and reproduce it locally. With current technology this already seems feasible – in principle: one person uses a 3D scanner (e.g. embedded in a mobile device that has a camera and connectivity), and the other person has a 3D printer or laser cutter. Still, the question remains whether moving to digital keys would not be much easier.

However, if you do not have the key – even though there is a solution "in principle" – it does not really help 😉