I was invited to give a talk on "Embedded Interaction with Display Environments", discussing human-computer interaction and technology issues in creating interactive display systems. The summer school has a very diverse program, and I have enjoyed listening to my colleagues as much as presenting myself 🙂
In the talk I present a (more or less random) selection of technologies for making display environments interactive. There are the obvious vision-based approaches (see the talk for the references), but I think there are many interesting approaches that are not yet fully explored, including spatial audio location [1], eye tracking, and physiological sensors. Sebastian Boring created a focus-plus-context input by combining different input technologies [2]; this can be especially interesting when scaling interaction up to larger surfaces. Additionally, I think looking at the floor and the ceiling is worthwhile…
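To make the idea of combining input modalities a bit more concrete, here is a minimal sketch (my own illustration, not code from the talk or from [2]): a coarse pointing modality, e.g. body or hand tracking, positions a focus region on a large display, while a fine modality, e.g. a mouse, moves the cursor within that region. All class, method, and parameter names are hypothetical.

# Minimal sketch of focus-plus-context pointer input on a large display.
# Coarse input places the focus region; fine input refines the cursor inside it.
from dataclasses import dataclass

@dataclass
class CombinedPointer:
    display_w: int                 # width of the large display in pixels
    display_h: int                 # height of the large display in pixels
    focus_size: int = 400          # side length of the focus region in pixels
    focus_x: float = 0.0           # top-left corner of the focus region
    focus_y: float = 0.0
    fine_x: float = 0.0            # cursor offset within the focus region
    fine_y: float = 0.0

    def coarse_update(self, nx: float, ny: float) -> None:
        # Coarse modality delivers a normalized position (0..1) on the wall.
        self.focus_x = nx * (self.display_w - self.focus_size)
        self.focus_y = ny * (self.display_h - self.focus_size)

    def fine_update(self, dx: float, dy: float) -> None:
        # Fine modality delivers relative deltas, clamped to the focus region.
        self.fine_x = min(max(self.fine_x + dx, 0), self.focus_size)
        self.fine_y = min(max(self.fine_y + dy, 0), self.focus_size)

    @property
    def cursor(self) -> tuple[float, float]:
        # Absolute cursor position on the large display.
        return self.focus_x + self.fine_x, self.focus_y + self.fine_y

if __name__ == "__main__":
    p = CombinedPointer(display_w=4000, display_h=1500)
    p.coarse_update(0.75, 0.5)     # user points roughly at the right half of the wall
    p.fine_update(12, -3)          # small mouse movement refines the position
    print(p.cursor)

The design choice here is simply that the coarse channel never has to be precise and the fine channel never has to cover the whole surface, which is what makes such combinations attractive for very large displays.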
Please feel free to add further technologies and approaches for creating interactive displays in the comments.
[1] James Scott, Boris Dragovic: Audio Location: Accurate Low-Cost Location Sensing. In: Pervasive Computing: Third International Conference, PERVASIVE 2005, Munich, Germany, May 8-13, 2005. Springer LNCS 3468, pp. 1-18. http://dx.doi.org/10.1007/11428572_1
[2] Sebastian Boring, Otmar Hilliges, Andreas Butz: A Wall-sized Focus plus Context Display. In: Proceedings of the Fifth Annual IEEE Conference on Pervasive Computing and Communications (PerCom 2007), New York, NY, USA, March 2007.
Scratch Input, UIST 2008: http://www.chrisharrison.net/projects/scratchinput/index.html