3DUI Technologies for Interactive Content by Prof. Yoshifumi Kitamura

In the context of multimodal interaction in ubiquitous computing, Professor Yoshifumi Kitamura presented a Simtech guest lecture on 3D user interface technologies. His research goal is to create 3D display technologies that allow multiple users to interact with the display directly. Users should be able to move in front of the display, and each user should see a different perspective according to his or her location in front of it. He showed a set of rotating (volumetric) displays that allow for visual presentation, but not for interaction.

His approach is based on the IllusionHole, which allows for multiple users and direct manipulation. The idea is to render a separate projection for each user, hidden from the other users by a display mask that physically limits each user's view; together these projections create the illusion of interacting with a single shared object. Have a look at their SIGGRAPH paper for more details [1]. More recent work on this can be found on Yoshifumi Kitamura's web page [2].

Example of the IllusionHole from [2].

Over 10 years ago they worked on tangible user interfaces based on blocks. Their system consists of a set of small electronic components with input and output functionality that can be connected to create larger structures. See [3] and [4] for details and applications of Cognitive Cubes and Active Cubes.

He showed examples of interaction with a map based on the concept of elastic materials. Elastic scroll and elastic zoom allow users to navigate maps in an apparently intuitive way. The mental model is straightforward, as users can imagine the surface as an elastic material, see [5].

One really cool new display technology, presented at last year's ITS, is a furry multi-touch display [6]. This is a must-read paper!

The furry display prototype – from [6].

References
[1] Yoshifumi Kitamura, Takashige Konishi, Sumihiko Yamamoto, and Fumio Kishino. 2001. Interactive stereoscopic display for three or more users. In Proceedings of the 28th annual conference on Computer graphics and interactive techniques (SIGGRAPH ’01). ACM, New York, NY, USA, 231-240. DOI=10.1145/383259.383285 http://doi.acm.org/10.1145/383259.383285
[2] http://www.icd.riec.tohoku.ac.jp/project/displays-and-interface/index.html
[3] Ehud Sharlin, Yuichi Itoh, Benjamin Watson, Yoshifumi Kitamura, Steve Sutphen, and Lili Liu. 2002. Cognitive cubes: a tangible user interface for cognitive assessment. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’02). ACM, New York, NY, USA, 347-354. DOI=10.1145/503376.503438 http://doi.acm.org/10.1145/503376.503438
[4] Ryoichi Watanabe, Yuichi Itoh, Masatsugu Asai, Yoshifumi Kitamura, Fumio Kishino, and Hideo Kikuchi. 2004. The soul of ActiveCube: implementing a flexible, multimodal, three-dimensional spatial tangible interface. Comput. Entertain. 2, 4 (October 2004), 15-15. DOI=10.1145/1037851.1037874 http://doi.acm.org/10.1145/1037851.1037874
[5] Kazuki Takashima, Kazuyuki Fujita, Yuichi Itoh, and Yoshifumi Kitamura. 2012. Elastic scroll for multi-focus interactions. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings ’12). ACM, New York, NY, USA, 19-20. DOI=10.1145/2380296.2380307 http://doi.acm.org/10.1145/2380296.2380307
[6] Kosuke Nakajima, Yuichi Itoh, Takayuki Tsukitani, Kazuyuki Fujita, Kazuki Takashima, Yoshifumi Kitamura, and Fumio Kishino. 2011. FuSA touch display: a furry and scalable multi-touch display. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces (ITS ’11). ACM, New York, NY, USA, 35-44. DOI=10.1145/2076354.2076361 http://doi.acm.org/10.1145/2076354.2076361

SIGCHI Rebuttals – Some suggestions to write them

ACM SIGCHI has in its review process the opportunity for the authors to respond to the comments of the reviewers. I find this a good thing, and to me it has two main functions:

  1. The reviewers are usually more careful in what they write, as they know they will face a response from the authors.
  2. Authors can clarify points that they did not get across in the original submission.

We usually write a rebuttal for all submissions with an average score over 2.0. For lower-ranked submissions it may still be worthwhile if we think we have a chance to counter some of the arguments which we believe are wrong or unfair.

For the rebuttal it is most critical to address the meta-review as well as possible. The primary reviewer will be at the PC meeting, and if the rebuttal wins this person over, the job is well done. The other reviews should be addressed, too.

For all the papers where we write a rebuttal I suggest the following steps (a table may be helpful):

  1. read all reviews in detail
  2. copy out all statements from each review that contain questions, criticism, or suggestions for improvement
  3. for each of these statements make a short version (bullet points, a short sentence) in your own words
  4. sort all the extracted statements by topic
  5. combine all statements that address the same issue
  6. order the combined statements according to priority (highest priority to the primary reviewer)
  7. for each combined statement decide if the criticism is justified, misunderstood, or unjustified
  8. make a response for each combined statement
  9. create a rebuttal that addresses as many points as possible (there is a trade-off between the number of issues one addresses and the detail one can give for each)
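The bookkeeping behind these steps can be sketched in code. This is only a minimal illustration of the suggested table – all names and the example statements are hypothetical, and "primary" is just an assumed label for the meta-reviewer:

```python
from dataclasses import dataclass
from collections import defaultdict

# One record per extracted reviewer statement (steps 2-3 above).
@dataclass
class Statement:
    reviewer: str      # e.g. "primary", "R2" (hypothetical labels)
    topic: str         # short topic label used for sorting/combining (steps 4-5)
    summary: str       # the statement in your own words (step 3)
    verdict: str = ""  # later: "justified" | "misunderstood" | "unjustified" (step 7)

def combine_by_topic(statements):
    """Steps 4-6: group statements by topic, issues raised by the primary first."""
    groups = defaultdict(list)
    for s in statements:
        groups[s.topic].append(s)
    # Highest priority to topics the primary reviewer raised (step 6);
    # Python's sort is stable, so the remaining topics keep their order.
    def priority(item):
        topic, members = item
        return 0 if any(m.reviewer == "primary" for m in members) else 1
    return sorted(groups.items(), key=priority)

# Hypothetical example statements:
statements = [
    Statement("R2", "statistics", "doubts the test is applicable to our data"),
    Statement("primary", "novelty", "delta over prior work unclear"),
    Statement("R3", "novelty", "contribution over related work not stated"),
]

for topic, members in combine_by_topic(statements):
    print(topic, [m.reviewer for m in members])
```

Each combined group then gets one verdict and one response (steps 7–8), which makes it easy to see whether the rebuttal covers all high-priority issues before worrying about the lower-priority ones.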

Point 8 is the core…
There are three basic options:

  • if justified: acknowledge that this is an issue and propose how to fix it
  • if misunderstood: explain the point again and promise to improve the explanation in the final version
  • if unjustified: explain that this point may be disputed and provide additional evidence for why you think it should be as it is

The unjustified ones are the trickiest. We had cases where reviewers stated that the method we used was not appropriate; here a response could be to cite other work that used this method in the same context. Similarly, we had reviewers arguing that the statistical tests we used could not be applied to our data; here we explained in more detail the distribution of the data and why the test is appropriate. Sometimes it may be better to ignore cases where the criticism is unjustified – especially if it does not come from the primary reviewer.

Some additional points

  • be respectful to the reviewers – they put work into reviewing the papers
  • if the reviewers did not understand something – we probably did not communicate it well
  • do not promise unrealistic things in the rebuttal
  • try to answer direct questions with precise and direct answers
  • if you suspect that a reviewer did not read the paper – do not write this directly – try to address the points (and perhaps add a hint that the answer is in the paper, e.g. "… as we already outline in Section X")

If you do not research it – it will not happen?

Over the last days, plans to do research on the use of public data from social networks to calculate someone's credit risk made big news (e.g. DW). The public (as voiced by journalists) and politicians showed strong opposition and declared that something like this should not be done – or, more specifically, that such research should not be done.

I am astonished by the reaction. Do people really think that if there is no research within universities this will not (or does not) happen? If you look at the value of Facebook (even after the last few weeks), it must be very obvious that there is value in social network data which people hope to extract over time…

Personal credit risk assessment (in Germany, the Schufa) is widely used – from selling you a phone contract to lending you money when buying a house. If you believe that we need personal credit risk assessment – why would you argue that it should work on very incomplete data? Will that make it better? I think the logical consequence of the discussion would be to prohibit pricing based on personal credit risk ratings – but this, too, would be very unfair (at least to the majority). Hence the consequence we see now (the research is not done in universities) is probably not doing much good… it just pushes the work to a place where the public sees little of it (and the companies will not publish it in a few years…).

Visiting the Culture Lab in Newcastle

While in the north of England I stopped by the Culture Lab in Newcastle. If the CHI conference is a measure of quality in human-computer interaction research, the Culture Lab is currently one of the places to be – if you are not convinced, have a look at Patrick Olivier's publications. The lab is one of the few places where I think a real ubicomp spirit is left – people are developing new hardware and devices (e.g. mini data acquisition boards, specific wireless sensors, embedded actuators), and interdisciplinary research plays a central role. This is very refreshing to see, especially as so many others in ubicomp have moved to mainly creating software on phones and tablets…

Diana, one of our former students from Duisburg-Essen, is currently working on her master's thesis in Newcastle. She looks into new tangible forms of interaction on tabletop UIs; especially the actuation of controls is a central question. The approach she uses for moving things is, compared to other approaches, e.g. [1], very simple but effective – I am looking forward to reading the paper on the technical details (I promised not to tell any details here). The example application she has developed is in chemistry education.

Some years back, at an earlier visit to the Culture Lab, I had already seen some of the concepts and ideas for the kitchen. Over the last years this has progressed, and the current state is very appealing. I really think the screens behind glass in the black design make a huge difference. Using a set of small sensors they have implemented a set of aware kitchen utensils [2]. Matthias Kranz (back in our group in Munich) worked on a similar idea and created a knife that knows what it cuts [3]. It seems worthwhile to explore the aware artifacts vision further…

References
[1] Gian Pangaro, Dan Maynes-Aminzade, and Hiroshi Ishii. 2002. The actuated workbench: computer-controlled actuation in tabletop tangible interfaces. In Proceedings of the 15th annual ACM symposium on User interface software and technology (UIST ’02). ACM, New York, NY, USA, 181-190. DOI=10.1145/571985.572011 http://doi.acm.org/10.1145/571985.572011 

[2] Wagner, J., Ploetz, T., Halteren, A. V., Hoonhout, J., Moynihan, P., Jackson, D., Ladha, C., et al. 2011. Towards a Pervasive Kitchen Infrastructure for Measuring Cooking Competence. In Proceedings of the International Conference on Pervasive Computing Technologies for Healthcare, 107-114.

[3] Matthias Kranz, Albrecht Schmidt, Alexis Maldonado, Radu Bogdan Rusu, Michael Beetz, Benedikt Hörnler, and Gerhard Rigoll. 2007. Context-aware kitchen utilities. In Proceedings of the 1st international conference on Tangible and embedded interaction (TEI ’07). ACM, New York, NY, USA, 213-214. DOI=10.1145/1226969.1227013 http://doi.acm.org/10.1145/1226969.1227013