Some weeks ago I saw one of the intelligent scales in the wild (i.e. outside the lab) for the first time. At the time I was really impressed by how well it worked (sample size: n=1, product: banana, packaging: no bag, recognition performance: 100%). Last time I was running late, so there was no time to play with it or watch other people using it – but today I had some 5 minutes to invest.
The basic idea of the scale is simple and quite convincing. Customers put their purchase on the scale. A camera guesses what it is, and the selection menu is reduced to the candidates that match the camera's guess. Additionally, there is always a button to show all the options (as in the old version without the camera). In principle, this should make things easier.
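The interaction logic just described can be sketched in a few lines of Python (the product names and function are invented for illustration; this is only a guess at how such a scale might narrow its menu, not the vendor's actual implementation):

```python
# Hypothetical sketch of the scale's menu logic: the camera's candidate
# guesses narrow the on-screen menu, and a "show all" fallback button
# is always available. All names here are invented for illustration.

ALL_PRODUCTS = ["banana", "apple", "tomato", "cucumber", "orange"]

def build_menu(camera_guesses, all_products=ALL_PRODUCTS):
    # Keep only the catalogue items the camera considers plausible,
    # preserving the catalogue's order.
    candidates = [p for p in all_products if p in camera_guesses]
    # If the camera recognised nothing, fall back to the full list.
    menu = candidates if candidates else list(all_products)
    # The full catalogue stays reachable via a dedicated button.
    return menu + ["<show all options>"]

print(build_menu({"banana", "orange"}))
# A confident guess shrinks the menu to two items plus the button;
# an empty guess shows the whole catalogue.
```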
I observed people trying to weigh different fruits and vegetables, in bags and without bags (obviously I tried it myself, too). The recognition did not work very often, but interestingly people did not care much. It looked as if most people did not really realise that this was meant to be an intelligent user interface. They probably just wondered why the display was always showing different things, but as they are intelligent themselves, they found a way to deal with it.
Overall it seems that it does really well on bananas that are not wrapped in a bag (my initial test case) and not too well on many other things. I think the scales are an interesting example of an invisible interface.
It is also a reminder that user tests with very small samples may be utterly misleading.