Sunday, September 5, 2010

Reading #3: “Those Look Similar!” Issues in Automating Gesture Design Advice (Long)

Comments on others

Jonathan Hall

Summary

Quill is a software tool for designing pen gestures that gives the designer important feedback about the gestures being created. Long describes the main functionality of the software, presents empirical results from using it, and offers insights into the challenges faced during development. Quill uses the Rubine algorithm to recognize pen gestures based on training examples supplied by the user. Then, when a new gesture is entered, quill warns the user if one of two things happens: the gesture is very hard for the computer to recognize, or the gesture is perceptually very similar to an existing one and might therefore be difficult for end users to remember. In either case the user should change the gesture to one that is unlike previous gestures or that has more unambiguous features (e.g. sharp corners for one shape, smooth curves for the other). The second kind of warning, however, is based on the human notion of similarity, so the software had to be preloaded with a model of perceptual similarity derived from a series of experiments conducted by the authors.
Finally, Long discusses the challenges faced during development: some in the user interface, some in the implementation, and some in the metrics used to determine similarity. This last area appears to be the one with the most room for improvement.
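Long's actual similarity model was fit to data from human experiments, and the paper does not give its formula. Still, the general idea of warning when two gestures sit too close together in some feature space can be sketched. The features and threshold below are illustrative stand-ins of my own, not quill's:

```python
import math

def _segment_angles(points):
    """Direction (radians) of each stroke segment in a list of (x, y) points."""
    return [math.atan2(p2[1] - p1[1], p2[0] - p1[0])
            for p1, p2 in zip(points, points[1:])]

def features(points):
    """Two crude, scale-invariant shape features: total absolute turning
    (radians) and the number of sharp corners (turns over 60 degrees).
    Hypothetical stand-ins for quill's experimentally derived measures."""
    angles = _segment_angles(points)
    turning, corners = 0.0, 0
    for a1, a2 in zip(angles, angles[1:]):
        # Wrap the angle difference into (-pi, pi] before taking its magnitude.
        d = abs(math.atan2(math.sin(a2 - a1), math.cos(a2 - a1)))
        turning += d
        if d > math.pi / 3:
            corners += 1
    return (turning, float(corners))

def similarity_warning(g1, g2, threshold=1.0):
    """Warn when two gestures are close together in feature space."""
    return math.dist(features(g1), features(g2)) < threshold

line = [(0, 0), (1, 0), (2, 0), (3, 0)]
wobbly_line = [(0, 0), (1, 0.1), (2, 0), (3, 0.1)]
square = [(0, 0), (1, 0), (1, 1), (0, 1), (0, 0)]

print(similarity_warning(line, wobbly_line))  # two near-straight strokes: warn
print(similarity_warning(line, square))       # very different shapes: no warning
```

A real system would use many more features (as Rubine's recognizer does) and calibrate the threshold against human similarity judgments, which is exactly the part Long had to obtain experimentally.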

Discussion

In contrast to the Rubine paper, this article goes less deep into the algorithms and mathematics, since it is more a presentation of a software tool than of the techniques behind it. I think the idea of giving advice to the user is very good, since unambiguous gestures will most likely lead to better recognition results. However, I think the advice may overreach when it tries to advise a human about perception. It is easy and accurate for the computer to give advice about what it can or cannot recognize, but perceiving gestures is a natural human skill that cannot easily be preloaded into software. A better approach might be to let the user rate this kind of feedback ("feedback the feedback"), so the computer can continuously learn what it should judge as perceptually similar.
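The "feedback the feedback" idea is not in the paper, but a minimal sketch of it is straightforward: let each user rating of a perception alert nudge the similarity threshold, so alerts the user finds unhelpful gradually fire less often. The class name, step size, and bounds below are all my own hypothetical choices:

```python
class AdaptiveSimilarityAlert:
    """Hypothetical sketch: adjust the perceptual-similarity warning
    threshold from user ratings of each alert, so the tool slowly learns
    what this particular user perceives as similar."""

    def __init__(self, threshold=1.0, step=0.1, lo=0.2, hi=3.0):
        self.threshold = threshold  # warn when feature distance < threshold
        self.step = step            # how far one rating moves the threshold
        self.lo, self.hi = lo, hi   # keep the threshold in a sane range

    def should_warn(self, distance):
        """Fire a similarity alert for a gesture pair at this distance?"""
        return distance < self.threshold

    def rate_alert(self, helpful):
        """Record the user's rating of an alert that just fired.
        Helpful alerts suggest casting a wider net; unhelpful ones
        suggest the warning is firing too eagerly."""
        if helpful:
            self.threshold = min(self.hi, self.threshold + self.step)
        else:
            self.threshold = max(self.lo, self.threshold - self.step)

alerts = AdaptiveSimilarityAlert()
alerts.rate_alert(helpful=False)   # user dismisses a false alarm
alerts.rate_alert(helpful=False)
print(alerts.should_warn(0.9))     # threshold dropped to 0.8: no warning now
```

Aggregating these ratings across users, as discussed in the comments below, would need some shared store and a policy for merging conflicting ratings, which is a harder design problem than the per-user update shown here.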

3 comments:

  1. I was planning to see their algorithms for measuring similarity, but the paper did not present them, instead pointing to related work. I agree with you: I think it should be people rather than the computer giving the suggestions, as computers are always imprecise.

  2. When you say "feedback the feedback" do you mean that quill should have some feature that allows the user to rate how helpful the alerts are? If so, I think that's a great idea. But how could the authors keep track of an alert's usefulness in every situation to which it applies? I think that may be a challenge to implement. What do you think?

  3. Yes, that is exactly what I meant by "feedback the feedback." You are right, it is probably not easy to implement, and to know whether a perception alert is useful overall, there would have to be a shared database or some other rating-merging mechanism, which may also be nontrivial. Nonetheless, I think it may be worth the effort, knowing that no one can help the perception alerts evolve better than the end user.
