Maybe this is a dumb question.
Looking at touchlib's features, I noticed that the library works on geometric information such as the centroid's position, rotation, movement, and acceleration, plus other data, like gesture class and instance id, more related to the semantics of the interaction. This produces point-based events.
We were wondering whether it makes sense to build richer event metadata containing, for instance, the shape of the detected blobs. We think an FTIR device could detect and fire shape-based events. We are not talking about classifying events by looking at the blob's shape, but about using the shape directly instead of the centroid.
Do you think a UI widget like a button could use this kind of information? For example, consider a finger (or hand) that interacts with a squeezable button but only hits it partially. The button could reason on this and present different behaviors depending, for instance, on the total area of pressure or something like that.
We can't yet imagine many reasonable scenarios, but we keep asking ourselves why we use an FTIR device just like a simple multitouch system rather than as something richer.
Of course this opens up lots of practical issues: the capabilities of the event system, more computational power, more sophisticated dispatching algorithms, and so on.
Thanks in advance for any response, and please forgive my English.