Another idea for using this with multi-touch (MT):
‘Normal’ MT systems require the image and the touch surface to be in pretty much the same place, so that your touch corresponds to the correct image location. But if you know where the user is (and thus the angle and distance they are viewing from), you can separate the two: just compute where the user would see their point of interaction relative to the actual image.
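As a minimal sketch of that correction, assuming the eye, the touch surface, and the display all live in one shared 3D coordinate space (the function name and parameters below are hypothetical): cast a ray from the eye through the finger's contact point and intersect it with the display plane.

```python
import numpy as np

def perceived_point(eye, touch, display_point, display_normal):
    """Where the user perceives their touch on the display:
    intersect the eye -> touch ray with the display plane."""
    eye = np.asarray(eye, float)
    touch = np.asarray(touch, float)
    n = np.asarray(display_normal, float)
    d = touch - eye                       # sight line from eye through the finger
    denom = n.dot(d)
    if abs(denom) < 1e-9:                 # sight line parallel to the display
        return None
    t = n.dot(np.asarray(display_point, float) - eye) / denom
    return eye + t * d                    # 3D point on the display plane

# Example (units in metres): eye half a metre back, display in the z=0 plane,
# touch surface floating at z=0.1.
print(perceived_point(eye=(0.0, 0.3, 0.5),
                      touch=(0.1, 0.0, 0.1),
                      display_point=(0.0, 0.0, 0.0),
                      display_normal=(0.0, 0.0, 1.0)))
# -> [ 0.125 -0.075  0.   ]  (then map back to pixel coordinates)
```

Feed this fresh eye positions from a tracker every frame and the mapping updates automatically as the user moves.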
The basic idea is possible without user tracking, but adding some sort of tracking system would let the setup recalibrate on the fly automatically, so the user can move around freely.
And of course, it could work with any other sufficiently accurate method of tracking the user (facial recognition, for example).