First of all, I want to thank the NUI Group and the PyMT team for their great contribution to the multi-touch world. A few months ago we started building a multi-touch table as part of a special course at DTU (Technical University of Denmark). Our budget was around 700€, excluding the 4000€ HD projector we received for the project. Given the quality of the projector, we essentially designed the whole thing around it, aiming to maximize the visible projected area under the condition that average-sized people wouldn’t have to stand on their toes while using it.
It has been a bumpy road the whole way; we encountered a lot of problems, both expected and unexpected. I’m going to talk a little bit about them as a reference here.
1) We bought 4x 850nm IR lasers and a matching narrow-band filter for the camera, so it would block out all other IR sources. It seemed to work great at first, and even blocked out the majority of the wide-range IR light coming from a bright halogen lamp in the ceiling above the table. But for some weird reason, all IR light in the camera output faded out relative to the distance from the center of the captured frames, with no light captured at all in the corners. The camera was seeing everything normally, but IR light was only visible in a circular region! After lots of head-scratching, and remembering some raytracing theory from a rendering course we had taken, we figured that incoming IR light was simply being rejected by the filter depending on the angle of incidence. We therefore had to resort to a simpler filter: a diskette.
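This angle dependence is a known property of narrow-band interference filters: their passband blue-shifts as light arrives off-normal, so 850nm light hitting the filter at a steep angle (i.e. toward the frame corners) falls outside the band and gets rejected. A small sketch of the standard shift formula, where the effective refractive index `n_eff=2.0` is an assumed typical value, not a measured property of our filter:

```python
import math

def passband_center(wavelength_nm, angle_deg, n_eff=2.0):
    """Effective center wavelength of an interference filter at
    off-normal incidence: lambda(theta) = lambda0 * sqrt(1 - (sin(theta)/n_eff)^2)."""
    s = math.sin(math.radians(angle_deg)) / n_eff
    return wavelength_nm * math.sqrt(1.0 - s * s)

# The passband center drifts away from 850 nm as the angle grows,
# which is why only a central circular region still passed the laser light.
for angle in (0, 15, 30, 45):
    print(f"{angle:2d} deg -> passband centered near {passband_center(850, angle):.1f} nm")
```

With a band only a few nanometers wide, even a modest shift is enough to block the laser light entirely at the edges of the lens's field of view.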
2) For future LLP guys: don’t forget to focus your lasers, and make sure the lens is correctly fitted (see the NUI manual about lasers). We didn’t spot that advice until after we had spent days of frustration wondering why the lasers were producing such FAT planes, WAY above the surface. We used a separate freehand IR camera for focusing them properly (watching the results on screen), along with the faint “debugging light” they emit, for fine-tuning.
3) Another issue was the framerate of the camera, a Logitech C300. Without the filter, the camera ran just fine at 640x480@33fps, but once the filter was applied, the framerate dropped. This indicated that the camera was trying to compensate for the low light while keeping the image quality, the way a webcam normally does. We looked at the camera settings and found nothing about it there. We then decided to install the Logitech drivers, which we had skipped since the camera worked out of the box in Windows and Linux. These gave us access to the more advanced configuration dialog for the camera, where we turned off everything called “Auto”, e.g. “Automatic Gain Control”, “Low Light Boost”, etc., and manually configured the exposure and the gain. Just to be clear, the exposure is simply the time the camera takes to gather light for each frame: if the exposure is 1/33 s, then you get at most ~33 fps. With the gain set so high, the camera picked up plenty of noise, which the CCV tracker filtered out easily with the nice filters available. In the end, our camera ran at 640x400@33FPS, which gives very smooth motion, as you can see in our video.
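The exposure/framerate relation above is worth spelling out, since it explains why auto-exposure tanked our framerate: the sensor can’t start a new frame before the previous one has finished integrating light, so the exposure time puts a hard ceiling on fps. A minimal illustration:

```python
def max_fps(exposure_s):
    """Upper bound on frame rate imposed by exposure time alone:
    a new frame cannot begin until the current exposure finishes."""
    return 1.0 / exposure_s

# Auto-exposure compensating for a dark (filtered) image lengthens the
# exposure, which directly caps the achievable framerate.
for denom in (10, 15, 30, 33, 60):
    print(f"exposure 1/{denom} s -> at most {max_fps(1.0 / denom):.0f} fps")
```

Fixing the exposure at 1/33 s and compensating with gain instead is what got us back to a steady 33 fps, at the cost of noise the tracker then had to filter out.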
4) At some point, everything seemed to work, but the movements appeared to be in very low “resolution”, with a constant step size between position updates. That was because we had simply ignored the “Movement threshold” setting and set it to 20; it should be between 0 and 1. If you can’t find a balance and the touch motion is either too sensitive or too jittery, try using smoothing to reduce the minor movements, and keep the movement threshold as low as you can.
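To illustrate why smoothing plus a low threshold beats a high threshold alone, here is a sketch of the idea using an exponential moving average. This is not CCV’s actual filter code; the function name, `alpha`, and the threshold value are all assumptions for illustration:

```python
def smooth_track(points, alpha=0.5, threshold=0.5):
    """Exponentially smooth a sequence of (x, y) touch positions and
    drop updates whose smoothed motion is below `threshold` pixels."""
    sx, sy = points[0]
    out = [(sx, sy)]
    for x, y in points[1:]:
        # blend the new raw position into the running smoothed position
        sx = alpha * x + (1 - alpha) * sx
        sy = alpha * y + (1 - alpha) * sy
        # a small threshold suppresses jitter without quantizing real motion
        if abs(sx - out[-1][0]) >= threshold or abs(sy - out[-1][1]) >= threshold:
            out.append((sx, sy))
    return out

raw = [(100, 100), (100.2, 99.9), (101, 100.5), (105, 103), (110, 107)]
print(smooth_track(raw))
```

The sub-pixel jitter in the second sample is swallowed, while the real motion that follows passes through smoothly, instead of arriving in coarse 20-pixel steps as it did with our misconfigured threshold.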
Here is the video of our work (CCV 1.3 + PyMT 0.4):