Hello group. I was wondering if anyone has scaled (or done some other math on) the x and y data of blob tracking, on the fly. I thought maybe I could scale such input in Flash — take the touch events and perform some math on the touch positions? Possible? Probable?
Or how about a method of scaling the numbers before they hit OSC?
Basically, it’s an attempt to use more than one webcam for tracking (as mentioned in another post I just found). If I could treat webcam #1 as positioned at the top left (0,0) like normal, but have webcam #2’s data come through as (x+800, y), all of its touches would be shifted 800 pixels to the right, like an extended desktop.
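Just to sketch the idea (a minimal example, not tied to any particular tracker — the camera names, the 800-pixel offset, and the `remap` function are all assumptions for illustration), the per-camera offset math could look like this, applied to each blob position before it gets sent out over OSC:

```python
# Hypothetical sketch: remap each webcam's blob coordinates into one
# shared "extended desktop" coordinate space before sending over OSC.
# Camera IDs and offsets here are made up for illustration.

CAMERA_OFFSETS = {
    "cam1": (0, 0),    # webcam #1 at top left, origin stays at (0, 0)
    "cam2": (800, 0),  # webcam #2, everything shifted 800 px right
}

def remap(cam_id, x, y):
    """Translate a blob position from one camera's local pixel space
    into the combined coordinate space by adding that camera's offset."""
    dx, dy = CAMERA_OFFSETS[cam_id]
    return (x + dx, y + dy)
```

So a blob at (100, 50) on webcam #2 would come out as (900, 50), while the same position on webcam #1 stays (100, 50). (If your tracker sends TUIO-style normalized 0–1 coordinates instead of pixels, you’d scale by each camera’s width first and normalize by the combined width at the end — same idea, extra step.)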
And for the guys who have worked on mouse drivers — is there any possibility of putting this kind of logic into that work?
Thanks for any insight.