Touché 1.0b2 - TUIO Support, Color Inversion Filter and more
Posted by Georg: 01 September 2008 04:33 PM

Hello,

I just wanted to announce Touché 1.0b2, the second beta of the multitouch environment for Mac OS X that I’ve been working on.

Here’s a quick run-down of the changes:

- TUIO output support
- Color Inversion Filter (for people needing to track dark blobs on a bright background)
- Improved/more flexible background subtraction
- Better support for libdc1394 cameras
- 32/64-bit universal binary (ppc, ppc64, i386 and x86_64)
- Significant performance improvements
- Bugfixes

A more detailed description is available in the Sparkle Release Notes. I’ve also created a screencast of the new features.

You can download Touché 1.0b2 from its homepage or by clicking “Check for updates...” in the main menu.

Just one note: the new framework is API-compatible with the older version, but not ABI-compatible. That means you’ll have to recompile any apps you’ve already written against the new framework, but you won’t have to change your code. Also, the old demo apps won’t work with this release; please use the demo apps that come in the 1.0b2 download zip instead.
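If you’re curious what that distinction means in practice, here’s a contrived C illustration (nothing from the actual framework, just the general idea): a declaration can stay source-compatible while its memory layout changes underneath already-compiled apps.

    #include <stdio.h>

    /* Contrived example: adding a field keeps the API the same (same names,
       same usage in source code) but changes the ABI, because the struct's
       size and memory layout change. */

    typedef struct {
        float x, y;          /* what an old beta might have shipped: 8 bytes */
    } BlobPointOld;

    typedef struct {
        float x, y;
        float area;          /* one new field: 12 bytes, same API, new layout */
    } BlobPointNew;

    int main(void)
    {
        /* An app compiled against the old layout hands the library 8-byte
           structs while the new library reads 12. Recompiling fixes that;
           no source change is needed, since the API didn't change. */
        printf("old: %lu bytes, new: %lu bytes\n",
               (unsigned long)sizeof(BlobPointOld),
               (unsigned long)sizeof(BlobPointNew));
        return 0;
    }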

Thanks for all the positive feedback I’ve received about Touché so far! I hope you like the new version!

Posted by discotek: 04 September 2008 04:58 PM   [ # 1 ]

Hi Georg,

Just wanted to say thanks for Touché; it’s working perfectly with my FTIR board.  Nice to see that there’s some good Mac support out there.  I’m currently evaluating all of the Mac development options for use with some existing software I have.

Posted by thescreamingdrills: 04 September 2008 05:38 PM   [ # 2 ]

What hardware are you using Touché with? I tried r60 last night and the video lagged to the point where it stopped processing entirely. I have an older single-processor 1.6 GHz G5 with 2 GB of RAM. I expect it to be slow, but I was wondering how much more machine I’d need to get it running smoothly.

Posted by Georg: 04 September 2008 05:53 PM   [ # 3 ]
thescreamingdrills - 04 September 2008 05:38 PM

I have an older single-processor 1.6 GHz G5 with 2 GB of RAM. I expect it to be slow, but I was wondering how much more machine I’d need to get it running smoothly.

A single-core G5 isn’t much, but tracking and frame capturing are independent (they run on separate threads), so even if tracking is slow on your system, the video itself shouldn’t be choppy. Choppiness has more to do with the camera and/or the capture resolution: which camera and capture resolution are you using?
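To illustrate what I mean by capture and tracking being decoupled, here’s a generic sketch in plain C with pthreads (not Touché’s actual code, just the pattern): the capture thread always overwrites a shared “latest frame” slot, and the tracking thread grabs whatever is newest, so slow tracking drops frames instead of making the video stutter.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int latest_frame = -1;          /* stand-in for real frame data */

    static void *capture_thread(void *arg)
    {
        int i;
        for (i = 0; i < 100; i++) {
            pthread_mutex_lock(&lock);
            latest_frame = i;              /* overwrite; never wait for the tracker */
            pthread_mutex_unlock(&lock);
            usleep(33000);                 /* ~30 fps */
        }
        return NULL;
    }

    static void *tracking_thread(void *arg)
    {
        int i;
        for (i = 0; i < 20; i++) {
            int frame;
            pthread_mutex_lock(&lock);
            frame = latest_frame;          /* take the newest frame, skip the rest */
            pthread_mutex_unlock(&lock);
            printf("tracking frame %d\n", frame);
            usleep(150000);                /* pretend tracking is slow */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t cap, trk;
        pthread_create(&cap, NULL, capture_thread, NULL);
        pthread_create(&trk, NULL, tracking_thread, NULL);
        pthread_join(cap, NULL);
        pthread_join(trk, NULL);
        return 0;
    }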

Posted by thescreamingdrills: 04 September 2008 06:03 PM   [ # 4 ]

I’m using the Xbox 360 Live camera at 30 fps and 640x480. I tried cutting it down to 320x240, but the results weren’t much better. Maybe I should go down to 15 fps?

Posted by Georg: 04 September 2008 06:10 PM   [ # 5 ]
thescreamingdrills - 04 September 2008 06:03 PM

I’m using the Xbox 360 Live camera at 30 fps and 640x480. I tried cutting it down to 320x240, but the results weren’t much better. Maybe I should go down to 15 fps?

The Xbox 360 camera is super-slow at 640x480 even on fast systems. With a single-core G5, I’d definitely go with a FireWire cam at 320x240; I don’t think anything else will be usable. Mind you, a USB cam sends compressed frames, which are costly to decompress, whereas a FireWire (IIDC) cam delivers uncompressed frames.

Posted by thescreamingdrills: 04 September 2008 06:19 PM   [ # 6 ]

Got it. That makes sense, because my Sony MiniDV is much faster, but it’s more of a pain to deal with. Oh well, thanks!

Posted by discotek: 04 September 2008 09:32 PM   [ # 7 ]
thescreamingdrills - 04 September 2008 05:38 PM

What hardware are you using Touché with? I tried r60 last night and the video lagged to the point where it stopped processing entirely. I have an older single-processor 1.6 GHz G5 with 2 GB of RAM. I expect it to be slow, but I was wondering how much more machine I’d need to get it running smoothly.

The camera I’m using is an IIDC-based FireWire industrial camera.  It’s probably most similar to the Unibrain Fire-i.

I can use it at both 320x240 @ 30 fps and 640x480 @ 30 fps without problems.  However, for some reason it only works in QuickTime mode; the libdc1394 mode causes a crash (though that may be fixed in beta 2, which I haven’t tried yet).  The odd thing is, this is the exact camera I used when I originally developed libdc1394 on Linux, so it should work just fine. ;)

Also, I’m using a 2 GHz Core Duo MacBook Pro (1st gen) with 2 GB of RAM.

Posted by Georg: 05 September 2008 04:48 AM   [ # 8 ]
discotek - 04 September 2008 09:32 PM

I can use it at both 320x240 @ 30 fps and 640x480 @ 30 fps without problems.  However, for some reason it only works in QuickTime mode; the libdc1394 mode causes a crash (though that may be fixed in beta 2, which I haven’t tried yet).  The odd thing is, this is the exact camera I used when I originally developed libdc1394 on Linux, so it should work just fine.

Please try beta 2; I rewrote large parts of the libdc1394 capture source. I previously only had an iSight to test libdc1394 capturing with, but for beta 2 I also had a Unibrain Fire-i.

If beta 2 crashes with your camera too, then please submit a report to the issue tracker and attach the crash log, so that I can have a look at it.

Posted by discotek: 07 September 2008 12:36 PM   [ # 9 ]

I had a chance to try again with beta 2, and it doesn’t crash now.  However, I’m waiting on a projector, since it’s a real pain to recalibrate without one.  So I’ll stick with QuickTime for now until I get my projector next week (likely tomorrow or the day after at the latest).

I do like the extra camera options afforded by libdc1394 (focus, white balance, etc.), so I’ll definitely use it if possible.

Posted by discotek: 08 September 2008 02:19 PM   [ # 10 ]

An update and a request:

I ported some existing software of mine to use Touché for touch input, and it’s working great.  The API is a lot like UIKit/UIResponder in the iPhone SDK, which is nice.

Also, I got my projector today.  I just need to figure out how to mount it properly and then I’ll test out the libdc1394 stuff.

One request: I like that I can get the center point plus the size of the blobs in the events.  However, it would be great if you could add something to the calibration that makes it possible to determine how hard someone is pressing: perhaps a calibration step where the user presses softly, then hard, recording the blob size for each type of touch.  Then, taking these measurements into account, the touch events could carry some information giving the strength of the press.  That would be very useful information to have.
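Roughly what I’m imagining, as a plain C sketch (all the names and numbers here are made up by me, none of this is Touché API):

    #include <stdio.h>

    /* Hypothetical two-point calibration: record the blob area for a
       deliberately soft press and a deliberately hard press, then map
       later blob areas linearly onto a 0..1 pressure value. */

    typedef struct {
        float soft_area;   /* blob area recorded during the "press softly" step */
        float hard_area;   /* blob area recorded during the "press hard" step */
    } PressureCalibration;

    static float pressure_for_area(const PressureCalibration *cal, float area)
    {
        float p;
        if (cal->hard_area <= cal->soft_area)
            return 0.0f;                           /* degenerate calibration */
        p = (area - cal->soft_area) / (cal->hard_area - cal->soft_area);
        if (p < 0.0f) p = 0.0f;                    /* clamp to [0, 1] */
        if (p > 1.0f) p = 1.0f;
        return p;
    }

    int main(void)
    {
        PressureCalibration cal = { 120.0f, 480.0f };   /* areas in pixels^2 */
        printf("pressure: %.2f\n", pressure_for_area(&cal, 300.0f));
        return 0;
    }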

I could try to determine the press strength myself using the blob size, but that would only be valid for my particular setup.  So I think a calibration step in Touché is the way to go.

Posted by Georg: 08 September 2008 02:58 PM   [ # 11 ]
discotek - 08 September 2008 02:19 PM

I could try to determine the press strength myself using the blob size, but that would only be valid for my particular setup.  So I think a calibration step in Touché is the way to go.

I agree that pressure information would be good to have, but I’m hesitant to add it because I don’t think it can be done reliably: for example, if the camera is slightly skewed with respect to the touch surface, blobs at different pressure levels might end up the same size in different regions of the screen. Of course, we could use a mesh of soft/hard pressure calibration points and interpolate, but I don’t think it’s worth the effort.

What you could try, however, is to query the tracking client for the pixels/cm or pixels/inch, calculate the size of a blob in cm/inches rather than pixels, and use that to deduce the touch pressure. This would work on any setup, as long as the projection resolution is properly set up in the “Screen Setup” view.
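Roughly like this, as a plain C sketch (I’ve hard-coded the pixels-per-cm values here; in practice you’d get them from the tracking client):

    #include <math.h>
    #include <stdio.h>

    /* Convert a blob's pixel dimensions into centimeters so that a derived
       pressure estimate no longer depends on the particular camera and
       projector setup. */

    int main(void)
    {
        float pixels_per_cm_x = 16.0f;   /* e.g. 1024 px across a 64 cm surface */
        float pixels_per_cm_y = 15.0f;   /* e.g. 768 px across a 51.2 cm surface */

        float blob_w_px = 24.0f;         /* blob bounding box, in camera pixels */
        float blob_h_px = 18.0f;

        float w_cm = blob_w_px / pixels_per_cm_x;
        float h_cm = blob_h_px / pixels_per_cm_y;
        float area_cm2 = (float)M_PI * (w_cm / 2.0f) * (h_cm / 2.0f); /* ellipse */

        printf("blob: %.2f x %.2f cm, area: %.2f cm^2\n", w_cm, h_cm, area_cm2);
        return 0;
    }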

Posted by discotek: 08 September 2008 03:49 PM   [ # 12 ]

Actually, I wasn’t thinking about the camera skew and the resolution; I was thinking more about the thickness of the silicone used to make the surface compliant and how that affects the size of the blobs.  That, and the actual size of the person’s fingers.

But you’re right, the camera skew would definitely affect it too.  I wonder if it would be possible to determine the inverse transformation that compensates for the fish-eye effect by measuring a dead-center touch (relative to the camera) and then a number of touches in a circle of constant radius around that point.  I guess we’re getting into heavy calculation territory at that point, though…
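For example, with a simple one-parameter radial model it might not even be that much calculation; here’s a back-of-the-envelope C sketch (purely speculative, with lots of simplifying assumptions):

    #include <stdio.h>

    /* One-parameter radial distortion: r_meas = r_true * (1 + k * r_true^2).
       One dead-center touch fixes the center, and a ring of touches at a
       known true radius gives k from a single measured radius. */

    static double estimate_k(double true_r, double meas_r)
    {
        return (meas_r / true_r - 1.0) / (true_r * true_r);
    }

    /* Invert the model for a measured radius by fixed-point iteration. */
    static double undistort_radius(double meas_r, double k)
    {
        double r = meas_r;                    /* initial guess */
        int i;
        for (i = 0; i < 10; i++)
            r = meas_r / (1.0 + k * r * r);
        return r;
    }

    int main(void)
    {
        double k = estimate_k(100.0, 106.0);  /* ring at r=100 measured at 106 */
        printf("k = %g, undistorted radius = %.2f\n",
               k, undistort_radius(106.0, k));
        return 0;
    }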

Posted by Georg: 08 September 2008 06:26 PM   [ # 13 ]
discotek - 08 September 2008 03:49 PM

I wonder if it would be possible to determine the inverse transformation that compensates for the fish-eye effect by measuring a dead-center touch (relative to the camera) and then a number of touches in a circle of constant radius around that point.  I guess we’re getting into heavy calculation territory at that point, though…

It would be possible to calculate the inverse of the lens distortion from the calibration mesh, since the size and aspect ratio of the screen are known. The problem is that the calibration mesh will never be pixel-perfect: the user won’t hit a calibration point dead center during the calibration process. The mesh only needs to be accurate enough to make touches appear under the user’s finger after the camera-to-screen coordinate conversion, and that’s not accurate enough to recover a meaningful lens distortion model.

A better way would be to use a checkerboard pattern placed over the screen, but that would mean users have to print out a pattern, align it over the screen as perfectly as possible, and then have the camera snap a picture. I don’t think it’s worth the effort for multitouch tracking, though: tracking accuracy usually isn’t a problem anyway, and it can be made sufficiently accurate by adding more calibration points, which implicitly accommodates for lens distortion (up to a certain degree).
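To sketch why more calibration points help (plain C, made-up numbers, not the actual tracking code): each cell of the mesh maps camera coordinates to screen coordinates by bilinearly interpolating its four measured corners, so smaller cells leave less residual distortion per cell.

    #include <stdio.h>

    typedef struct { double x, y; } Point;

    /* Bilinear interpolation between the four measured corner points of
       one calibration cell; (u, v) in [0,1] locate a touch inside it. */
    static Point bilerp(Point p00, Point p10, Point p01, Point p11,
                        double u, double v)
    {
        Point p;
        p.x = (1-u)*(1-v)*p00.x + u*(1-v)*p10.x + (1-u)*v*p01.x + u*v*p11.x;
        p.y = (1-u)*(1-v)*p00.y + u*(1-v)*p10.y + (1-u)*v*p01.y + u*v*p11.y;
        return p;
    }

    int main(void)
    {
        /* screen positions measured for one cell's corners during calibration */
        Point p00 = {0, 0}, p10 = {512, 8}, p01 = {4, 384}, p11 = {516, 390};
        Point s = bilerp(p00, p10, p01, p11, 0.5, 0.5);  /* center of the cell */
        printf("cell center maps to screen (%.1f, %.1f)\n", s.x, s.y);
        return 0;
    }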

Posted by Alex: 15 September 2008 01:02 PM   [ # 14 ]

I have tested Touché with a Firefly and it works great at 640x480 and 60 fps.

Very nice

Alex

tangibleinteraction.com

“tangible is the new virtual”

Posted by Nam: 15 September 2008 02:21 PM   [ # 15 ]

Hi,

I have a MacBook with an iSight, and I just wanted to try out your software. I thought I’d only have to run your setup assistant and then it should work, and it did, up to the blob tracking step. Blob tracking really worked, but the problem is that the image was mirrored. My setup (just for testing purposes) is the MacBook’s iSight with two light sources in front of it: when I move the light sources to my right, I see the tracked blobs move to the left on screen. Is this correct, or do I have to mirror the picture horizontally?

Nam
