iTouch - Mac Specific FTIR Driver Solution
Posted: 18 March 2008 10:47 AM
Joined 2008-01-31
Total Posts: 38
New Member

Howdy,

First, I would like to thank the community for all of the information it has provided me regarding getting my screen built. Now, a little about me and my project. My name is Carmen, and my colleague's name is Jon. We are working on a Mac-specific version of an FTIR driver for our senior project, which has been dubbed the iTouch. We chose the project because we were really interested in Jeff Han's research, and we are both very keen on the idea of natural user interfaces. The design process started almost a year ago, around the beginning of May. We decided that a cross-platform application would be nice, but we realized that the task at hand was more of a driver and less of a traditional application. When a hardware company produces a driver, it writes OS-dependent drivers in order to leverage native performance. We wanted to produce a driver that would be extremely powerful and yet use minimal system resources.

After researching, we felt that OS X provided the best tools to help us accomplish our goals. Our driver differs from current solutions in that it uses native OS X tablet events instead of a third-party event system. The image processing APIs OS X offers were another key deciding factor. Apple provides two methods for handling images: vImage and their Core Framework. vImage is an extremely powerful wrapper around low-level C and assembly instructions (SSE and its PPC equivalent) that allows for quick calculations using image kernels. The Core Framework is in a way similar to the OpenCV framework: it allows for easy processing of images with prepackaged functions. When you boil the Core Framework down, you end up back at vImage and a few other low-level libraries. Now to answer a few important questions.
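To give a feel for what an image-kernel pass does (this is a portable, from-scratch sketch, not the vImage API itself), here is the kind of per-pixel convolution vImage accelerates with vector instructions:

```cpp
#include <cstdint>
#include <vector>

// Apply a 3x3 kernel (with a divisor) to an 8-bit grayscale image: the kind
// of per-pixel kernel operation vImage vectorizes. Border pixels are left
// untouched for simplicity. This is a portable illustration only.
std::vector<uint8_t> convolve3x3(const std::vector<uint8_t>& src,
                                 int width, int height,
                                 const int kernel[9], int divisor)
{
    std::vector<uint8_t> dst(src);
    for (int y = 1; y < height - 1; ++y) {
        for (int x = 1; x < width - 1; ++x) {
            int acc = 0;
            for (int ky = -1; ky <= 1; ++ky)
                for (int kx = -1; kx <= 1; ++kx)
                    acc += kernel[(ky + 1) * 3 + (kx + 1)] *
                           src[(y + ky) * width + (x + kx)];
            acc /= divisor;             // normalize, then clamp to 0..255
            if (acc < 0) acc = 0;
            if (acc > 255) acc = 255;
            dst[y * width + x] = static_cast<uint8_t>(acc);
        }
    }
    return dst;
}
```

vImage's real entry points (e.g. vImageConvolve_Planar8) perform the same operation but vectorize the inner loops for you.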

What technologies are behind the iTouch?
Programming Language: Currently it is written in C/C++, but that will change very soon to Obj-C. It is not object-oriented; I am just using vectors from C++.
Video Capture: We are currently using the Sequence Grabber API. I intend to move this over to the QTKit API that arrived with Leopard as soon as it becomes stable.
Image Processing: This is currently a toss-up between the Core Framework and vImage. I would be using vImage because of the immense performance difference, but I have not figured out all of the kernels yet. For now we are using the Core Framework.
Blob Detection: Another colleague and I have written our own blob detection algorithm. It is the only part of the project that is platform-independent.
Events: We are utilizing the Quartz Event Services API (Event Taps) to send our events. Apple provides two types of system tablet events: point and proximity. A proximity event fires when a pen begins hovering over the surface and again when the pen leaves it. A point event represents a touch. When a new finger touches the screen, we fire a proximity event followed by a point event; while that finger remains on the screen, we keep sending point events.
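To make the sequencing concrete, here is a platform-neutral sketch of the per-frame logic. The TabletEvent names and sequenceEvents are illustrative stand-ins rather than the Quartz constants; in the real driver, each emitted event becomes a CGEventPost call:

```cpp
#include <set>
#include <vector>

// Illustrative stand-ins for the Quartz tablet event types.
enum class TabletEvent { ProximityEnter, Point, ProximityLeave };

struct PostedEvent { int fingerId; TabletEvent type; };

// Given the finger IDs seen last frame and this frame, emit the event
// stream described above: a new finger fires ProximityEnter then Point,
// a held finger fires Point, and a lifted finger fires ProximityLeave.
std::vector<PostedEvent> sequenceEvents(const std::set<int>& previous,
                                        const std::set<int>& current)
{
    std::vector<PostedEvent> posted;
    for (int id : current) {
        if (previous.count(id) == 0)
            posted.push_back({id, TabletEvent::ProximityEnter});
        posted.push_back({id, TabletEvent::Point});
    }
    for (int id : previous)
        if (current.count(id) == 0)
            posted.push_back({id, TabletEvent::ProximityLeave});
    return posted;
}
```

Finger IDs here would come from the blob tracker; the driver runs this once per captured frame.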

When can I get my hands on it?
Our project is due on April 24th. I currently do not plan on releasing that version to the public, mostly because it will be a little clumsy on the UI side. If the community will be a little forgiving, I will consider releasing it. I plan on taking the summer to convert the project over to Cocoa and Obj-C so I can make a pretty UI with all of the fancy Cocoa widgets.

When it is released, how will it be released?
I am not 100% sure on this one yet. If I release the first version, I will only be releasing the application itself. When I release the first official version, it will be under some version of the GPL.

I think I have pretty much covered all of the bases. I have a bunch of results data to share with the community, but I have not yet had a chance to compile it. As I mentioned before, our project is due the 24th of April, so this is crunch time for us to get all of the kinks worked out and some demo apps written. I will post more information as it arises. Please feel free to ask any questions; I will do my best to answer them as quickly as possible.

Posted: 18 March 2008 11:40 AM   [ # 1 ]
Joined 2007-05-09
Total Posts: 137
Member

Hi!
This sounds great, and I'm sure a few Mac users will be delighted to hear of more Mac support. Will you release code or details on the blob detection? Do you have a blog with more info on the project?
And good luck!

 Signature 

my weblog
peepfair.com, slowly being developed, please use freely

Posted: 18 March 2008 12:02 PM   [ # 2 ]
Joined 2008-01-31
Total Posts: 38
New Member

Code for the second version will be released once it has been converted to Obj-C. I have never released open source software before, so I need to choose a license that will protect the project's best interests. The blob detection algorithm is essentially a search algorithm. I will post some pseudocode on it later. The approach may at first seem slow, but after optimizing it we managed to surprise ourselves. I will also try to analyze the algorithm and give it a formal big-O evaluation. Unfortunately, I do not have anything documenting our progress. We have been keeping notes about our dos and don'ts, but we have been more concerned with getting the project delivered to our profs for a grade than with publishing it to the community. As soon as the semester is over I will compile a website with all of our info. Until then I will continue posting here.

Feel free to catch me on IRC as macdaddy314.

Posted: 18 March 2008 12:33 PM   [ # 3 ]
Joined 2008-01-31
Total Posts: 38
New Member

/*
 *****************************************************************************
 * Function: computeBlobs
 * Params:
 *        - int* data: A pointer to the array of pixel data.
 *        - int width: The width of the image.
 *        - int height: The height of the image.
 *        - int* blobLocations: A pointer to the blob locations array.
 *        - vector<Blob>* blobStats: Vector of statistics about the blobs
 *                                            found in the image.
 *        - vector<Finger>* fingersfound: Vector that will hold the blobs
 *                                            found in the image.
 *
 * Return: The number of blobs found in the image.
 * Descrip: This algorithm is the brainchild of Mathew A. We are
 *                not claiming that this algorithm is an original one. I only
 *                helped with the debugging and the coding of the algorithm.
 *                I just want to give credit where credit is due.
 *
 *                The comments in the code should describe the algorithm well
 *                enough, but if you are still lost, please read the readme
 *                file.
 *****************************************************************************
*/

There are two helper arrays/vectors in this algorithm. blobLocations is an int array the size of the image; its contents record which blob each pixel is connected to. blobStats is a vector of blob structs; its contents are not the final results. The housekeeping done at the end of the algorithm performs some final calculations and dumps the results into fingersfound.

The details are a little sparse, but this gives the general idea of how it works. There is a reasonable amount of looping involved, but I can assure you it has been optimized to give some surprising performance results.



Algorithm:

  for the height of the image
    for the width of the image
      if the pixel >= threshold
        Is there a blob above me?
          Yes: I must be part of that blob.
            Add myself to that blob.
            Is there a blob to the left of me, and if so, is it different?
              Yes: I need to assimilate it, because it is now part of my blob.
                Start scanning to the left, changing all of its blob numbers
                  to reflect the assimilation.
                Start scanning one row up and to the right, reflecting the
                  assimilation there as well.
        OK, there wasn't a blob above me; is there one to the left of me?
          Yes: I must be part of that blob then.
            Add myself to that blob.
        OK, there wasn't a blob above me or to the left of me: I must be a
          new blob.
  Housekeeping
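For anyone who wants something compilable, here is a simplified, platform-independent C++ rendition of the same scan. One caveat: instead of rescanning rows to assimilate, it records merges in a small union-find table, so treat it as a sketch of the idea rather than our exact code:

```cpp
#include <cstdint>
#include <vector>

// Collapse a chain of merged labels to its root (with path halving).
static int findRoot(std::vector<int>& parent, int a) {
    while (parent[a] != a) a = parent[a] = parent[parent[a]];
    return a;
}

// Scan the image row by row: each bright pixel joins the blob above or to
// the left of it; touching two different blobs records a merge; touching
// none starts a new blob. Returns the number of distinct blobs found.
int computeBlobs(const std::vector<uint8_t>& data, int width, int height,
                 int threshold)
{
    std::vector<int> labels(width * height, 0);   // 0 = background
    std::vector<int> parent(1, 0);                // union-find, slot 0 unused
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            if (data[y * width + x] < threshold) continue;
            int up   = y > 0 ? labels[(y - 1) * width + x] : 0;
            int left = x > 0 ? labels[y * width + x - 1]   : 0;
            if (up == 0 && left == 0) {           // a new blob starts here
                parent.push_back(static_cast<int>(parent.size()));
                labels[y * width + x] = static_cast<int>(parent.size()) - 1;
            } else if (up != 0 && left != 0) {    // bridging two blobs: merge
                int ru = findRoot(parent, up), rl = findRoot(parent, left);
                int lo = ru < rl ? ru : rl, hi = ru < rl ? rl : ru;
                parent[hi] = lo;
                labels[y * width + x] = lo;
            } else {                              // join whichever exists
                labels[y * width + x] = up != 0 ? up : left;
            }
        }
    }
    // Housekeeping: count distinct roots among the labels actually used.
    std::vector<char> seen(parent.size(), 0);
    int blobs = 0;
    for (int l : labels) {
        if (l == 0) continue;
        int r = findRoot(parent, l);
        if (!seen[r]) { seen[r] = 1; ++blobs; }
    }
    return blobs;
}
```

The real version also fills blobLocations and blobStats along the way; this sketch only does the counting so the control flow stays visible.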

Posted: 18 March 2008 03:35 PM   [ # 4 ]
Joined 2008-01-31
Total Posts: 38
New Member

Here are the pics of my results:

http://multitouch.uacodeweaver.com/pics/

This is bare acrylic. I have yet to pour silicone on the screen.

Posted: 27 March 2008 01:11 PM   [ # 5 ]
Joined 2007-11-22
Total Posts: 22
New Member

Whoo… great!
I am a Mac user… when are you releasing the code?

Posted: 01 April 2008 12:12 PM   [ # 6 ]
Joined 2007-11-15
Total Posts: 59
New Member

Mac support would be amazing… Boot Camp sucks…

Posted: 08 May 2008 03:00 PM   [ # 7 ]
Joined 2008-05-07
Total Posts: 2
New Member

This is fantastic mate. Keep up the good work. Look forward to its release.
