Name: Ariel Molina
Education/Qualifications: PhD student
Academic and Industry Background: I studied physics and mathematics, earned a Master's in computer science (thesis on 3D monocular object tracking), and am currently a PhD student in computer science; my work is on markerless 3D human hand posture tracking.
Open source development experience: I created a monocular 3D object tracker which I open-sourced via the local server at my institute. I am very experienced with collaborative tools and version control systems (Git and SVN).
Activity level within the NUI Group Community: I visit on a regular basis and read a lot of posts.
Time working multi-touch, natural user interfaces, HCI and related fields: More than 3 years.
The reason for picking specific project:
I chose NUI because I work on vision-based human-machine interfaces, so it is natural for me to be interested in vision-based techniques. The problems NUI Group faces in finger tracking fall within my direct area of research (object tracking). I am confident that good contributions can be made in refining finger tracking, and even in creating a basis for hand detection.
Finger Detection Improvements (hand detection?)
Finger detection in DI and DSI is a greater challenge than in FTIR because of the reduced contrast and additional artifacts (hand palms, arms). DSI and DI are also preferred for building nice-looking tables (MS Surface uses DI) because there is no need for a compliant surface. I propose to improve finger detection by using Mean Shift to detect "peaks" (see  for a great Mean Shift explanation). This would work well for DSI and DI, where detection of the palms of the hands is a side effect; currently this can be an issue, because plain thresholding is not always the ideal approach and value boosting introduces noise. This kind of finger detection can be used in conjunction with the current contour detection methods to greatly improve accuracy and to separate fingers (Gaussian peaks) from non-fingers and hand palms (mid-level, non-peak regions). Fingers can be seen as peaks in a 3D surface like the ones displayed here (in Spanish, Figura A) . This could also be a basis for palm detection.
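To illustrate the peak-finding idea, here is a minimal pure-NumPy sketch of Mean Shift climbing toward an intensity peak on a synthetic frame (two Gaussian "fingers" plus a dim, broad "palm"); the function name, window radius, and synthetic data are illustrative assumptions, and in practice one would work on the camera image after background subtraction:

```python
import numpy as np

def mean_shift_peak(img, start, radius=8, max_iter=50, eps=0.5):
    """Shift a circular window toward the local intensity peak.

    Repeatedly recenters the window on the intensity-weighted
    centroid of the pixels under it, which converges to a mode
    (a "peak") of the image surface.
    """
    y, x = float(start[0]), float(start[1])
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    for _ in range(max_iter):
        mask = (ys - y) ** 2 + (xs - x) ** 2 <= radius ** 2
        weights = img * mask
        total = weights.sum()
        if total == 0:
            break  # window landed on an empty region
        ny = (ys * weights).sum() / total
        nx = (xs * weights).sum() / total
        done = (ny - y) ** 2 + (nx - x) ** 2 < eps ** 2
        y, x = ny, nx
        if done:
            break
    return int(round(y)), int(round(x))

# synthetic frame: two sharp Gaussian "finger" peaks and a broad dim "palm"
h, w = 64, 64
ys, xs = np.mgrid[0:h, 0:w]

def gauss(cy, cx, sigma, amp):
    return amp * np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2 * sigma ** 2))

frame = gauss(20, 20, 3, 1.0) + gauss(20, 44, 3, 1.0) + gauss(45, 32, 12, 0.4)

print(mean_shift_peak(frame, (16, 16)))  # converges near the finger at (20, 20)
print(mean_shift_peak(frame, (24, 40)))  # converges near the finger at (20, 44)
```

Note how the broad, low-amplitude palm region does not capture the windows: its gradient is shallow compared to the sharp finger peaks, which is exactly why a peak/non-peak split can separate fingers from palms.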
There are two approaches to explore: a rather simplistic one is contour analysis over the already detected blobs; the other is to apply Continuously Adaptive Mean Shift (CamShift). CamShift works on probability maps represented as grayscale images, and luckily finger detection can be seen as a probability map: a finger is more likely to be where there are more pixels, and where the pixels are whiter. Thus finger tracking can be framed as search over a probability map, and the Continuously Adaptive Mean Shift algorithm created by Gary Bradski can be applied. As a bonus, CamShift gives us an estimated window around each finger; we can then restrict detection to those areas and, by boosting probabilities inside them, theoretically get improved finger tracking. Read  and scroll down for an animated explanation (girl painting) of how CamShift works and how it recovers orientation.
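The adaptive-window idea can be sketched as follows. This is a simplified, pure-NumPy illustration of the core CamShift step (mean shift plus a window resized from the zeroth moment), not Bradski's full algorithm; the function name, the `k` scaling constant, and the synthetic probability map are assumptions for the sketch, and OpenCV's `cv2.CamShift` would be the practical choice:

```python
import numpy as np

def camshift_step(prob, window, k=2.0, iters=10):
    """CamShift-style search on a probability map.

    Alternates mean shift (recenter the window on the probability
    mass) with window resizing: the side length is taken proportional
    to sqrt(M00), the total probability under the window, so the
    window grows or shrinks with the tracked region.
    """
    x, y, w, h = window
    H, W = prob.shape
    for _ in range(iters):
        x0, x1 = max(0, int(x)), min(W, int(x + w))
        y0, y1 = max(0, int(y)), min(H, int(y + h))
        roi = prob[y0:y1, x0:x1]
        m00 = roi.sum()  # zeroth moment: total mass under the window
        if m00 == 0:
            break
        ys, xs = np.mgrid[y0:y1, x0:x1]
        cx = (xs * roi).sum() / m00
        cy = (ys * roi).sum() / m00
        # adapt window size to the mass under it (CamShift's key idea)
        side = k * np.sqrt(m00)
        x, y = cx - side / 2, cy - side / 2
        w = h = max(4, int(side))
    return int(x), int(y), int(w), int(h)

# synthetic probability map: one Gaussian "finger" blob
H, W = 64, 64
ys, xs = np.mgrid[0:H, 0:W]
prob = np.exp(-((ys - 30) ** 2 + (xs - 30) ** 2) / (2 * 3.0 ** 2))

# start with a window that only partially covers the blob
print(camshift_step(prob, (22, 22, 16, 16)))  # window recenters on the blob
```

The returned window is exactly the per-finger ROI mentioned above: on the next frame, detection can be restricted to (and probabilities boosted inside) that window.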
I would like to propose myself to work on either of these areas. The first one, I think, has more potential, as it can be the starting point for true hand detection; it is also the one that requires more work. Additionally, I can implement the ROI selection, as this is not a difficult task.
I can elaborate on the methodology of the proposed ideas at your request.