NeoVisus Gaze Interaction
Posted: 10 June 2008 09:28 AM
Joined  2008-01-30
Total Posts:  24
New Member

Hey guys,

I’ve been working on a gaze-driven interface, the NeoVisus prototype, over the last couple of months.

I released my master’s thesis and a video demo yesterday:
http://www.martintall.com/neovisus/

I’ve been blogging about the topic for a couple of months, gathering pretty much all the interfaces out there:
http://gazeinteraction.blogspot.com

I’ve been thinking about merging multitouch and gaze; it opens up some really neat opportunities for novel interaction methods.

The interface components I’ve built are done in C# using WPF, and they enable drag-and-drop development of future applications.

I’d like some feedback on my work. What do you guys think?

/Martin

Posted: 10 June 2008 11:27 AM   [ # 1 ]
Joined  2007-12-12
Total Posts:  116
Member

Hey Martin,

That is very impressive!  I think it would be really nice to merge MT with Gaze.  Will you make the software available to us anytime soon? :)
Keep up the great work, because that is just amazing!!!!

 Signature 

http://multitouchproject.blogspot.com/

Posted: 10 June 2008 11:34 AM   [ # 2 ]
Joined  2008-06-01
Total Posts:  338
Sr. Member

This is a great interface. Very nice work.
Of course my first thought is how this opens up computing to people with disabilities.

I love the simplicity of your tracking method. IR light reflections. Genius!

Yes, I agree that the two interfaces, eye tracking and multi-touch, could be integrated. The possibilities are endless…

This will change computing the way the invention of the mouse did!

 Signature 

Blobs the likes of which even the Gods have not seen!

Posted: 10 June 2008 11:49 AM   [ # 3 ]
Joined  2007-09-22
Total Posts:  263
Sr. Member

Your implementation is great; I never thought eye tracking could be used for something as precise as your video shows. For people with disabilities this is just incredible.

Though I don’t really see this meshing with multitouch-based inputs. I mean, MT as a whole is not really mature. We have the hardware tracking, we have demo showcases, but nothing really useful. The interfaces for gaze- and MT-based apps would be similar, and one project can take inspiration from the other, but implementing eye tracking with MT wouldn’t bring any real benefit and would just waste development time.

I really like your project; props to you, it’s really impressive, and the kind of work that guys like you do got me into the multitouch domain.

Posted: 10 June 2008 01:51 PM   [ # 4 ]
Joined  2007-04-08
Total Posts:  2539
Dedicated

I think gaze + multitouch definitely has its uses. There are limitations to both technologies, so one benefits the other. Of course, more experimentation would need to be done in both areas to see how both can thrive in the same environment.

 Signature 

MTmini, MTbiggie, & Audiotouch creator & Community Core Vision Co-founder

Follow on:
My Blog | Facebook | Twitter | Youtube

Posted: 10 June 2008 03:08 PM   [ # 5 ]
Joined  2008-01-30
Total Posts:  24
New Member

Thanks for the comments, guys!

As mentioned, a clear user group is people with ALS, CP, or similar disabilities. This is where gaze interaction is used today. My work is not directly aimed at their needs, but the components can be configured and placed to suit their individual capabilities.

A major issue with gaze-driven interaction is the lack of a “click”. Most solutions employ dwell time, where an object has to be fixated for a specific period (450 ms or so). This adds stress to the interaction, since everywhere you look something seems to be activating. This is the main concern of my thesis, and the components address it by requiring a second eye movement to make the actual selection.

Simply put, I take the gaze position, the X and Y coordinates, and redirect the mouse position to it. The mouse cursor is then hidden from the interface. The components are activated by MouseOver events with configurable settings for activation and selection time. Perhaps they could be used for touch as well: one finger to activate the component and a second finger tap to make the selection. In future versions I will consider multiple modalities; a set of generic interface components for drag-and-drop development would be ideal. I’m looking into the use of EMG sensors as well (like the OCZ NIA).
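To make that concrete, here is a very rough sketch of the principle, not the actual NeoVisus components (the GazeCursor and GazeButton names are made up). It assumes the eye tracker already delivers screen coordinates, redirects the cursor with the Win32 SetCursorPos call, and uses a MouseOver timer for activation:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Threading;

public static class GazeCursor
{
    [DllImport("user32.dll")]
    private static extern bool SetCursorPos(int x, int y);

    // Call this from the eye tracker's gaze-sample callback.
    public static void MoveTo(int screenX, int screenY)
    {
        SetCursorPos(screenX, screenY);       // redirect the mouse to the gaze point
        Mouse.OverrideCursor = Cursors.None;  // hide the cursor so it does not distract
    }
}

// A button that raises GazeActivated after the gaze (i.e. the hidden cursor)
// has rested on it for a configurable time.
public class GazeButton : Button
{
    public TimeSpan ActivationTime { get; set; } = TimeSpan.FromMilliseconds(150);

    public event EventHandler GazeActivated;

    private readonly DispatcherTimer _dwell = new DispatcherTimer();

    public GazeButton()
    {
        _dwell.Tick += (s, e) =>
        {
            _dwell.Stop();
            GazeActivated?.Invoke(this, EventArgs.Empty);
        };
        MouseEnter += (s, e) => { _dwell.Interval = ActivationTime; _dwell.Start(); };
        MouseLeave += (s, e) => _dwell.Stop();
    }
}
```

In the real components the activation step is kept separate from the final selection, which is confirmed by a second eye movement rather than by the timer alone.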

In general, knowing the position of someone’s gaze gives away much of their intentions. It does not have to drive the interaction directly but could be used to make the interface more intelligent and fun. It would be really cool to have games where characters know where you are looking and would, for example, turn around to watch the same object, look into your eyes (and follow them), or adapt the conversation depending on whether you’re paying attention.

Another thing I’ve been thinking about is using gaze to select objects on large displays (e.g. a wall-mounted LCD) and then using multitouch (iPhone, table surface…) to manipulate the object. If you had a list of albums on the wall, the one you just looked at would instantly appear when you look down at the surface again.

There are many things that could be done. Any other cool ideas come to mind?

Posted: 10 June 2008 05:38 PM   [ # 6 ]
Joined  2007-09-18
Total Posts:  882
Moderator

Absolutely amazing at first glance!

Can you tell us more about the setup?

 Signature 

How many touches can you simultaneously perform ? 
Coming soon : EveryWall MT / Multi LaserPointers / MT SMS Wall
the WIKI in French

Posted: 10 June 2008 06:49 PM   [ # 7 ]
Joined  2008-01-30
Total Posts:  24
New Member

Jimihertz,

Have a look at the documentation for more info:
http://www.martintall.com/docs/Tall.2008.NeoVisus_GazeInteraction.pdf

It does not go into deep specifics of the actual software development; it’s more about the interaction methods (my area) and the interface components (buttons, menus, etc.). I’m no expert in eye tracking technology per se, but I have gained some insight into the technical area. The eye tracker I’m using comes from SMI. It is a professional product; it is fairly robust and works for about 90% of all people, including those with glasses and contact lenses, but it does come at the associated price.

However, there are projects working towards open source solutions using low-cost hardware (good webcams, etc.). Unfortunately, most work seems to be project-based (theses, etc.) with no ongoing, consistent development.

For a DIY solution:
http://www.inference.phy.cam.ac.uk/opengazer/
http://www.jasonbabcock.com/eyetracking.html
http://www.walterpiechulla.de/ftr_by_wlp_08/doc/index.html

For more info on the image processing algorithms used for eye tracking in general, I suggest:
http://gazeinteraction.blogspot.com/2008/04/eye-tracking-informatics-and.html
http://gazeinteraction.blogspot.com/2008/04/novel-eye-gaze-tracking-techniques.html

The area around the eye is first defined as a Region of Interest (for an example, see EyeFinder). This eye image is then passed on to image processing, e.g. using the OpenCV library to do an elliptic fitting of the pupil. Using IR LEDs creates glints/reflections on the eyes which, in combination with the pupil, increase the accuracy.
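As an illustration only (this is not my pipeline, and the threshold value is just a placeholder), the pupil-fitting step could look roughly like this in C# with Emgu CV, a .NET wrapper for OpenCV:

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

public static class PupilFitter
{
    // eyeRoi: a grayscale image already cropped to the Region of Interest around the eye.
    public static RotatedRect? FitPupil(Mat eyeRoi)
    {
        // The pupil is the darkest blob, so an inverted threshold isolates it.
        var binary = new Mat();
        CvInvoke.Threshold(eyeRoi, binary, 40, 255, ThresholdType.BinaryInv);

        using (var contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.FindContours(binary, contours, null,
                                  RetrType.External, ChainApproxMethod.ChainApproxSimple);

            // Keep the largest dark blob and fit an ellipse to it.
            int bestIndex = -1;
            double bestArea = 0;
            for (int i = 0; i < contours.Size; i++)
            {
                double area = CvInvoke.ContourArea(contours[i]);
                if (area > bestArea) { bestArea = area; bestIndex = i; }
            }

            // FitEllipse needs at least five contour points.
            if (bestIndex < 0 || contours[bestIndex].Size < 5)
                return null;
            return CvInvoke.FitEllipse(contours[bestIndex]);
        }
    }
}
```

The glints from the IR LEDs show up in the same image as small, very bright blobs (a normal threshold instead of an inverted one), and using their positions together with the pupil centre is what improves the accuracy of the gaze estimate.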

I’ve got tons more information that I’m happy to share.

Posted: 10 June 2008 09:28 PM   [ # 8 ]
Joined  2006-11-09
Total Posts:  1499
Administrator

really amazing stuff martintall… congratulations on your work… definitely pioneering :)

I really look forward to getting more time to study gaze tracking… and hopefully formulating a DIY method so we can share results.

 Signature 

~

Posted: 11 June 2008 08:24 PM   [ # 9 ]
Joined  2008-01-30
Total Posts:  24
New Member

I’m getting a good response to this, with several articles and thousands of views. Kinda overwhelming after all those long and lonely days and nights in the lab wondering if it could be done…

I really would like to get this technology out there. Accessibility and cost are key. Applications and interface components are much needed. I’m interested in collaborating with you guys to make it happen; it seems like there are some really talented people in the NUIGroup with skills in the appropriate areas.

It might be bold, but I think it has the potential to change computing as we know it.

Posted: 13 June 2008 06:08 PM   [ # 10 ]
Joined  2007-09-13
Total Posts:  333
Sr. Member

You’ve done a terrific job, martintall.

I just had a quick look at the whole thing, so forgive me if I’m being ignorant or saying anything obvious: you mentioned the click action as being a major concern for this sort of interface, but what if you mix technologies and use voice commands for mouse click actions? The user could verbalise actions like “click”, “double-click”, “drag” or “right-click”.

I just thought about it because sometimes it’s hard to control exactly how long you look at something, or when to blink (or not to).

Any streams of thought in this direction?
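Something like the following is what I have in mind: a tiny command grammar listening in the background while gaze positions the cursor. This is just a rough sketch using .NET’s System.Speech; the SendClick helper is a placeholder for whatever input injection the system would actually use.

```csharp
using System;
using System.Speech.Recognition;

class VoiceClick
{
    static void Main()
    {
        var engine = new SpeechRecognitionEngine();
        var commands = new Choices("click", "double click", "right click", "drag");
        engine.LoadGrammar(new Grammar(new GrammarBuilder(commands)));
        engine.SetInputToDefaultAudioDevice();

        engine.SpeechRecognized += (s, e) =>
        {
            // The gaze tracker has already placed the (hidden) cursor;
            // only the action itself comes from the voice command.
            Console.WriteLine("Heard: " + e.Result.Text);
            // SendClick(e.Result.Text);  // hypothetical mouse-event injection
        };

        engine.RecognizeAsync(RecognizeMode.Multiple);
        Console.ReadLine();  // keep listening until Enter is pressed
    }
}
```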

Posted: 13 June 2008 10:47 PM   [ # 11 ]
Joined  2008-05-12
Total Posts:  8
New Member

This is now on Digg. Please head over to spread the word.

http://digg.com/software/NEW_Eye_Movement_Computer_Control

Posted: 15 June 2008 07:49 PM   [ # 12 ]
Joined  2008-01-30
Total Posts:  24
New Member

GFantini,

Thanks! Yes, I’ve been thinking of additional modalities; in the first version I wanted to push the envelope for gaze only. A potential group of users are people with ALS, CP, or similar conditions, most of whom cannot produce speech or precisely controlled limb movements. On the other hand, I feel that pretty much everyone can benefit from this, so speech is a future candidate for text entry (if you are in an environment where you can / want to talk to your system, that is). It could also work the other way around via speech synthesis, so that people who cannot speak can produce sentences which are articulated by the system.

It would be great to develop components that are flexible enough to support multiple interaction methods: gaze-only, gaze + keyboard, gaze + touch, etc.  Besides incorporating gaze and multitouch, another neat thing would be to use EMG/EEG (like the OCZ NIA, Emotiv, etc.). Typically they allow activations in the 150 ms range, which would allow for hysterically fast computer interaction. I took part in a series of EEG BrainPong sessions while I was at UCSD; it requires rather extensive training (except for facial muscle contractions, which are very distinct).

Posted: 18 June 2008 04:15 PM   [ # 13 ]
Joined  2008-05-23
Total Posts:  30
New Member

Absolutely incredible work. Great job Martin. I will be keeping up with your further progress on this because I am interested in all forms of human computer interaction.

Posted: 21 June 2008 01:55 PM   [ # 14 ]
Joined  2008-02-01
Total Posts:  243
Member

Fantastic, and it would sit well with something we are working on. Maybe a collaboration is possible in the future?

 Signature 

fingerpuk.tumblr.com
