Thanks for the comments, guys!
As mentioned, a clear user group is people suffering from ALS, CP, or similar disabilities. This is where gaze interaction is used today. My work is not aimed directly at their needs, but the components can be configured and placed to suit their individual capabilities.
A major issue with gaze-driven interaction is the lack of a "click". Most solutions employ dwell time, where an object must be fixated for a specific period (450 ms or so) before it activates. This adds stress to the interaction, since everywhere you look something seems to be activating. This is the main concern of my thesis, and the components address it by requiring a second eye movement to make the actual selection.
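To illustrate the difference from plain dwell time, here is a minimal sketch of that two-step idea as a small state machine. The class and its parameter names are hypothetical (not from my actual components): a first fixation only *arms* the component after `activation_ms`, and the selection fires only if a second eye movement reaches the confirm area within `selection_ms`. Plain dwell would fire after the first step alone, which is what causes the "everything activates" stress.

```python
class TwoStepButton:
    """Hypothetical sketch of a two-step gaze component.

    States: idle -> armed -> selected.
    Arming alone never selects; a second eye movement is required.
    """

    def __init__(self, activation_ms=150, selection_ms=450):
        self.activation_ms = activation_ms  # dwell needed to arm
        self.selection_ms = selection_ms    # window for the confirming movement
        self.state = "idle"
        self._entered_at = None
        self._armed_at = None

    def gaze_over(self, now_ms):
        """Gaze sample landed on the component's main area."""
        if self.state == "idle":
            if self._entered_at is None:
                self._entered_at = now_ms
            elif now_ms - self._entered_at >= self.activation_ms:
                self.state = "armed"
                self._armed_at = now_ms

    def gaze_confirm(self, now_ms):
        """Gaze moved to the confirm region (the second eye movement)."""
        if self.state == "armed" and now_ms - self._armed_at <= self.selection_ms:
            self.state = "selected"

    def gaze_away(self, now_ms):
        """Gaze left the component entirely; reset."""
        self.state = "idle"
        self._entered_at = None
        self._armed_at = None
```

The key design point is that looking at something is never enough on its own; the user always has a deliberate second movement in which to opt out.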
Simply put, I take the gaze position (X and Y coordinates) and redirect the mouse position to it. The mouse cursor is then hidden from the interface. The components are activated by MouseOver events, with configurable settings for activation and selection time. Perhaps they could be used for touch as well: one finger to activate the component and a second finger tap to make the selection. In future versions I will consider multiple modalities; a set of generic interface components for drag-and-drop development would be ideal. I'm also looking into the use of EMG sensors (like the OCZ NIA).
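One practical detail with this approach: raw gaze samples jitter, so they are usually smoothed before being handed to the OS cursor API (e.g. SetCursorPos on Windows). The moving-average step below is my assumption for illustration, not a description of the actual components; the window size is likewise made up.

```python
from collections import deque


def smooth_gaze(samples, window=5):
    """Smooth raw (x, y) gaze samples with a simple moving average.

    `samples` is an iterable of (x, y) pixel coordinates from the
    tracker; `window` is an assumed smoothing size. Returns the
    sequence of cursor positions to feed to the OS.
    """
    buf = deque(maxlen=window)
    out = []
    for x, y in samples:
        buf.append((x, y))
        ax = sum(p[0] for p in buf) / len(buf)
        ay = sum(p[1] for p in buf) / len(buf)
        out.append((round(ax), round(ay)))
    return out
```

With something like this in place, the MouseOver machinery of any ordinary GUI toolkit works unmodified, because the components only ever see a (smoothed) cursor position.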
In general, a person's gaze position gives away much of their intention. It does not have to drive the interaction directly, but could be used to make the interface more intelligent and fun. It would be really cool to have games where characters know where you are looking and would, for example, turn around to watch the same object, look into your eyes (and follow them), or adapt the conversation depending on whether you're paying attention.
Another thing I've been thinking about is combining gaze to select objects on large displays (e.g. a wall-mounted LCD) with multitouch (iPhone, tabletop surface..) to manipulate the selected object. If you had a list of albums on the wall, the one you just looked at would instantly appear when you look down at the surface again.
There are many things that could be done. Any other cool ideas come to mind?