Ripples: Utilizing Per-Contact Visualizations to Improve User Interaction with Touch Displays
Posted: 05 October 2009 11:55 PM

Here is a paper on “Ripples”, which are the Microsoft Surface auras/contact visualizations introduced in Surface SDK SP1.

http://research.microsoft.com/en-us/um/people/benko/publications/2009/Ripples%20UIST09.pdf

It is a really excellent paper. Robert Levy (rlevy) was one of the co-authors. It covers the reasoning behind the visualizations, how much they improve touch accuracy (significantly, with the improvement growing further over time), and some of the visualizations they tried and rejected.

 Signature 

Joshua Blake
Co-founder & VP of Engineering, Orbbec
http://www.orbbec3d.com

Posted: 06 October 2009 01:56 AM   [ # 1 ]

Very interesting
Thank you

 Signature 

Florian / CTO at So touch
http://www.so-touch.com

Posted: 06 October 2009 02:18 PM   [ # 2 ]

Thanks for the post joshb. It is indeed very interesting.

I'm curious whether the reasoning behind this 'phenomenon' (for lack of a better word) is due to incidental limitations of the software/hardware. The reason I say that is because if we're talking 'direct manipulation', there really shouldn't be an expectation of visual feedback when touching an object (we don't normally get one in the physical world - at least not the type demonstrated here). You don't see this type of feedback on the iPhone, and I often find that putting cues on the table distracts users, but maybe that depends on the application.

The times I've personally found it useful to add such visual elements are when the interaction isn't inherent and the user isn't sure what to do. I'm just wondering if putting such visual cues in will create bad habits down the road: it's natural for people to take time to get used to new devices and interactions, and trying to speed up the learning process by adding visual feedback that only aids 'initial' interaction might not be the way to handle this in the long run.

With that said, Microsoft does this in a much cleverer way, and much of my criticism is aimed at other products I've seen that put circles around every touch point, which only highlights latency and shows the user something they already know (their finger is here).

 Signature 

MTmini, MTbiggie, & Audiotouch creator & Community Core Vision Co-founder

Follow on:
My Blog | Facebook | Twitter | Youtube

Posted: 19 October 2009 01:01 PM   [ # 3 ]

Hey Setch - this is actually one of my features, so I can explain in detail :)

You’re right that in the physical world you usually don’t get visual feedback.  You do get other types of feedback (tactile in particular) that we lack on today’s touchscreens.  This feature attempts to use visuals to make up for the lack of tactile feedback.

The other thing we put a lot of time into is making the feedback very subtle (barely noticeable) when you are touching the system in a 'correct' way (tapping a button, scrolling within the bounds of a list, etc.). The feedback becomes more visible (both size and duration increase) when you touch 'incorrectly' (trying to resize something beyond its max, tapping or dragging on the background of an app instead of an interactive control, etc.). So it's out of the way for expert users, but for novices it lets them know: "yeah, this is a touch screen. Here's how your touch is interpreted. Try touching something else now."

Then for specific controls we put extra stuff into the visuals to make them communicate other info in a passive and contextual way (the exact point under a finger that we use for hit testing, the max size of a ScatterView item, the bounds of a ScrollViewer, etc.).
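
To make that behaviour concrete, here is a minimal sketch of the idea using plain WPF touch events rather than the actual Surface SDK code: the contact visual gets bigger and lives longer when the touch misses the only interactive control, and a small dot marks the point used for hit testing. The specific sizes, durations, and the single-button hit test are illustrative assumptions, not the real Ripples implementation.

```csharp
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Animation;
using System.Windows.Shapes;

public static class ContactFeedbackSketch
{
    [STAThread]
    public static void Main()
    {
        var canvas = new Canvas { Background = Brushes.Black };
        var button = new Button { Content = "Tap me", Width = 120, Height = 60 };
        Canvas.SetLeft(button, 40);
        Canvas.SetTop(button, 40);
        canvas.Children.Add(button);

        var window = new Window { Title = "Contact feedback sketch", Content = canvas, Width = 480, Height = 320 };

        window.TouchDown += (sender, e) =>
        {
            Point p = e.GetTouchPoint(canvas).Position;

            // Did the contact land on an interactive control ("correct") or on the
            // background ("incorrect")? Here a simple hit test against the one button.
            bool hitInteractive = button.InputHitTest(e.GetTouchPoint(button).Position) != null;

            // Subtle feedback for correct touches; larger and longer-lived for misses.
            double size = hitInteractive ? 24 : 60;
            double seconds = hitInteractive ? 0.15 : 0.6;

            // An amorphous-ish blob under the finger...
            var blob = new Ellipse
            {
                Width = size,
                Height = size,
                Fill = new SolidColorBrush(Color.FromArgb(96, 255, 255, 255)),
                IsHitTestVisible = false
            };
            Canvas.SetLeft(blob, p.X - size / 2);
            Canvas.SetTop(blob, p.Y - size / 2);
            canvas.Children.Add(blob);

            // ...plus a small dot marking the exact point used for hit testing,
            // as an example of a passive, contextual cue.
            var dot = new Ellipse { Width = 4, Height = 4, Fill = Brushes.White, IsHitTestVisible = false };
            Canvas.SetLeft(dot, p.X - 2);
            Canvas.SetTop(dot, p.Y - 2);
            canvas.Children.Add(dot);

            // Fade both visuals out and remove them when the animation finishes.
            foreach (var shape in new[] { blob, dot })
            {
                var captured = shape;
                var fade = new DoubleAnimation(1.0, 0.0, TimeSpan.FromSeconds(seconds));
                fade.Completed += (_, __) => canvas.Children.Remove(captured);
                captured.BeginAnimation(UIElement.OpacityProperty, fade);
            }
        };

        new Application().Run(window);
    }
}
```

In a real application you would hit test the whole visual tree and tune the values per control, but the shape of the logic is the same.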

Posted: 19 October 2009 01:53 PM   [ # 4 ]

Thanks for the great explanation; it's not often we get to hear from 'the source.' :)

I guess one of the key things you pointed out is the subtle nature. It can be thought of as a 'you don't necessarily notice it's there, but you'd notice if it was gone' kind of thing. The examples you mention are good use cases, and having specific controls provide specific feedback (as you do) seems like a better implementation than a static approach. I'm curious whether audible cues could be another alternative, or a joint addition, for user feedback. We associate various actions with sounds; have you experimented with this at all to see how effective it is at producing a similar outcome?

Thanks again for the explanation rlevy [msft] ;)

 Signature 

MTmini, MTbiggie, & Audiotouch creator & Community Core Vision Co-founder

Follow on:
My Blog | Facebook | Twitter | Youtube

Posted: 19 October 2009 02:03 PM   [ # 5 ]

Audible cues are definitely worth exploration but what we’ve seen in the past is that the latency between touch + sound is much more noticeable than the latency between touch + animation.  This is one reason why Surface as a whole has very few places which use audio feedback.

Posted: 19 October 2009 02:22 PM   [ # 6 ]

Ah yeah, that’s a good point. Anything more than 60-100ms can already start to disassociate the sound from the action. That would definitely be a concern.

 Signature 

MTmini, MTbiggie, & Audiotouch creator & Community Core Vision Co-founder

Follow on:
My Blog | Facebook | Twitter | Youtube

Posted: 27 October 2009 04:57 AM   [ # 7 ]

Hey Robert,

thanks for the great explanation and also for the great work. Although our research is not as thorough as Microsoft's (how could it be :D), we have made the same observations over time. Users appreciate some guidance while using the MT tables. This is especially useful in "bad" scenarios (unintended touches included), but it also improves the user experience in "good" situations.

In the new version of the Molecules project, developed for ExpoReal 2009, we implemented different kinds of visual feedback elements. This was possible because we had to build different presentations (with different visuals/UI/etc.) for a few clients, all based on the same framework. Some of the visual helpers we used were very apparent (clearly visible cursors and spinning wheels), others were more subtle (not a cursor but a washed-out type of blob, etc.). We found that the more subtle type of feedback was better accepted than the clearly visible kind, which matches the paper: "A particularly surprising outcome of our process was the transition away from simple, minimal circles around the fingers ('halos') to the partially transparent amorphous shape (Figure 13). It was this iteration that largely eliminated complaints of visual overhead and distraction."

Well, this is exactly what we experienced too :D I have uploaded an image sequence showing a basic "tap & hold" action. It is a kind of mixture between amorphous and very clear: you have to tap and hold the touch for a certain amount of time to flip the videos, and once they are flipped they can be manipulated (start/stop/etc.) with "conventional" buttons. I think that today I would abandon the very sharp and clear green circle and would prefer something more transparent and less distracting.
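
For reference, here is a rough, framework-neutral C# sketch of that kind of tap & hold logic (an illustration only, not the actual code behind the attached sequence); the hold time, movement tolerance, and the simulated touch calls in Main are assumptions.

```csharp
using System;

public sealed class TapAndHoldDetector
{
    private readonly TimeSpan _holdTime;
    private readonly double _moveTolerance;   // max drift in pixels before the hold is cancelled
    private DateTime _downTime;
    private double _downX, _downY;
    private bool _tracking;
    private bool _fired;

    // Raised once when the contact has stayed put long enough.
    public event Action<double, double> Held;

    public TapAndHoldDetector(TimeSpan holdTime, double moveTolerance)
    {
        _holdTime = holdTime;
        _moveTolerance = moveTolerance;
    }

    public void OnTouchDown(double x, double y, DateTime time)
    {
        _tracking = true;
        _fired = false;
        _downTime = time;
        _downX = x;
        _downY = y;
    }

    public void OnTouchMove(double x, double y, DateTime time)
    {
        if (!_tracking || _fired) return;

        double dx = x - _downX, dy = y - _downY;
        if (Math.Sqrt(dx * dx + dy * dy) > _moveTolerance)
        {
            _tracking = false;                 // too much drift: treat as a drag, not a hold
            return;
        }
        if (time - _downTime >= _holdTime)
        {
            _fired = true;
            Held?.Invoke(x, y);                // e.g. flip the video item here
        }
    }

    public void OnTouchUp() => _tracking = false;
}

public static class Demo
{
    public static void Main()
    {
        var detector = new TapAndHoldDetector(TimeSpan.FromSeconds(0.8), moveTolerance: 10);
        detector.Held += (x, y) => Console.WriteLine($"Hold detected at ({x}, {y}) - flip the item");

        // Simulate a finger resting nearly still for one second.
        var t0 = DateTime.Now;
        detector.OnTouchDown(100, 100, t0);
        detector.OnTouchMove(102, 101, t0.AddSeconds(0.5));
        detector.OnTouchMove(103, 101, t0.AddSeconds(1.0));   // threshold crossed here
        detector.OnTouchUp();
    }
}
```

A real implementation would also run a timer so a contact that produces no move events at all still triggers, and would track contacts by id, but the thresholding is the same idea.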

I like the visual style described and shown in your paper, and also the 'trails'-like approach to visual feedback when reaching a max/min size during manipulation. I was thinking of implementing something similar in our next work. Is it allowed, by the way, to use the conclusions from your paper in our work, or would we be running into 'intellectual property' issues by doing so?

Cheers,

File Attachments
blobs.zip  (File Size: 1938KB - Downloads: 447)
 Signature 

Sandor Rozsa
--
http://www.xtuio.com - home of uniTUIO: bringing MultiTouch in the 3’rd dimension
http://www.cd-cologne.de - my company homepage

Posted: 04 November 2009 08:37 PM   [ # 8 ]

Thanks for the link to the article, Josh!

 Signature 

Interactive Multimedia Technology
The World Is My Interface
TechPsych
Blogs=Online Filing Cabinets

Seeking Sustainable Innovation
