The MOADtouch Project
Posted: 28 September 2009 11:24 PM

Open your mind. Relax your brain muscles. This will take a little creativity.

Rewind to 1968. Douglas Engelbart is about to put on The Mother of All Demos (MOAD). This time, though, instead of inventing and demoing the first computer mouse, he invents a touch (or even multi-touch) screen and shows off an interface that has more in common with Surface or iPhone than Windows or OS X. <img style="float:right; margin:10px; cursor:pointer; cursor:hand; width:300px; height:200px;" src="http://1.bp.blogspot.com/_X33DmaaF8aA/SrEqoTMAXJI/AAAAAAAAALQ/xmC7hmHIr7A/s400/Firstmouseunderside.jpg" border="0" alt="" id="BLOGGER_PHOTO_ID_5382129901518216338" >

Pretend that mice were never invented. What would today’s computer interfaces look like after 41 years of research, evolution, and industry focus on multi-touch interfaces?

Recently, there have been several articles that show touch in a negative light, with conclusions like these: “Touch is Dead On Arrival” and “[multi-touch] adds little of value.”

Those types of comments come about because when some people think of touch on a PC, they visualize a GUI (mouse + keyboard) application, add some touch to it, and see that the touch is redundant or unnecessary. Well, of course it is! I agree! Who would want to use an application or OS designed for a mouse pointer with ±5-pixel precision with their fat fingers and their ±30-pixel precision? In our imagined alternate MOAD touch demo world, though, every application would be designed to work well with gestures, multi-touch, and fluid, high-frequency touch interaction.
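The precision gap can be made concrete with a back-of-envelope sizing rule (the function and the margin number are my own illustration, not from any platform guideline): a control needs to be at least twice the pointer's positional error, plus a small margin, before it can be hit reliably.

```python
def min_target_px(pointer_error_px, margin_px=2):
    """Minimum reliable hit-target size: the pointer can land up to
    pointer_error_px off-center on either side, plus a safety margin."""
    return 2 * pointer_error_px + margin_px

print(min_target_px(5))   # mouse, ~±5 px  -> 12
print(min_target_px(30))  # finger, ~±30 px -> 62
```

By this rough estimate, finger-friendly controls need to be about five times larger than mouse-friendly ones, which is why bolting touch onto a mouse-era UI feels so clumsy.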

<img id="BLOGGER_PHOTO_ID_5288214753320751586" style="float:left; margin:10px; width:136px; cursor:hand; height:151px;" alt="" src="http://3.bp.blogspot.com/_X33DmaaF8aA/SWODTMup0eI/AAAAAAAAAGM/4-wLmPEoiIM/s200/finger.jpg" border="0" >I’m proposing the MOADtouch project: let’s help people imagine what a NUI world would look like. Take your favorite GUI application, throw out the GUI, and totally reinvent it as a NUI. Pretend touch was the only input device acceptable to the mass market. Forget the windows, icons, menus, and pointers. Forget the rectangles. Start with your fingertips and design out from there.

I want everyone to participate, regardless of individual skills. Write blogs, create wireframes, mock up screenshots, mock up videos or even code up prototype interactions. Describe or show the experience. Do not throw out any possible application just because someone else doesn’t think it would work with touch. Recreate the interface so that it does work.

I want this to be viral. Send this to all your friends and colleagues. Get them all to create and post something simple, even a snippet of an idea. “It’d be cool if XYZ was like this: ...” Twitter it. Blog it. Flickr it.

Tag everything: MOADtouch.

Remember: Open your mind. Relax your brain muscles. This will take a little creativity.

[Cross-posted on my blog: Deconstructing the NUI]

 Signature 

Joshua Blake
Co-founder & VP of Engineering, Orbbec
http://www.orbbec3d.com

Posted: 29 September 2009 03:11 AM   [ # 1 ]

+1

Posted: 29 September 2009 09:35 AM   [ # 2 ]

These articles were written by people who are used to the “traditional” input devices: mouse and keyboard. The current state of multi-touch technology (e.g. the HP TouchSmart) also gives the impression that fingers are merely there to replace the mouse, in that limited sense.

There are things a mouse cannot do naturally: map navigation, 3D viewing, etc. And in a collaborative environment, a mouse just seems useless.

Just my two cents.

Posted: 29 September 2009 10:55 AM   [ # 3 ]

Here is someone’s blog post in response to MOADtouch:

3D NUI (Natural User Interface)

Robin talks about 3D interfaces and open-air gestures.

 Signature 

Joshua Blake
Co-founder & VP of Engineering, Orbbec
http://www.orbbec3d.com

Posted: 29 September 2009 12:54 PM   [ # 4 ]

First, realize that you are not just replacing the mouse. A fully functional MT device should replace three I/O devices: mouse, keyboard, and monitor. The keyboard is tough to replace because it’s pretty efficient and the haptic feedback is invaluable for touch typing. I’m a moderately fast typist (55-65 WPM), but that’s probably about 1/2 to 2/3 as fast as I speak. Voice recognition has been in development for about as long as the mouse, so it can certainly replace the keyboard, which would make it great for things like contextual menus, shift/ctrl/alt functions (or their MT equivalents), and the like. My GF’s myTouch phone has pretty impressive voice recognition capability that makes me wonder why I’m still using a keyboard on my desktop.

Once you’ve successfully integrated voice recognition, the Imprecision Argument pretty much goes away. You’re now using voice commands, which can include a near-limitless library of function calls all on their own, along with gestural input for the spatial manipulation aspects, which is just about as NUI as you can get.

So, let’s take a simple app like a web browser. You can tap an icon or simply say “go to web” or “open firefox” and the app opens to your homepage. Say “favorites” and screenshots of your top-10 visited websites pop up. You tap the one you want and it comes to the front. You quickly resize the window using the pinch move, then you place another finger on the window to “hold” it in place and pinch again to increase the font size. You can quickly scroll up, down, sideways, and diagonally using a single finger. Zooming into a hyperlinked word or image automatically begins to take you to the new page as the new link is superimposed onto the zoomed-in old page. This contextual zooming allows you to quickly peek at the link. Not what you’re interested in? Quickly zoom back to the original page. In this way, you’ve eliminated the need to tap a forward or backward button. You simply zoom forward or backward.
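For a flavor of how the pinch moves above might be detected, here is a minimal sketch (the function name and structure are my own, not from any toolkit): the zoom factor is just the ratio of the distance between the two touch points on the current frame versus the previous frame.

```python
import math

def pinch_scale(prev_pair, curr_pair):
    """Zoom factor implied by a two-finger pinch: the ratio of the
    distance between the two touch points now vs. the previous frame."""
    def dist(pair):
        (ax, ay), (bx, by) = pair
        return math.hypot(ax - bx, ay - by)
    d_prev = dist(prev_pair)
    return dist(curr_pair) / d_prev if d_prev else 1.0

# Fingers move from 100 px apart to 150 px apart: zoom in 1.5x.
print(pinch_scale(((0, 0), (100, 0)), ((0, 0), (150, 0))))  # 1.5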

Want to listen to some music? As you browse the web, say “play song - marley”. The program is intuitive: it knows right away that you didn’t specify a song and that you have song files by both Bob and Damian Marley. However, you only ever call Bob by his last name, so it picks a random song from the Legends album and starts playing at a low volume. You want it louder, so you say “volume” and a knob appears on the screen. You turn the knob up to full blast, and as soon as you take your fingers off the screen it quickly fades away without the need to click a “close” button.
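The disambiguation step described above ("marley" resolving to Bob rather than Damian) could be sketched like this. Everything here is hypothetical illustration: the function, the artist library, and the play counts are made up to show the idea of matching a voice query against names and breaking ties by habit.

```python
def resolve_artist(query, artists, play_counts):
    """Pick the artist the user most likely meant: among names that
    contain the query, prefer the one the user plays most often."""
    q = query.lower()
    matches = [a for a in artists if q in a.lower()]
    if not matches:
        return None
    return max(matches, key=lambda a: play_counts.get(a, 0))

print(resolve_artist("marley",
                     ["Bob Marley", "Damian Marley"],
                     {"Bob Marley": 120, "Damian Marley": 8}))  # Bob Marley
```

A real system would weigh recency and context too, but the principle is the same: the voice layer supplies intent, and the app's own data resolves the ambiguity.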

This is how I imagine interacting with my computer in the next 5 years or so. No mouse. No keyboard.

Posted: 29 September 2009 01:36 PM   [ # 5 ]
Coffeegod - 29 September 2009 12:54 PM

First realize is that you are not just replacing the mouse. A fully functional MT device should replace these three I/O devices: mouse, keyboard, monitor. [...] This is how I imagine interacting with my computer in the next 5 years or so. No mouse. No keyboard.

Sure, you can get a computer to respond to voice. Yes, most cellphones take voice commands these days, but do you see people using that feature often? Rarely, if ever. A major problem with voice is that there’s no privacy to your interaction. At the office, in school, in your home office next to your kid’s room, sitting at Starbucks: how often do you want to be blurting out, “Computer: go to google.com!”

As for multi-touch, gestures can certainly mediate many forms of interaction, but coupling them with voice to become the ‘primary’ method of HCI for personal computing seems a bit narrow-sighted. A solution looking for a problem.

Posted: 29 September 2009 01:46 PM   [ # 6 ]
ddiakopoulos - 29 September 2009 01:36 PM

Sure, you can get a computer to respond to voice—yes, most cellphones take voice commands these days, but do you see people using that feature often? Rarely, if ever. A major problem with voice is that there’s no privacy to your interaction. [...]

I already thought of that. Check out the work that’s been done on subvocal speech recognition over the past half decade. Links:

http://en.wikipedia.org/wiki/Subvocal_recognition
http://www.forbes.com/free_forbes/2006/0410/084.html
http://www.youtube.com/watch?v=spFIBtTVtAA

Posted: 29 September 2009 06:00 PM   [ # 7 ]

I wouldn’t say I agree with “First realize is that you are not just replacing the mouse. A fully functional MT device should replace these three I/O devices: mouse, keyboard, monitor,” and Dimitri has some good points. Voice recognition will most likely never be the cure for keyboarding. There are countless reasons for this, not just the privacy issue. People type a good amount of the time, and if I were to replace that typing with voice, I would be hurting by the end of the day; not to mention that I really don’t like using my voice for unnecessary things (I don’t even like talking on the phone). At the end of the video above, he talks about people wanting to feel in control, and therefore they’ll probably always want to hold a device rather than be implanted with one, and I tend to agree. While a mouse isn’t the most natural thing, people often play with one just for the sake of interacting; moving your arm around, rather than just thinking of something, gives people a sense of both control and ‘play.’

The point of future technology isn’t to replace previous technology, but to make the experience more efficient and effective. I think we often get caught up thinking that in order to adopt something new we need to get rid of the old, but that’s not always the case. Everything has its purpose. Jeff Han has said many times that, in his mind, multitouch isn’t meant to replace pen, mouse, and keyboard interaction; it’s just meant to help where multitouch is more effective. Sometimes it’s just not more effective. Using a technology just for the sake of using it doesn’t make sense, just as using a gesture for the sake of using a gesture isn’t efficient or effective. There should be a particular reason that makes it more efficient and effective than using something else.

Lastly, if we’re going to use the term multitouch, then it only applies to multitouch. It doesn’t involve any other aspects such as voice, fiducials, brain wave analysis, etc. That’s a collection of things that don’t embody multitouch itself.

 Signature 

MTmini, MTbiggie, & Audiotouch creator & Community Core Vision Co-founder

Follow on:
My Blog | Facebook | Twitter | Youtube

Posted: 29 September 2009 08:06 PM   [ # 8 ]

Well, I agree that MT is not going to replace the mouse, keyboard, etc. But I think the authors of those articles had exactly that in mind, hence their view that MT is only a cool factor. There are cases, though, like I mentioned before, where a mouse is just too unnatural to use.

Take the ATM, for example: touch technology has made it easier to use and cheaper to maintain. Imagine if an ATM were mouse-driven! The same goes for many POS (point-of-sale) systems; touch technology has made those systems effective to use.

MT technology will make computers ubiquitous, particularly at home. I think Microsoft has the right vision in eventually making Microsoft Surface a home appliance: smart home controller, home information center (maps, weather, security).

Posted: 06 October 2009 01:52 PM   [ # 9 ]

On voice, I agree that it will likely not be a primary interaction method in this next generation of interfaces. It might be useful as a secondary input, but the problem is that for voice to work well, the relationship between computers and users would have to change significantly. Think about when you want to call someone on the phone: you must devote your full attention to the conversation, you must be aware of the social context and whether having a voice conversation is appropriate in that environment, and overall a vocal conversation is a much more formal interaction. You have to force your abstract thoughts into the limitations of words and vocalize them, and the interaction is very linear.

Compare that to how you interact with pen and paper. It is informal; you can play around, doodle, do whatever your private, abstract thoughts want to do. The language is not limited. Your relationship to the paper is very personal. It is the same with PCs right now. The way users interact with a PC reflects their mental state and is not subject to many social constraints, as it usually doesn’t affect or interrupt anyone around you. Almost anywhere you could get away with taking notes on a notepad, you could get away with using a smartphone or laptop.

Seth (cerupcat) - 29 September 2009 06:00 PM

The point with future technology isn’t to replace previous technology, but to make the experience more efficient and effective. [...] There should be a particular reason that makes it more efficient and effective than using something else.

I do agree that the purpose of multi-touch displays isn’t to get rid of the keyboard and mouse. In some cases (mostly kiosks, and embedded devices like a multi-touch remote control) the multi-touch interface could be the sole input. Most of the time it will be a combination, depending upon what is best for the scenario. I don’t think physical keyboards are going away, and mice probably are not either. However, the point of MOADtouch is to get us past the mental block of mouse-based interaction and tradition, and to think about what multi-touch is really good at.

A properly designed multi-touch interface might be completely superior to a mouse-based interface in every way, but we have to get rid of the mouse-cruft ideas and form new ones around multi-touch. Then we will get a fair comparison of the utility of multi-touch versus mouse, and we can make good arguments for the mass adoption of multi-touch hardware.

(Note: until robust shape-changing, force-feedback multi-touch surfaces are available, I assume the keyboard will not be seriously challenged by multi-touch displays on PCs. The touch feedback is too important for rapid text input. A multi-touch display plus a physical keyboard will still be a great combo for NUIs.)

 Signature 

Joshua Blake
Co-founder & VP of Engineering, Orbbec
http://www.orbbec3d.com
