I was talking with Vlado, Larky and Cerupcat on IRC today, and Larky showed me a new way of thinking about this whole multi-touch thing. WIMP is a more than 20-year-old computer interaction model, and it certainly does not work well with brand-new, barely-two-year-old, state-of-the-art technology. Like Larky said, we are used to hitting buttons to do things, as we would in real life, but now it is open to so much more. Buttons, sliders, windows, keyboards: they should all disappear. To [partially] quote Jeff Han, “Once you leverage the technology right, the interface just disappears”. I was thinking about this, and even in nuiman’s Lux/Google Earth demo he uses buttons to navigate around. IMO that’s not an ideal way to do it; you should just pan around with a finger, and so on. You should be able to use the interface with no manual, even with little or no prior computer experience.
So, thinking on the more technical side (the one I am good at), 20-year-old window rendering technologies (Aqua, X11, whatever the Windows one is) will not work well for this. I think we could use the same model, communicating over TCP and thus being completely network-aware (this is what X11 does; no idea about the others). This way we would not even code in support for keyboard and mouse; instead of mouse and keyboard events, we would broadcast touch events system-wide over something like multicast. We could then implement a system like D-Bus (yes, I know there is no D-Bus on Mac OS and Windows) for seamless communication between programs. This would allow for something like: I just got a VoIP call while the user is watching a DVD, so I'll pause the DVD and ask him if he wants to answer it, without closing the movie player.
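To make the "broadcast touch events over multicast" idea concrete, here is a minimal sketch in Python using only the standard library. The group address, port, message format, and function names are all made up for illustration; a real system would more likely use something like the TUIO protocol (which is itself OSC over UDP):

```python
import json
import socket

# Hypothetical group/port for this sketch; any listener joined to
# this group would see every touch event on the system.
MCAST_GROUP = "239.255.0.1"
MCAST_PORT = 5000

def make_sender():
    """UDP socket configured to send multicast with TTL 1 (stays local)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    return sock

def encode_touch(session_id, x, y):
    """Serialize one touch point (coordinates normalized to [0, 1]) as JSON bytes."""
    return json.dumps({"sid": session_id, "x": x, "y": y}).encode()

def decode_touch(payload):
    """Inverse of encode_touch, for the receiving side."""
    return json.loads(payload.decode())

# A tracker would then broadcast each frame with something like:
#   make_sender().sendto(encode_touch(1, 0.5, 0.5), (MCAST_GROUP, MCAST_PORT))
```

Any app (or the window manager) that wants touch input just joins the group and decodes, with no per-app driver code.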
I was thinking we already have a lot of the work done: Community Core is a good tracker and Lux will be a great framework, but we need a layer in between. We can only take the standard one so far. It would be a huge undertaking to write one, but I think it would be the best way to take true advantage of multi-touch technology. Also, given enough people, all tasks are small (an application of Linus's Law to things other than bugs). We have an amazing number and variety of people here: graphic artists, animation people, GUI programmers, back-end programmers, hardware people. If we leverage the power of the community correctly, we can probably pull this off.
A problem with this approach (that I just thought of) is that existing apps would not work. I don't think this is a major drawback, because the whole WIMP setup doesn't work anyway; it just encourages new, fully MT-aware apps to be written.
Great! We will need a ton of people to do this; the more help the better. I was also thinking this would allow us to implement a ZUI to its full potential, which would really bring out the true abilities of multi-touch. A few other things: we could implement gestures at this level, so they could be built into all apps with little or no code on that end. What I think the ultimate goal should be is something that allows for 100% seamless computing, where it kind of "knows what to do" without being an AI, and without being like Windows ("This user has not used this file for 3 days, so obviously they don't want it any more: *deletes important system file*).
Xela, the thing is, WIMP is so widespread because it's practical in a lot of ways. In a lot of apps you will need the rectangular view; you can't have a round Photoshop, can you?
At least if you go for multi-user, you need rectangles. I also agree with the no-button idea, but not entirely.
Take your example of the VoIP call and the DVD: you ask the user if he wants to pause, and he presses what? Buttons are good. The idea with multi-touch is to try to keep it simple; we have a chance to start over. Windows, Mac OS X, even Linux: they are all too crowded with complexity.
Wouldn't it be nice to start web browsing with no bookmarks menu, no nothing, making everything highly intuitive and fun to do?
A video editing app would be really interesting, as would photo editing and 3D design. They would all benefit from multi-touch.
I'll be gone for a week or two on vacation; hopefully that will give me some more inspiration.
Also, here are a few concepts I've been toying with. They are rough drafts and you've seen them, but I'd like the community to share ideas on how to improve them, point out where they are flawed, etc.
Currently, for multi-user interaction and simplicity, the plant-growth one is the best.
Also, things like Tile-ui and the Java Looking Glass project should be sources of inspiration for us.
P.S.: Sorry for the spelling mistakes, but it's 4 AM here and I'm dead tired.
I do agree we must stick with rectangles and buttons. We can use them differently, however: we have to be able to rotate, scale, and even do other odd things to them. If we were not bound to rectangles (I think Compiz does 90% of the work for this), you could grab two parts of a window in the middle and rip it in half. This could duplicate it or actually tear it in two; that's up to the application. How do you get to your bookmarks if there is no menu? The idea of the menu growing out of the finger works, but it's kind of slow and clumsy. Something like that would be nice, though. On this stock Ubuntu install I have 6.2 centimeters of menu intruding into my browsing (vs. my Debian/wmii install with about 1; I love Debian :D).
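Rotating and scaling a window with two fingers is mostly a bit of geometry: the scale factor is the ratio of the distances between the two fingers, and the rotation is the change in the angle of the line joining them. A minimal sketch (plain Python; the function name and point format are mine):

```python
import math

def two_finger_transform(p1_old, p2_old, p1_new, p2_new):
    """Given old and new positions of two fingers (as (x, y) tuples),
    return (scale, rotation_in_radians) to apply to the window."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    def angle(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])

    # Fingers moving apart -> scale > 1; moving together -> scale < 1.
    scale = dist(p1_new, p2_new) / dist(p1_old, p2_old)
    # Change in the angle of the line between the fingers.
    rotation = angle(p1_new, p2_new) - angle(p1_old, p2_old)
    return scale, rotation
```

The window manager would apply the result each frame, so any window (rectangular or not) gets rotate/scale for free without the app knowing about it.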
In my opinion, we would want as few UI elements on the desktop (we should stop calling it that, but whatever) as possible. This includes the taskbar. We can get to menus with gestures. I do not know how app switching will work; we will have to be able to distinguish between different users' apps. For managing them we could leverage Compiz's Group feature, which pretty much has this all coded out already.
Tile-ui looks like it could be implemented well; I like their stack idea. Like I was saying in a previous post somewhere else, we need to take the ability to interface with real-world objects to its full potential. Binding a window to an object is one example: I could bind Emacs (a *nix text editor, for those who don't know) to my keyboard, type with a real tactile keyboard (I hate soft keyboards), and if someone else comes to check their email, I can just slide my keyboard over. Emacs would act as if it were one with the keyboard and move with it.
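Trackers speaking TUIO already report tagged physical objects (fiducial markers) with an ID and a position, so window-to-object binding could be a small bookkeeping layer on top of that. A rough sketch, assuming a hypothetical window representation (the class and field names are mine):

```python
class ObjectBinding:
    """Tie a window's position to a tracked fiducial marker, so the
    window follows the physical object (e.g. a tagged keyboard)."""

    def __init__(self):
        self.bindings = {}  # fiducial_id -> (window, dx, dy)

    def bind(self, fiducial_id, window, obj_x, obj_y):
        # Remember the window's offset from the object at bind time,
        # so the window keeps its relative placement as the object moves.
        dx = window["x"] - obj_x
        dy = window["y"] - obj_y
        self.bindings[fiducial_id] = (window, dx, dy)

    def on_object_moved(self, fiducial_id, obj_x, obj_y):
        # Called whenever the tracker reports a new object position.
        if fiducial_id in self.bindings:
            window, dx, dy = self.bindings[fiducial_id]
            window["x"] = obj_x + dx
            window["y"] = obj_y + dy
```

Sliding the tagged keyboard across the table would then drag the bound Emacs window along with it, with no changes to Emacs itself.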
Java Looking Glass and other similar projects have always bothered me. Yes, they look cool and sell fast, but they have no real use; not that I can come up with, anyway, though other people might see one. I think a ZUI is totally the way to go instead, which would allow for great multitasking with no need for virtual desktops.
I like your mockups, but with respect to multi-user I have a few gripes about each. I don't mean to be negative; they are really awesome and you are great with Photoshop, it's just a matter of goals. IMO the menu growing out of the finger would be cool, but too slow to be of real use. Cover Flow is cool, but it rips off Apple too much and is, in my opinion, useless eye candy*. The circular menu looks awesome, but assuming MT grows like computers did, there isn't enough room for all the apps.
Looking at it from a more technical standpoint, I think we can make the existing X11 work, provided we rip out the keyboard and mouse stuff. All the window handling is done at the window manager level, so we could just code TUIO processing into X11 and write our own WM. This would give us pretty much all the customization we need while saving us from writing thousands of lines of code. Apparently X11 _can_ be compiled on Windows; if I get it working, I'll make some binaries.
We should all get on IRC and talk about this; it really would help if this were real time. Also, sorry for not reading this over before I post it, but my hacked X11 just finished compiling and I want to try it out.
All the best,
*Says the Debian user who runs a command line most of the time and, when he doesn't, runs a tiling window manager (wmii) with no graphics acceleration. So my opinion is useless in that context.
OK, got some Wi-Fi here, posting from my PSP. I like the ZUI idea, but it needs to be written from scratch for the desktop, because there is no ZUI desktop project (at least I don't think so); we can't use Zoomorama or Silverlight here.
If you think you're up to the task, then I'm all for designing it, but it's going to require a few coders; you can't do this by yourself quickly.
I am thinking we can just write a window manager built on X11, not a whole new windowing system. X11 does not handle windows or anything like that, so it would probably give us enough freedom to write whatever we need. I am not sure about keeping Compiz working, though; I think that is something we do want. We will definitely need more people to develop it, but I don't know how many we are going to get. Maybe if we gave people free tables… I think we should start by taking the X11 code and removing the mouse dependency (keep the keyboard, I like it :D), then add TUIO support so touches are broadcast in the same format as mouse coordinates. That way existing apps will still work (like Compiz). I do not know if this is possible; maybe MPX would let us do it, or something similar.
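On the "same format as mouse coordinates" point: TUIO reports cursor positions normalized to [0, 1], while anything expecting mouse input wants pixels, so the bridge has to scale by the screen size. A tiny sketch of that mapping (the function name is mine):

```python
def tuio_to_screen(x_norm, y_norm, screen_w, screen_h):
    """Map a TUIO cursor position (normalized to [0, 1]) to pixel
    coordinates, clamped to the screen so out-of-range values
    (e.g. exactly 1.0, or tracker noise) stay on-screen."""
    px = min(max(int(x_norm * screen_w), 0), screen_w - 1)
    py = min(max(int(y_norm * screen_h), 0), screen_h - 1)
    return px, py
```

Each TUIO cursor would go through this before being injected as a pointer position, which is what would let unmodified apps like Compiz keep working.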
We should try to collaborate with the Ubuntu people; then we could do some great stuff. They might finish it if we get a good amount done, and I am sure they could integrate it well. I was aware KDE had a feature like that, but I guess I didn't realize it had uses. I'll install KDE 4.1 on Tuesday and try it out.
So, does anyone else want to help with this? I think there was someone in another thread who was going to convert MPX to TUIO. Once he does that, we're a third of the way there. I still wonder how well MPX works with apps not written for multi-touch, like GIMP or Compiz. I'll try it out today and see if it's really worth it.
MPX is definitely the way to go; then we just need to write our own WM (or even take someone else's and modify it). I'll find the guy who said he was going to add TUIO to MPX and see if we can all get on IRC sometime.
From the last chat: yes, I think the idea of creating a driver to handle a current environment would be brilliant, but ultimately you'd want a window manager that inherently acts correctly from a multi-touch standpoint.
Yes, definitely. We need to broadcast the touch events so they look like mouse events, then have a window manager written with more multi-touch-aware features, like window rotation and finger-friendly menus. I don't know much about XInput (though it sounds big), but that is what Clutter uses for multi-touch.
We all need to get on IRC sometime, probably next week when Vlado gets back.