Microsoft Surface Released


[via gizmodo]


Microsoft Surface is a 30-inch display in a table-like form factor that’s easy for individuals or small groups to interact with in a way that feels familiar, just like in the real world. In essence, it’s a surface that comes to life for exploring, learning, sharing, creating, buying and much more. Soon to be available in restaurants, hotels, retail establishments and public entertainment venues, this experience will transform the way people shop, dine, entertain and live.

A video of it in real use: [via on10.net]
Also see:  Top Interactive Displays [via gizmowatch]


1- Touch Surface / Acrylic (DNP HoloScreen?)
2- (4-5) Cameras (IR pass filter)
3- Vista Computer (Wifi, Bluetooth, Wireless USB)
4- Infrared LED Array (IR illuminator)
5- DLP Projector (IR cut filter)
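For a sense of how those pieces work together in the DIY rigs around here: the LED array floods the surface with IR, the cameras sit behind an IR pass filter so they only see fingers and objects as bright blobs, the computer thresholds and tracks those blobs, and the DLP projector (behind its IR cut filter) draws the interface without blinding the cameras. Below is a minimal sketch of that tracking loop in Python with OpenCV; the camera index, threshold and blob-size cutoff are assumptions, and this is the generic community approach, not Microsoft's actual software.

# Minimal IR blob-tracking loop (generic DIY approach, values are guesses).
import cv2

cap = cv2.VideoCapture(0)        # IR camera behind the IR pass filter
THRESH = 200                     # assumed brightness cutoff for touch blobs

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Touches show up as bright spots against a dark background.
    _, mask = cv2.threshold(gray, THRESH, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = [cv2.boundingRect(c) for c in contours
               if cv2.contourArea(c) > 30]
    # 'touches' is what the UI layer drawn by the projector would consume.
    print(touches)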

Responses

Some nice features, sure, but nothing we haven't already made or thought about within the community. I like the design of the table. You can even put stuff on it without interfering with the image. Quite an expensive device though. And it runs Vista, moehaha

Ah, they integrated multi-touch with Reactable and Bluetooth.
That "lay the credit card on the surface to pay" example is hilarious, but that will be the next thing: RFID-enabled credit cards, wuahaha.
So, who's the first to lay hands on an SDK and port it to Linux?

I'm not quite sure how using 4 separate cameras would be ideal. Wouldn't the system that runs these 4 cameras be bogged down that much more, since it's streaming 4 cams as opposed to just 1?

Pascal, I’m working on that SDK. And in the meantime I’m going to be studying up on WPF and XNA, as it’s apparently as close as I can get to that SDK at the moment.

Does anyone know how the IR array coming from near the projector would work?

Here is my idea for multiple cameras: use different cameras for different purposes. Let's say 2 of the cameras are used for triangulating the point of touch, illuminated via FTIR at a specific IR wavelength, and the other 2 use an IR illuminator from the bottom at a different IR wavelength. The second set of cameras would be used only for fiducials. That way we have 2 sets of cameras "seeing" different things. It is absolutely necessary to do the video processing in hardware and not in software; that way you take the load off the CPU.
I'll be starting a similar project as soon as I have the time and the resources for it.
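A rough sketch of that two-role split, assuming two filtered cameras on indices 0 and 1 and a placeholder fiducial decoder (real rigs typically hand that job to something like reacTIVision); the wavelength separation itself would be done by the optical filters, not in software.

# Two camera sets, two jobs: one filtered for the FTIR touch wavelength,
# one for the bottom illuminator that lights the object fiducials.
# Camera indices and decode_fiducials() are placeholders for illustration.
import cv2

touch_cam = cv2.VideoCapture(0)    # sees only FTIR-lit fingertips
marker_cam = cv2.VideoCapture(1)   # sees only the bottom IR illumination

def decode_fiducials(gray):
    # Placeholder: a real fiducial tracker (e.g. reacTIVision) would go here.
    return []

while True:
    ok_t, touch_frame = touch_cam.read()
    ok_m, marker_frame = marker_cam.read()
    if not (ok_t and ok_m):
        break
    # Touch set: a bright-blob threshold is enough, nothing else shows up.
    gray = cv2.cvtColor(touch_frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = [cv2.boundingRect(c) for c in contours]
    # Marker set: only tagged objects matter on this stream.
    markers = decode_fiducials(cv2.cvtColor(marker_frame, cv2.COLOR_BGR2GRAY))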

According to the Ars article on Surface (http://arstechnica.com/news.ars/post/20070530-what-lurks-below-microsofts-surface-a-qa-with-microsoft.html), the multiple cameras are just so they can keep the height of the box down.

“Five cameras were needed because of field angle issues. In order to get the table as low as it is, five cameras are used so that each one can have a small field of view. That translates into better resolution and speed (measured in pixels/second) than a single camera with an exceptionally wide-angle view of the table surface.”

:) I was just going to post that. Using several cameras for different areas of the screen and stitching the captured images together, thereby increasing the resolution.
Thank you for the link.
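For the curious, stitching in its crudest form is just tiling the per-camera frames into one larger image before tracking. The sketch below assumes four cameras in a 2x2 layout and forces each frame to 640x480; a real setup would calibrate each camera and warp its view so the tiles actually line up.

# Crude "stitch" of four small-field-of-view cameras into one composite.
# Layout, resolution and camera indices are assumptions for illustration.
import cv2
import numpy as np

cams = [cv2.VideoCapture(i) for i in range(4)]   # one camera per quadrant

frames = []
for cam in cams:
    ok, frame = cam.read()
    frames.append(cv2.resize(frame, (640, 480)) if ok
                  else np.zeros((480, 640, 3), np.uint8))

top = np.hstack((frames[0], frames[1]))
bottom = np.hstack((frames[2], frames[3]))
composite = np.vstack((top, bottom))   # 1280x960 instead of 640x480

# 'composite' is then tracked as if it came from a single high-res camera.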

Not sure that they use FTIR... isn't the system that doesn't use FTIR called TouchLight?

FTIR is just a way of illuminating a surface.

Flixxy has a more in-depth movie about this. Have a look!

http://www.flixxy.com/microsoft-surface-multitouch-computing-demo.htm

Yeah, we have thought about that, like the splitting of bills at a restaurant and so on. I believe we would also be at that level of responsiveness if we had their budget. Also, they have dedicated teams, whereas most of us do this in our spare time.

Is it just me, or does the MS product look less responsive than other solutions we've already seen?

In some parts of the demo, it looks like the interaction is lagging behind the finger. In others it looks like the table is not sensitive to very soft touches (see the brush, for instance).

I also agree, there is not much here we haven't already seen ;) Plus, I find all the examples stupid. C'mon, is MS expecting to get this thing out before the end of the year? I mean, the visual tag required for all the object recognition doesn't exist anywhere at this time! This means that if anybody comes to this table with their own phone, the table will not recognize it at all (assuming there are several people around with phones too), simply because there is no way to tell which of the devices in the wireless zone (Bluetooth, Wi-Fi) have actually been laid on the table apart from the ones that haven't. So MS is fooling us here with yet another marketing demo :o) Are they trying to FUD Apple? Well... I do think so! Will it be successful? No, I don't think so, at least not with the current version. Because once people get used to the interaction fun, they will get bored and think: "okay... so what?"

To put it simply, this is another example of an MS "good idea" (read "embrace and extend", but without the "extend" part) ;)

@holopix, processing on the GPU is something that I'm really interested in. Some people say it doesn't really matter whether you do it on the CPU or the GPU, but nobody has actually properly benchmarked it, so it's very interesting.
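In that spirit, here is a quick-and-dirty way to get a first number, using OpenCV's transparent OpenCL path (UMat) as a stand-in for "the GPU". The synthetic frame and the pipeline (blur plus threshold) are just examples; treat the result as a rough indication, not a proper benchmark.

# Rough CPU vs GPU (OpenCL) timing of a simple tracking-style pipeline.
# Synthetic frame and iteration count are arbitrary; this is only a sketch.
import time
import cv2
import numpy as np

frame = np.random.randint(0, 256, (480, 640), np.uint8)   # fake camera frame
N = 500

def pipeline(img):
    img = cv2.GaussianBlur(img, (7, 7), 0)
    _, mask = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)
    return mask

t0 = time.time()
for _ in range(N):
    pipeline(frame)                  # plain numpy-backed Mat -> CPU
cpu = time.time() - t0

gpu_frame = cv2.UMat(frame)          # processed via OpenCL where available
t0 = time.time()
for _ in range(N):
    pipeline(gpu_frame)
gpu = time.time() - t0

print("CPU: %.3fs   OpenCL/GPU: %.3fs   (%d frames)" % (cpu, gpu, N))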

What about this? (see multitouch SensitiveTable Skin)

http://naturalinteraction.org

Just read the article on Ars Technica… and they mention that the MS table uses "near-infrared light". I've googled the heck out of that subject and much of it is over my head. But simply put… are the LEDs that I see most of you using (like the Osram SFH485s) producing NIR, or is that a different wavelength altogether?
