GSoC Ideas Page
This page is dedicated to general information and ideas that can be used for student proposals. The ideas listed on this page are just that: ideas. They are deliberately somewhat vague, because you are meant to fill in the details. We don't want to see project proposals that are basically cut-and-pasted from our ideas page. Don't do that! If you do, you will be tossed out very early in the application sorting process. Be creative and extend our project ideas with additional description and your own feedback. Also listed below is contact information for the community and specific mentors.
Proposing Your Ideas
You don't have to use the project ideas listed below. These are projects that we have in mind to start working on, or that we think would be really useful for the NUI Group community. Think of them as starters, even though some developers would love to see them done. If you have your own idea, bring it on. We are always open to innovative new project ideas, so feel free to amaze us with yours. We'll judge applications on their strengths, not on who wants the project done or who thought of it. That said, you might well find some of the project ideas listed below interesting. If you (as a student) have a new idea that is not listed here, feel free to edit this project ideas wiki page and add your idea along with your contact info.
- User Interface and Experience Design (Human Computer Interaction)
- Computer vision (tracking, optical flow, image processing, stereo vision)
- Touch, gesture and stylus input (Bi-manual interaction)
- Machine learning (AI, Neural Nets, Bayesian clustering, genetic algorithms)
- Custom Hardware: Microcontrollers, Sensors and open source hardware projects such as Arduino
- Mobile interaction concepts
Look at the 2013 Project Ideas tab for ideas related to our current software, or propose your own project.
Human Computer Interaction Ideas
- 3D Interaction Conceptual UI/UX - 3D Hand Tracking & Object tracking.
- Brain Computer Interface - algorithms and frameworks
- Eye tracking - Algorithm, framework, and interface widget work
- On-Screen Keyboards - create on-screen keyboard software for OS X, Windows and Linux so no external keyboard is needed to operate regular programs through a multitouch display
- 3D keyboard - Enable text input in volumetric space
- Multisite collaboration using multitouch screens - Write an application/framework that allows multisite collaboration on multitouch screens. This means having many multitouch screens in different locations, connected via the internet or an internal network, with each screen working as a shared desktop with gesture support and synchronization between screens. Have a look at the croqodile project (croqodile.googlecode.com), which uses the Croquet TeaTime protocol for syncing. A solution for any platform would be most welcome.
- Visual feedback methods for training and communicating 2D/3D gestures
- Automatic projector calibration - propose a project for automatic projector calibration used with multitouch screens in DI and FTIR setups
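The multisite-collaboration idea above needs some way to reconcile screen state between sites. A real solution would use something like the TeaTime protocol mentioned there; as a much simpler starting point, here is a last-writer-wins merge sketched in pure Python (the data layout, where each shared object carries a version counter, is an assumption for illustration):

```python
def merge_states(local, remote):
    """Last-writer-wins merge of shared-desktop object states.

    Each state maps object id -> (version, payload); the entry with
    the higher version counter wins. Ties keep the local copy.
    """
    merged = dict(local)
    for oid, (ver, data) in remote.items():
        if oid not in merged or ver > merged[oid][0]:
            merged[oid] = (ver, data)
    return merged
```

Each site would apply this merge on every network update; a production system also needs conflict resolution for concurrent edits, which versions alone don't capture.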
Hardware Specific Ideas
- Stereo camera pairs with a focus on stereo vision
- Continue work on the DIY Multitouch Frame
- Low-cost/Open Source BCI (Brain-Computer Interface) device design
- Open Source Heads-Up Display (HUD) as in Google Glass :)
- Other novel natural sensory interfaces, i.e. haptic (vibrating) compass headband
- Eye tracking device (see eyewriter project)
We don't have an exact template for your project proposal, but here is the information we would like to see in it:
- Name, email, timezone, age, education, blog URL, and location
- Your project proposal, and why you'd like to complete this particular project
- Whether you are an active NUI Group member, and whether you have worked on multitouch, natural user interfaces, HCI or related topics before
- Why you're the best individual to do this work, including your academic and industry background and your open source development experience
- An explanation of your development methodology is a good idea, as well
Be prepared to supplement your proposal text with links to an external site, for example a full version of your proposal as a PDF file. You should also plan to provide an abstract of your proposal, including a brief list of deliverables, via the GSoC web app to ensure that your work receives sufficient review.
2013 Project Ideas
Some ideas might not be suitable to propose as a GSoC project on their own because they are too easy and development wouldn't last 3 months. Remember that you can still select such an idea and combine it with other ideas in order to write a proposal that meets GSoC requirements. Also, some of the project ideas might be a little too hard for one student developer to complete over a summer; in that case you can (and should) limit yourself to particular parts that can be completed. If you remain interested (and we hope you do), you can always work on the remaining features after GSoC – that's our little open source world.
Community Core Frameworks (C/C++ services)
The Community Core frameworks, led by CCV, are the focus of this year's GSoC. Our goal is to have a stable, fully-functional CCV2 release by the end of this year's program, but the other frameworks also have improvements that could be proposed.
Community Core Vision (CCV)
A cross platform computer vision server for high fidelity tracking of objects such as fiducials, hands or fingers. The first release, during GSoC 2008, was met with great community response. In 2009/2010 we continued to develop new tracking techniques and enhanced stability. In 2011 we shipped a multicamera version of CCV with support for the Microsoft Kinect depth sensor. We are currently developing the 2.0 version of CCV (http://nuicode.com/projects/ccv2), which is a complete rewrite. CCV 2.0 is based on nuiFramework (itself based on the Movid project and OpenCV) and uses an unfixed, modular pipeline with a flow-programming GUI for configuration. We are looking to ship CCV 2.0 this GSoC, which includes further developing tracking techniques and ensuring stability and cross-platform/device compatibility.
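To illustrate the modular-pipeline idea in miniature (this is a toy sketch, not CCV2's actual module API; module names and the frame-as-list-of-lists representation are invented for illustration):

```python
class Module:
    """Toy stand-in for a pipeline module; CCV2's real interface differs."""
    def process(self, frame):
        raise NotImplementedError

class Invert(Module):
    """Invert an 8-bit grayscale frame."""
    def process(self, frame):
        return [[255 - v for v in row] for row in frame]

class Threshold(Module):
    """Binarize a frame at a given intensity level."""
    def __init__(self, level):
        self.level = level
    def process(self, frame):
        return [[255 if v >= self.level else 0 for v in row] for row in frame]

class Pipeline(Module):
    """A pipeline is itself a module, so pipelines can be nested or rewired."""
    def __init__(self, *modules):
        self.modules = modules
    def process(self, frame):
        for m in self.modules:
            frame = m.process(frame)
        return frame
```

Because every stage shares one interface, a flow-programming GUI only has to manipulate the module graph; it never needs to know what each module does internally.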
- oF adapter - Write a library to ease code portability between CCV1.* and 2, replacing components of oF that the 1.* modules depend on (the Point class is a good example).
- Optimization - Tuning the performance of the multithreaded CCV2 Pipeline Engine and Module Structure/API
- Calibration - Port the finger calibration system from CCV1.* and get it working in the CCV2 metaphor, probably by integrating calibration into the Camera module. Might need some additional work to output the calibration screen, as that depended on oF, but this should be doable with OpenCV drawing.
- Ship CCV2 - Ensure that the CCV2 pipeline works on all platforms, test devices, etc.
- Additional device support - Integrate with drivers (OpenCV, others) or write your own along with a Camera module to support multiple devices in CCV2 (support is currently limited to the PS Eye through CodeLaboratories' driver)
- Unfixed pattern calibration - add the ability to choose different calibration patterns: rectangle, circle, polygon, etc.
- GPU-accelerated filter modules - Currently we use filters from OpenCV utilizing the CPU, we could explore an architecture to extend the parallel pipeline architecture to support GPU filters
- GPU-accelerated blob tracking - Currently our KNN algorithm uses CPU
- Additional blob-tracking algorithms - Bayesian (i.e. BraMBLe) blob tracking algorithm, particle filter blob tracking, neural network blob tracking, etc.
- Physical Object Tracking - Work on tag (fiducial) recognition; create a separate library which can be shared with other projects, or merge an existing library (reacTIVision) into CCV
- Region of Interest Selection - Implement a module to select an ROI to track in
- Innovative/Modern Computer Vision algorithms - implement a newer algorithm from the literature (Predator/TLD, SlowSight) for online learning and tracking of more complicated objects and features.
- Improve TUIO support (TUIO 2.0) - create a TUIO 2 library and integrate into CCV2. TUIO 2 is more extensible, and we can get more information from the tracker such as the shape of touches, hands, etc.
- Research multicomputer use for image processing – work on a distributed blob detection and tracking solution, dividing computation between many machines. This would be really useful for big installations such as multitouch walls with many cameras. CCV2 has a threaded pipeline, so this would involve distributing pieces of the pipeline onto different machines.
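For a sense of what the blob tracker does each frame, here is a simplified, pure-Python version of frame-to-frame nearest-neighbour matching; CCV's actual KNN-based tracker (see the GPU-accelerated blob tracking idea above) is considerably more involved:

```python
import math

def match_blobs(prev, curr, max_dist=50.0):
    """Greedy nearest-neighbour matching of blob centroids between frames.

    prev: dict of blob id -> (x, y) from the last frame.
    curr: list of (x, y) centroids detected in this frame.
    Returns dict of id -> (x, y) with stable IDs; blobs with no match
    within max_dist pixels are treated as new touches and get fresh IDs.
    """
    assigned = {}
    unmatched = list(curr)
    next_id = max(prev, default=-1) + 1
    for bid, (px, py) in prev.items():
        if not unmatched:
            break
        best = min(unmatched, key=lambda c: math.hypot(c[0] - px, c[1] - py))
        if math.hypot(best[0] - px, best[1] - py) <= max_dist:
            assigned[bid] = best
            unmatched.remove(best)
    for c in unmatched:
        assigned[next_id] = c
        next_id += 1
    return assigned
```

The greedy pass is O(n^2) per frame, which is fine for a handful of fingers; the Bayesian and particle-filter alternatives listed above replace this step with probabilistic data association.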
Community Core Fusion (CCF)
GSoC 2011 – Alpha (code)
Fusion is an open source, modular, cross-platform interaction framework designed to rapidly prototype and develop multi-modal application interfaces. It seeks to expose an easy-to-consume API for user input in multiple simultaneous input modes, initially supporting audio/gesture with a mouse and CMU Sphinx. The alpha was written as part of GSoC 2011. There are many emerging applications for this type of framework, including integrating tracking information from multiple cameras in a room (more generally than CCV Multicam), integrating 2D and 3D data, and prototyping complicated NUI interfaces.
- Update nuiFramework backend - Sync CCF codebase with the new nuiFrameworks version developed as part of CCV2
- Core/Input API - Design and implement standard API for other core applications to communicate with the Fusion engine
- Client/Application API - Design and implement an API for Fusion to expose multimodal services to different applications in a standardized way; Define a generalized way (a grammar?) to represent multimodal interactions from arbitrary sources (camera, touch, mouse, voice, gesture, etc.) and fuse semantic information from all of them into a final "multimodal sentence".
- Fusion algorithms - Continue to develop modules that fuse additional input modalities
- Pipeline GUI - Overhaul the flow-programming pipeline configuration interface to be in line with CCV2's
- Add additional Fusion sources - Add Kinect, other depth camera support to CCF
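One simple starting point for building the "multimodal sentence" described above is grouping timestamped events from different modalities into time windows (a sketch only; the Event layout and the fixed window are assumptions, and a real fusion engine would use semantic grammar rules, not just time):

```python
from dataclasses import dataclass

@dataclass
class Event:
    modality: str   # e.g. "voice", "touch", "gesture"
    value: str      # recognized word, coordinate, gesture name, ...
    t: float        # timestamp in seconds

def fuse(events, window=1.0):
    """Group events whose timestamps fall within `window` seconds of the
    first event in the group, yielding candidate multimodal sentences."""
    sentences, group = [], []
    for e in sorted(events, key=lambda e: e.t):
        if group and e.t - group[0].t > window:
            sentences.append(group)
            group = []
        group.append(e)
    if group:
        sentences.append(group)
    return sentences
```

The classic "put that there" interaction becomes one group containing two voice events and two touch events, which a later stage can bind into a single command.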
Community Core Web (CCW)
- Plugin architecture - Continue Plugin Architecture Design and Implementation
- Move to nuiFrameworks - Move CCW to sit on top of nuiFrameworks rather than ofx. This may involve writing some compatibility libraries or refactoring.
- Expand Plugin Library - Ruby, Perl, Linq, ...
- UI development - continue development of HTML/JS based management and configuration utility, looking to support other nuiFrameworks web interfaces
Community Core Audio (CCA)
GSoC 2010 – Alpha (code)
A cross platform server which manages voice inputs, converts voice to text, and outputs resulting messages to the network. The alpha shipped during GSoC 2010, including a new addon (ofxASR) abstracting signal processors and sources. We hope to get feedback from the community on this preview and to continue moving it forward to support emerging voice interfaces.
- Natural language backends - broaden natural language processor/provider support (SAPI, Sphinx, OSX-SRS). Currently the backend is Sphinx.
- GUI library - Research and migrate the logic to another GUI library to get best from code practice, performance and community.
- Move backend to nuiFrameworks from oF - We've designed nuiFrameworks as part of CCV2 to be a foundation for these types of NUI interaction servers so that we can focus our efforts on a common, powerful and flexible backend. Porting CCA to nuiFrameworks would allow it to better integrate with other Community Core services as well as take advantage of improvements to the other projects.
- Continued UI Development and Optimization - Continue working on UI development, which includes determining application features and goals and defining the interface direction for supporting additional configuration and preferences.
- Platform Testing and Device Support - Ensure stability on all platforms, and test audio input and output devices on different operating systems
- Dialogue support - Prominent speech interfaces like Siri expose a conversational interface that allows the audio interface to guide the user through a complicated series of inputs step-by-step, prompt to correct ambiguous information, or utilize compound expressions. Extending CCA with this functionality would greatly enhance its utility and allow community research into this type of interaction.
Community Client Frameworks
Goals - Development and standardization of target frameworks including structure, documentation, and release packages.
- Kivy/PyMT (Python) - A client UX framework and set of example applications written in Python (GSoC 2008/2010/2011 projects, website)
- MT4J (Java) - A client UX framework and set of example applications written in Java (GSoC 2009/2010 projects, website).
NUI Client Applications
Typically developed in AS3, C/C++/C#, Java, Python or a Visual Programming Language (Max/PD)
- Gesture Implementation - Write a multitouch gesture recognition library for iPhone/mobile (defining your own gestures)
- Rich Search Engine UI
- New 3D Depth Tracking Scenarios
- Gesture Sandbox – Gesture recognition and programming that saves to flat files.
- On Screen Keyboard and Input Devices
- Image Manipulation – Use MT to bend images etc
- Your innovative multitouch application - propose new innovative multitouch application you would like to work on (interaction in 3d environment, creating scenes, documents), amaze us with your ideas
2008–2012 Project Ideas
Some ideas might not be suitable for a GSoC proposal on their own because they are too easy and development won't last 3 months. Remember that you can still select such an idea and combine it with other ideas in order to write a GSoC proposal that meets the requirements. In addition, some of the project ideas might be too hard for one student developer to finish during the summer; in that case you can limit the scope and just work on particular parts. If you remain interested, you can always work on the remaining features after GSoC – that's our little open source world.
Community Core Projects (C/C++)
- Community Core Audio (CCA) (http://nuigc.com/cca) (GSoC 2010 – Alpha) – A cross platform server which manages voice inputs, converts voice to text, and outputs resulting messages to the network. The alpha shipped during GSoC 2010, including a new addon (ofxASR) abstracting signal processors and sources. 2012 Goals – Ship a 1.0 release that is stable on all platforms and broaden natural language processor support (SAPI, Sphinx, OSX-SRS) / CCW Management
- Community Core Vision (CCV) (http://nuigc.com/ccv) (GSoC 2008/2009/2010 – Beta 1.4) – A cross platform computer vision server for high fidelity tracking of objects such as fiducials, hands or fingers. The first release shipped during GSoC 2008 to great community response. In 2009/2010 we continued to develop new tracking techniques and enhanced stability. In 2011 we shipped a multicamera version of CCV with support for the Kinect depth sensor. We are also developing the 2.0 version of CCV (http://nuicode.com/projects/ccv2), which is a complete rewrite. CCV 2.0 is based on Movid/OpenCV and uses wxWidgets as its GUI library.
Community Client Frameworks
- Actionscript/Flash – (cm[at]nuigroup.com) – A client UX framework and set of example applications written in Actionscript (Started in 2006 with Touchlib)
- C/C++ – A client UX framework and set of example applications written in C++
- Java/MT4J – A client UX framework and set of example applications written in Java (GSoC 2009/2010 projects)
- Python/PyMT – A client UX framework and set of example applications written in Python (GSoC 2008/2010/2011 projects)
- Framework Goals – Development and standardization of target frameworks including structure, documentation and release packages.
- Signal Simulators – Utilities to generate signals when hardware is not available. (QMTSim GSoC 2008)
- Signal Translators – Client/Server Translation Utilities (flosc, wm_touch2tuio, mtvista)
With ongoing developments for supporting projects:
- OpenFrameworks – Continue contributing to Addons/Framework through C/C++ developments
- Protocols – Socket support on clients and servers to broaden TUIO/OSC support
- UX Design – A general focus on emerging experience design topics related to NUI
- Web Applications – Community Search, Publishing and Communication (LAMP Stack)
CCA (Community Core Audio)
Community Core Audio is a cross platform server which manages voice inputs, converts voice to text, and outputs resulting messages to the network. In GSoC 2010 we shipped the first alpha of this project; we hope to get feedback from the community on this preview and look forward to future results.
- GUI library – Research and migrate the logic to another GUI library to get best from code practice, performance and community.
- Integration of more voice providers and processors – Currently CCA uses ofxASR to process the signal, but it should open up to others such as SAPI.
- Continued UI Development and Optimization – Continue working on UI development, which includes determining application features and goals.
- New Easy-to-use Dialogs – Currently CCA uses a config file to set reference texts and preferences. Add GUI dialogs to do these jobs.
- Platform Testing and Device Support – Test audio input and output devices on different operating systems.
Projects Page: http://nuigc.com/cca
CCV (Community Core Vision)
Community Core Vision is a cross platform computer vision tracking solution for high fidelity situations such as head, hand or finger tracking. Started in GSoC 2008 and with great community support we are looking to further develop new tracking techniques and platform stability.
- Neural network blob tracking
- Code refactoring and modularization of vision techniques/filters and controls.
- Region of Interest Selection
- Implement Threading and Separation of Tracking and GUI
- GPU Acceleration (existing but not fully implemented)
- Dynamic Filter Addition and Adjustment (in the chain)
- Object/Fiducial/Tag and Pattern Tracking
- Creation of Open Framework Addons supporting community projects
- Enhance Setup GUI – Create more detailed interface for setup and configuration of the vision engine and offer a more dynamic calibration system.
- Physical Object Tracking – Work on tag (fiducial) recognition; create a separate library which can be shared with other projects, or merge an existing library (reacTIVision) into CCV
- Windows Service support (a daemon) – Add Windows Service support to touchlib so it can run in the background as a system service instead of as a user desktop application; define and write a communication layer between the service and the configuration application. An autostart setup possibility for exhibits, e.g. start osc, start flosc and start flashdemo.swf (possible with batch files, but a more user friendly approach is wanted).
- Unfixed pattern calibration – add ability to choose different calibration patterns – rectangle, circle, polygon and etc.
- Support for TUIO 2 – Create and use a TUIO 2 library. TUIO 2 is more extensible, and we can get more information from the tracker, such as the shape of touches, hands, etc.
- Surface 2.0 event provider – create event provider that maps TUIO events to Surface 2.0 hardware commands and integrate into CCV
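At its simplest, the touch calibration mentioned in the setup-GUI and calibration ideas above fits a per-axis linear map from tracker coordinates to screen pixels using two reference touches. Real setups use more points plus perspective correction; this is only a sketch, and the two-point API is invented for illustration:

```python
def fit_linear(raw_pts, screen_pts):
    """Fit screen = a * raw + b independently per axis, from exactly two
    calibration touches (raw tracker coords paired with screen targets)."""
    (r1x, r1y), (r2x, r2y) = raw_pts
    (s1x, s1y), (s2x, s2y) = screen_pts
    ax = (s2x - s1x) / (r2x - r1x)
    ay = (s2y - s1y) / (r2y - r1y)
    return (ax, s1x - ax * r1x), (ay, s1y - ay * r1y)

def apply_cal(cal, pt):
    """Map a raw tracker point to screen coordinates."""
    (ax, bx), (ay, by) = cal
    return (ax * pt[0] + bx, ay * pt[1] + by)
```

With corner targets at raw (0.25, 0.25) and (0.75, 0.75) mapped to screen (0, 0) and (800, 600), a raw touch at (0.5, 0.5) lands at the screen centre.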
Project Page: http://nuigc.com/ccv
Getting Started: http://wiki.nuigroup.com/Getting_Started_with_CCV
CCV2 (Community Core Vision 2.0)
Community Core Vision 2.0 is the next version of CCV. It is still in alpha. CCV2 was fully rewritten and has a better module structure. CCV2 is based on Movid/OpenCV and uses wxWidgets as its GUI library. The goal for 2012 is to ship a stable version.
- Ship the full CCV 2.0 with an unfixed processing pipeline / modular engine
Project Page: http://nuicode.com/projects/ccv2
CCW (Community Core Web)
- Continue Plugin Architecture Design and Implementation
- Expand Plugin Library (Ruby, Perl, Linq)
- Develop XML based UI rendering system inline with hosted services for settings
Project Page: http://nuigc.com/ccw (Pending)
TUIO
- Develop higher level application frameworks based on the existing TUIO client implementations (C++, C# …)
- Implement TUIO client libraries for programming languages or environments that are not yet supported
- Implement TUIO server modules for existing tracker applications that are not yet TUIO enabled
- Write a TUIO application for Android phones and tablets (similar to TuioPad for iOS)
- Develop a TUIO 2 library
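TUIO is built on OSC, so a TUIO 2 library starts from OSC's wire format: NUL-padded strings, a type-tag string, then big-endian arguments. A minimal encoder is sketched below (the `/tuio2/frm` address is from the TUIO 2 draft, but treat the exact message profiles here as illustrative rather than authoritative):

```python
import struct

def osc_string(s: str) -> bytes:
    """OSC strings are NUL-terminated and padded to a 4-byte boundary."""
    b = s.encode("ascii") + b"\x00"
    return b + b"\x00" * ((4 - len(b) % 4) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode one OSC message: address, type-tag string, then arguments."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, float):
            tags += "f"
            payload += struct.pack(">f", a)   # big-endian float32
        elif isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)   # big-endian int32
        else:
            tags += "s"
            payload += osc_string(a)
    return osc_string(address) + osc_string(tags) + payload
```

A tracker would send such messages over UDP; the richer TUIO 2 profiles (pointer shape, hand geometry) are just more arguments in the same encoding, wrapped in OSC bundles per frame.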
Project Page: http://tuio.org
Kivy Framework (PyMT)
- More interactions/Widgets – Research and implement widgets based on more than 2 touches and hands. The question to answer is: What can we do in terms of interaction with additional information such as orientation? Implementing cool interactions as widgets or other Kivy features so that they are reusable or extensible is what we are looking for.
- Intuitive 3D interactions for moving objects like navigating a 3D space, rotating, sizing etc.
- Intuitive interactions for selecting, arranging objects on a tabletop.
- Implement intuitive multitouch menus that take into consideration challenges like display orientation, multiuser use, touch input, etc.
- Gesture Framework – Create new or enhance existing gesture framework. Specifically, specifying, recording, and recognizing multi finger gestures. Design and implement the gesture API to work nicely with other multi-touch programming paradigms or maybe even come up with new ones (research paper opportunity?!). Some challenges to think about: How are regular touch events reported while a gesture is being performed, how to check if a gesture is being performed/in progress (maybe state machine based).
- More Input providers – Implementing native input providers for specific input devices. We have already: TUIO, mouse, Windows 7 multitouch (WM_TOUCH), windows pen. Multi pointer X would be nice for Linux, maybe wiimote support nicely integrated to Kivy. (One input provider probably isn’t worth a whole summer project, but one could do a couple or expand on the idea otherwise)
- Improve Video & Camera support – Kivy's core is very modular, so we can use different backends, and you can implement new providers for things like video support, images, etc. We could really use e.g. a QuickTime video provider or better cross-platform webcam support (OS X especially is kind of a pain right now with GStreamer).
- Use the current platform’s native APIs – This adds to the previous point in that we’d love to have platform specific providers for pretty much everything. In the ideal world, we’d just ship the Kivy code without additional dependencies. For OS X, this would mean implementing providers using Core Video/QTKit, Core Audio, Core Image and Cocoa; equivalently for Windows, use its native APIs. Prior experience with, or willingness to learn, these APIs is required. Not that easy, but certainly a very rewarding task.
- Extend and enhance support for mobile platforms – We already have an Android port in progress. An iOS port, for example, would be nice. This is challenging, but with potentially awesome results.
- Network serialization – Be able to exchange widgets and their state between different Kivy processes (across the network). So you can e.g. throw a multi-touch ball at someone on a remote table. Some work on this has been done, and we could help at the very least with conceptually how to do this.
- Cross platform, no-hassle PDF (or other widely used file type) support – We’d really like to be able to integrate e.g. PDF documents easily; the state of doing this is not very good in terms of cross-platform support without having to install or compile all kinds of stuff.
- Kivy IDE – Build a Kivy Integrated Development Environment or GUI builder using Kivy itself. We’ve played with this idea, and might be able to help significantly if we get a good start or exciting proposals. We also have had some fun ideas like live coding, i.e. being able to change the UI at any time by switching to IDE mode. Python is very flexible in terms of loading code at runtime, so many things would be possible.
- A game or really cool application idea – Come up with a really fun game or useful application that uses multitouch input in new or interesting ways. Let your imagination run wild, and show the world what Kivy is capable of.
- Debugging Tools – Multitouch can be really hard to debug (mouse-simulated multitouch helps, but only gets you so far). A good project would come up with useful debugging concepts for multitouch development and implement them so they can be used with Kivy (e.g. Kivy modules, or debugging support built into the core).
- Calibration suite – Implement semi-automatic calibration of multitouch displays. This is especially useful for DIY hardware. Think of the following calibration steps: double-tap speed calibration, improved touch flow (e.g. see the retaintouch module), finding & ignoring hotspots, dejittering, and implementing coordinate transformations so that if the tracker isn’t properly calibrated we can compensate inside Kivy.
- Community Documentation or Example Apps – Writing good documentation and example apps or tutorials is hard, but really helpful to new people checking out a library. A summer project would involve multiple smaller parts of writing example applications, documentation, and tutorials. Great way to learn about Kivy if no prior experience is present.
- Better integration of optical trackers – CCV, reacTIVision, Movid, … are GUI-only. You might sometimes want to re-trigger calibration or background capture, or change other parameters, directly from inside your current application. The idea is to create a standard protocol and a library for controlling optical trackers in a generic way, and then add support for that library to all the trackers.
- Enhanced text input – We have keyboards and virtual keyboards. Other text entry methods could be fun to explore; see e.g. http://www.swypeinc.com/
- Enhanced input interactions – Interactions that take into consideration more than just touch input. New technologies allow e.g. hand tracking, knowing hand orientation, etc. Develop widgets that take advantage of this new input data. (Example: the keyboard could be split in half, one half for each hand, with each half following the hand it belongs to and rotating with it automatically; this was done in GSoC 2010.)
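The gesture-framework idea above asks how to track whether a gesture is in progress, which suggests a state machine. Here is a tiny one for a two-finger pinch/spread (the class, threshold and `update` API are invented for illustration, not Kivy's actual gesture interface):

```python
import math

class PinchDetector:
    """Minimal gesture state machine: IDLE -> TRACKING -> RECOGNIZED.

    While TRACKING, touch moves can still be delivered to widgets as
    ordinary events; once RECOGNIZED, they belong to the gesture.
    """
    def __init__(self, threshold=0.1):
        self.threshold = threshold   # change in finger distance to confirm
        self.state = "IDLE"
        self.start = None            # finger distance when tracking began

    def update(self, p1, p2):
        """Feed the two current touch positions; returns a gesture name
        ("pinch"/"spread") exactly once, when it is first recognized."""
        d = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
        if self.state == "IDLE":
            self.state, self.start = "TRACKING", d
        elif self.state == "TRACKING" and abs(d - self.start) > self.threshold:
            self.state = "RECOGNIZED"
            return "spread" if d > self.start else "pinch"
        return None
```

The explicit states answer the question raised above: regular touch events flow through untouched until the machine leaves TRACKING, at which point the gesture claims them.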
Project Page: http://kivy.org/
Project Page: http://pymt.eu/
MT4J Framework
- Shader Effect integration -> Integrate a little GLSL shader framework into MT4j to allow effects like blurring or glowing for any visible MT4j component.
- Create the groundwork for zoomable user interfaces (ZUIs) using MT4j -> While zoom-in functionality is already present in MT4j there is no way to “semantically” zoom to navigate to another part of an application or to another application entirely and smoothly zoom in while streaming and loading the new content to be displayed. This would allow for “endless zooming” and to navigate huge data conveniently.
- Complete multi-touch UI toolkit -> Create a comprehensive MT4j widget toolkit (windows, lists, tables, audioplayer etc) with features like skinnable appearance, automatic layouts using the model-view-controller paradigm
- Create and explore new, specialized UI-Components tailored to usability and interaction on multi-touch displays -> For example: new forms of menus (e.g. circular menus – see: http://code.google.com/p/circular-application-menu/), lists (e.g. like in Cooliris http://www.cooliris.com/) or other components that allow to browse content in a user friendly, innovative way
- Network serialization -> Create the possibility to send and share MT4j components across networks with other MT4j applications and implement a showcase application (e.g. document sharing, image sharing etc).
- Test framework -> Create the possibility to have automatic testing by pre-recording user input and then playing it back while asserting the current state. This could also include stress-tools that send random input to applications or widgets to check if they break.
- 3D Model animation -> Implement 3D model animation with bone animation and skinning, to allow animating imported 3D models.
- 3D modeling tool -> Create a multi-touch 3D modeling tool using MT4j
- Multi-Touch games -> Create innovative multi-touch games, e.g. a Warcraft-like real-time strategy game, using MT4j
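The network-serialization idea above (shared by the Kivy list) boils down to a registry of replicable component classes plus a wire format. A sketch follows, in Python for brevity even though MT4j itself is Java; the `Ball` class and its fields are invented, and a real implementation also needs versioning and security checks on incoming payloads:

```python
import json

REGISTRY = {}

def register(cls):
    """Only registered classes may be instantiated from the wire."""
    REGISTRY[cls.__name__] = cls
    return cls

@register
class Ball:
    """Toy replicable component: just a position."""
    def __init__(self, x=0.0, y=0.0):
        self.x, self.y = x, y

def to_wire(widget):
    """Serialize a component's replicable state to JSON for the network."""
    return json.dumps({"cls": type(widget).__name__, "state": vars(widget)})

def from_wire(payload):
    """Rebuild a component from its wire form via the class registry."""
    msg = json.loads(payload)
    widget = REGISTRY[msg["cls"]]()
    widget.__dict__.update(msg["state"])
    return widget
```

Throwing a multitouch ball to a remote table then means: `to_wire` locally, send the JSON over a socket, `from_wire` remotely, and attach the rebuilt component to the remote scene.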
Project Page: http://mt4j.org
NUI Client Applications
Typically developed in AS3, C/C++/C#, Java, Python or a Visual Programming Language (Max/PD)
- Rich Search Engine UI
- New 3D Depth Tracking Scenarios
- Module Based Application Loading
- Advanced Menu System for Application Loading and Control
- Media Loader (Document, PDF, Video, Image) (jpg, jpeg, tga, png, dds, gif, tif, tiff, bmp, mov, avi, wav, ogg)
- Particle Sandbox (C/C++)
- Physics Sandbox (C/C++) http://www.phunland.com
- Paint and Drawing Sandbox (Brushes, Filters and Tools)
- Educational Games (Match/Math Quiz/Beat the Chimp)
- 3D Object Viewing and Manipulation (Standard 3D files)
- Gesture Sandbox – Gesture recognition and programming that saves to flat files.
- On Screen Keyboard and Input Devices
- Dynamic Loading of Files via USB/Firewire or Bluetooth
- Image Manipulation – Use MT to bend images etc
- Audio Creation and Games – (Bloom… Loop Sequencer)
- GIS Mapping
- Web Browser
- Tablet User Interface Concepts
- 3D Navigation Concepts
Mobile – Android/iPad/iPhone/iPod Concepts
- Touch App – propose innovative multitouch iPhone application or educational game
- Gesture Implementation – Write a multitouch gesture recognition library for iPhone (defining your own gestures)
- Audio Analyzer – Basic audio analysis tool for iPhone (spectrum/sonogram/FFT)
- Accelerometer – Collects acceleration data and sends it out via OSC
Community Web Apps
Custom developed web applications typically written in Coldfusion/PHP/Ruby/Python/Java
- Spam Filtering
- Ruby and PHP (nuigroup.com/org – nuicode.com developments)
- Community Research and Resource Centralization
- Forum Data Analysis and Restructuring
- Custom NUI Search Engine and Google SEO
- Automatic Multitouch Creator (input goal, output parts list and specifics, etc)
- Any Web Application that could benefit the Community…
Other Project Ideas
- Open Source Infrared multitouch frame
- Linux-based standalone blob tracker – Create a lightweight Linux distro for an ultra-light PC that includes FireWire camera drivers and blob-tracking software, and sends the touch information through Ethernet, or even better through USB, to the main computer. Controller software is also needed for every operating system to configure this Linux box. It is also important to determine how fast a processor is really needed for tracking.
- Multitouch mouse drivers for every OS – Create mouse drivers for OS X, Windows and Linux so that the system can be used without an external mouse through an MT display
- On-Screen Keyboards – Create on-screen keyboard software for OS X, Windows and Linux so no external keyboard is needed to operate regular programs through an MT display
- Multisite collaboration using multitouch screens – Write an application/framework that allows multisite collaboration on multitouch screens. This means having many multitouch screens in different locations, connected via the internet or an internal network, with each MT screen working as a shared desktop with gesture support and synchronization between screens. Have a look at the croqodile project (croqodile.googlecode.com) and the TeaTime protocol used in the Croquet project for syncing. A solution for any platform is most welcome, whether it’s Flash/Flex/AIR, OpenGL or any other platform-specific software.
- WiiMote multitouch framework – Create a framework for writing multitouch applications, such as an interactive whiteboard, using the WiiMote under Mac OS X (the DarwiinRemote project might be helpful), and write some example applications
- Work on the WiiMote Whiteboard project – Create new example applications, port WiiMote Whiteboard to Linux and Mac, write unit tests, and create an installer with a binary distribution
- Improve blob detection and tracking algorithms – Write an optimized library for blob detection and tracking so that it can be used later with different projects, e.g. tbeta, touchlib, bbtouch
- Your innovative multitouch application – propose new innovative multitouch application you would like to work on (interaction in 3d environment, creating scenes, documents), amaze us with your ideas
- Work on WiiMoteTUIO project – example applications, bug fixing, create a cross-platform solution, write test and documentation
- Automatic projector calibration – propose a project for automatic projector calibration used with multitouch screens in DI and FTIR setups
- Multitouch gesture library – Write a multitouch gesture recognition library with support for gestures that can consist of several strokes (like /\ + – = A), simple one-finger gestures like mouse gestures, and multitouch gestures (zooming, pinching, rotating and many more); use the FingerWorks gesture dictionary for example gestures (www.fingerworks.com)
- Multitouch applications on Mac OS X – propose and write multitouch application for Mac OS X, using TUIO protocol for events and new Leopard graphic frameworks like Core Animation, Core Image, Core Video, QuickTime etc.
- Research multicomputer use for image processing – Work on a distributed blob detection and tracking solution, dividing computation between many machines. This would be really useful for big installations such as multitouch walls with many cameras. Test your solution and integrate it with OpenTouch or touchlib.
- MB Pro and MB Air multitouch trackpad application for Mac OS X – Propose and write a multitouch application which supports MacBook Air and MacBook Pro multitouch trackpad gestures, or extend an existing open source project to use the power of intuitive trackpad gestures
- GPU accelerated blob detection and tracking algorithms – Write GPU-accelerated algorithms for detecting and tracking blobs. You can use Cg for NVIDIA, or compare the use of GPUCV vs OpenCV.
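For the gesture-library idea above, classic mouse-gesture recognizers quantize a stroke into a direction string and look it up in a gesture dictionary. A minimal version of that quantization step (pure Python sketch; real libraries add resampling, rotation invariance and multi-stroke/multi-finger handling):

```python
def encode_stroke(points):
    """Quantize a stroke into 4-direction moves (U/D/L/R), collapsing
    consecutive repeats, as simple mouse-gesture recognizers do.

    Assumes screen coordinates, i.e. y grows downwards, so dy > 0 is "D".
    """
    dirs = []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        dx, dy = x2 - x1, y2 - y1
        if abs(dx) >= abs(dy):
            d = "R" if dx > 0 else "L"
        else:
            d = "D" if dy > 0 else "U"
        if not dirs or dirs[-1] != d:
            dirs.append(d)
    return "".join(dirs)
```

A gesture dictionary is then just a mapping like `{"RD": "L-shape", "RL": "scratch"}` matched against the encoded string; multitouch gestures (pinch, rotate) need continuous parameters instead and are recognized separately.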
Here you can find some links and resources that could be useful when writing your project proposal. Sometimes they link to other open source projects, but contain many useful guidelines which can be used when writing your proposal.
- TUIO Protocol and Framework - http://www.tuio.org/
- CCV - http://ccv.nuigroup.com
- CCV2 - http://nuicode.com/projects/ccv2
- Touchlib - http://touchlib.com
- Opentouch (BBTouch) - http://code.google.com/p/opentouch/
- reacTIVision - http://reactivision.sourceforge.net/
- Multitouch project - http://multitouch.sf.net
- OpenTable project - http://opentable.googlecode.com
- MT4j - Multi-touch for Java - http://www.mt4j.org
- Libavg - http://libavg.de
- Microsoft Kinect - http://en.wikipedia.org/wiki/Kinect
- WiiMoteTUIO - http://code.google.com/p/wiimotetuio/
- WiiMote multitouch - http://www.cs.cmu.edu/People/johnny/projects/wii/
- WiiMote multitouch forum - http://www.wiimoteproject.com/
- WiiMote WhiteBoard project - https://sourceforge.net/projects/wiiwhiteboard/