This is a fun application that lets you conduct a marionette puppet show with the help of a depth camera, or just a webcam and colored gloves. You can tell an animated story in real time, with funny soft, jelly-like, physically simulated characters. It also allows you to create the characters: you can rig any model by placing bones and spring constraints. Models can be created in this tool by simple 3D sculpting with metaballs (which is similar to clay modeling) or imported from other tools.
Two characters can be manipulated at a time, one per hand, and the two hands can belong to different users.
Unlike a sock or glove puppet, here the full hand (all fingers) is used to manipulate the marionette through strings. But since the puppet itself is created by the user, he or she can choose to build a simple one- or two-string marionette that moves the body and optionally one more limb, e.g. the jaw to simulate speech.
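The string pull described above can be sketched as a damped spring constraint between a tracked fingertip and a limb anchor. This is a minimal illustration, not the application's actual physics code; the names (`Limb`, `stepString`) and the tuning constants are assumptions.

```cpp
#include <cmath>

// One puppet string modeled as a damped spring (Hooke's law): the limb's
// anchor point is pulled toward the tracked fingertip position.
struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }
static float len(Vec3 a) { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }

struct Limb { Vec3 pos, vel; float mass; };

// One explicit-Euler step. restLength is the slack string length;
// k (stiffness) and damping are hypothetical tuning constants.
void stepString(Limb& limb, Vec3 fingertip, float restLength,
                float k, float damping, float dt) {
    Vec3 d = sub(fingertip, limb.pos);
    float dist = len(d);
    if (dist > restLength && dist > 1e-6f) {   // a string only pulls, never pushes
        Vec3 dir = mul(d, 1.0f / dist);
        Vec3 force = mul(dir, k * (dist - restLength));
        limb.vel = add(limb.vel, mul(force, dt / limb.mass));
    }
    limb.vel = mul(limb.vel, damping);         // simple velocity damping
    limb.pos = add(limb.pos, mul(limb.vel, dt));
}
```

Because the spring only acts beyond the rest length, a limb with a slack string hangs free, which is what gives the jelly-like marionette feel.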
You conduct a show by manipulating one or two characters at a time with your hands, fine-tuned by moving your fingertips and thus pulling the strings attached to the character's limbs. For audio you can use pre-recorded sound or speak in real time (which will be recorded), with pitch modification. All your actions are recorded, so you can replay the animation at any time. Characters or expressions can be changed in real time by voice command, touch, or key, which ultimately changes the model or texture. The background image can be changed similarly.
Perceptual Computing for animating (Optional)
Puppet strings are manipulated with the fingers, and Perceptual Computing allows us to track fingertips without any marker, which is very intuitive and gives a next-gen feel. Since this requires a separate depth camera that can be placed and oriented as desired, the user can stand behind the AIO and conduct the show (watching the output through a mirror or another monitor). The user can even stand in a dark area, as the depth camera works with infrared.
Webcam for animating
This application does not depend on a depth camera; you can use a webcam too, but then you need to place differently colored markers on your fingers (color your glove tips or place colored paper cones on the fingertips).
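One simple way to locate such a colored marker in a webcam frame is to take the centroid of pixels close to the marker color. This is a hedged sketch under assumed types, not the application's tracking code; a real implementation would also filter noise and work in a more light-tolerant color space.

```cpp
#include <cstdlib>
#include <vector>

// Hypothetical webcam frame stored as a flat RGB array. A fingertip marker is
// estimated as the centroid of pixels whose color lies within a tolerance of
// the marker color (e.g. a red glove tip).
struct Pixel { int r, g, b; };
struct Marker { float x, y; bool found; };

Marker findMarker(const std::vector<Pixel>& frame, int width, int height,
                  Pixel target, int tolerance) {
    long sumX = 0, sumY = 0, count = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const Pixel& p = frame[y * width + x];
            if (std::abs(p.r - target.r) <= tolerance &&
                std::abs(p.g - target.g) <= tolerance &&
                std::abs(p.b - target.b) <= tolerance) {
                sumX += x; sumY += y; ++count;
            }
        }
    }
    if (count == 0) return {0.0f, 0.0f, false};  // marker not visible this frame
    return {float(sumX) / count, float(sumY) / count, true};
}
```

Running one such pass per marker color gives one 2D position per fingertip, which then drives the strings exactly as the depth-camera input would.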
3D Model Creation
Blobs (made of spheres, cylinders, or cubes) can be placed additively or subtractively. They can later be scaled unevenly, rotated, or moved, blending smoothly at intersections.
They can be given colors, which mix at intersections. Colors can be changed later.
Textures can be projected and baked. Blobs remain editable even after coloring and texturing.
Optionally, bones can be created first. Apart from rigging and animating, they also facilitate easy 3D placement of blobs.
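The additive/subtractive blending and color mixing above can be sketched as a metaball scalar field: each blob contributes a smooth falloff weight, subtractive blobs carve weight away, and colors mix in proportion to the additive weights. The types and falloff function here are illustrative assumptions, not the tool's actual code.

```cpp
// Each blob contributes a smooth falloff to a scalar field; additive blobs
// add, subtractive blobs carve away. Colors mix in proportion to the additive
// weights, which is what makes intersections blend.
struct Blob { float cx, cy, cz, radius; bool additive; float r, g, b; };
struct Sample { float field; float r, g, b; };

Sample evalField(const Blob* blobs, int count, float x, float y, float z) {
    Sample s{0, 0, 0, 0};
    float totalW = 0;
    for (int i = 0; i < count; ++i) {
        float dx = x - blobs[i].cx, dy = y - blobs[i].cy, dz = z - blobs[i].cz;
        float d2 = dx * dx + dy * dy + dz * dz;
        float r2 = blobs[i].radius * blobs[i].radius;
        if (d2 >= r2) continue;                 // outside this blob's influence
        float t = 1.0f - d2 / r2;
        float w = t * t;                        // 1 at center, 0 at the radius
        if (blobs[i].additive) {
            s.field += w;
            s.r += w * blobs[i].r; s.g += w * blobs[i].g; s.b += w * blobs[i].b;
            totalW += w;
        } else {
            s.field -= w;                       // subtractive blob carves the surface
        }
    }
    if (totalW > 0) { s.r /= totalW; s.g /= totalW; s.b /= totalW; }
    return s;
}
```

Moving, rescaling, or recoloring a blob only changes its entry in the list, which is why blobs stay editable after coloring: the surface is re-derived from the field each time.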
This tool can export data to popular 3D formats, so it can also be used as a standalone modeling tool.
Models that can be created by this tool
The core logic of the modeling part is based on metaballs and the marching cubes algorithm. It is useful for quick 3D modeling of cartoon characters and organic matter, particularly things that are easily modeled with clay. This kind of modeling is quite difficult with conventional digital tools, which are primarily designed to model geometric, man-made things like buildings and furniture. Clay modeling is for artists, who need their palms, fingers, and a knife to manipulate clay artistically. This tool provides a similar experience digitally, without making your hands dirty.
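Marching cubes turns the metaball field into triangles by walking a grid of field samples and emitting geometry wherever the field crosses the iso-level. The key per-edge step, shown below as a sketch (the full triangle tables are omitted), places a surface vertex on a cube edge by linearly interpolating between the field values at its endpoints.

```cpp
// On a cube edge whose endpoint field values straddle the iso-level, the
// surface vertex is placed by linear interpolation between the endpoints.
struct P3 { float x, y, z; };

P3 interpolateEdge(P3 a, P3 b, float va, float vb, float iso) {
    float t = (iso - va) / (vb - va);   // assumes va != vb, i.e. the edge straddles iso
    return { a.x + t * (b.x - a.x),
             a.y + t * (b.y - a.y),
             a.z + t * (b.z - a.z) };
}
```

This interpolation is what makes the extracted surface smooth rather than blocky, even though the field is only sampled on a coarse grid.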
Models (OBJ format) that cannot be created in this tool can be imported from other tools.
3D modeling Procedure
An interconnected frame (equivalent to bones) can optionally be made, which helps in the next phase. To place a bone joint, first select the start position, then the end position.
A mud blob can be placed on the frame or freely anywhere, depending on the placement mode. Placement on the frame is easy, as the frame provides the third-axis value, reducing the problem to 2D. Free placement requires manipulating a virtual plane along the third axis.
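The free-placement mode above amounts to a ray-plane intersection: the 2D cursor defines a ray from the camera, and intersecting it with the user-controlled virtual plane supplies the missing depth value. A minimal sketch, with assumed vector types:

```cpp
// Intersect the cursor's camera ray with the virtual placement plane to
// recover the blob's full 3D position from a 2D input.
struct V3 { float x, y, z; };

static float dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true and writes the hit point if the ray meets the plane in front
// of its origin.
bool rayPlane(V3 origin, V3 dir, V3 planePoint, V3 planeNormal, V3& hit) {
    float denom = dot(dir, planeNormal);
    if (denom > -1e-6f && denom < 1e-6f) return false;   // ray parallel to plane
    V3 diff = { planePoint.x - origin.x,
                planePoint.y - origin.y,
                planePoint.z - origin.z };
    float t = dot(diff, planeNormal) / denom;
    if (t < 0) return false;                             // plane behind the camera
    hit = { origin.x + t * dir.x, origin.y + t * dir.y, origin.z + t * dir.z };
    return true;
}
```

Frame placement is the same idea with the frame supplying the depth directly, which is why it needs no plane manipulation.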
Blob editing requires blob selection, which works like blob placement. After selecting a blob, it can be moved, rescaled, or rotated.
Sculpt mode: blobs can be fine-tuned and crafted for detail; knives of multiple shapes will be used for this purpose.
Texturing requires camera manipulation, as the texture remains fixed on screen while being projected onto the model. Touching a blob pastes the image onto it.
Perceptual Computing for modeling
The modeling part of this application needs input like an artist's palm and fingers, which is best captured with Perceptual Computing. It is not mandatory, and other methods will make the tool usable with conventional input modalities, but they are not as convenient or intuitive.
Voice commands will be used to change modes and tools, e.g. selecting a cube or sphere blob, or selecting additive or subtractive mode. All other tasks depend on finger tracking.
Camera movement: the camera rotates 90 degrees at a time, triggered by a flick gesture.
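One way to realize this, sketched under assumed names and thresholds: treat a flick as a brief high-velocity horizontal hand motion, and keep the camera yaw as an integer quadrant so repeated 90-degree steps never accumulate floating-point drift.

```cpp
// A flick is a brief high-velocity horizontal hand motion. Each detected
// flick snaps the camera yaw by one 90-degree step.
int detectFlick(float handVelocityX, float threshold) {
    if (handVelocityX > threshold) return 1;    // flick right
    if (handVelocityX < -threshold) return -1;  // flick left
    return 0;                                   // no flick
}

int rotateCamera(int quadrant, int flick) {
    return (quadrant + flick + 4) % 4;          // quadrant 0..3 -> 0/90/180/270 deg
}
```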
Bone creation: 3D palm tracking of the hand will be used to place bones. Closing the hand simulates a drag; while dragging there will be a real-time preview, and on release the bone is placed.
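The close-drag-release interaction above can be sketched as a small state machine; the names (`BoneTool`, `committed`) are illustrative assumptions, not the application's API.

```cpp
// Drag state machine for bone placement with a tracked palm: closing the hand
// starts the drag (bone start joint), the preview end joint follows the palm,
// and opening the hand commits the bone.
struct Pt { float x, y, z; };
enum class DragState { Idle, Dragging };

struct BoneTool {
    DragState state = DragState::Idle;
    Pt start{}, end{};
    bool committed = false;   // true for one update after a bone is placed

    void update(Pt palm, bool handClosed) {
        committed = false;
        if (state == DragState::Idle) {
            if (handClosed) {            // hand just closed: drag begins here
                start = palm;
                end = palm;
                state = DragState::Dragging;
            }
        } else {
            end = palm;                  // realtime preview follows the palm
            if (!handClosed) {           // hand opened: place the bone
                state = DragState::Idle;
                committed = true;
            }
        }
    }
};
```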
Use for entertainment
This is a low-cost alternative to animated movies that can be used by parents as well as kids, enhancing creativity in children while the whole family has fun.
This application can be used professionally to create animation quickly (or in real time), e.g. political satire to be aired by TV news channels. Kids' channels can use it to animate entire storybooks at low cost. Games can use such animation for cutscenes.
It is better than a conventional puppet show, as you can create unlimited puppets and stories. The background can be a still image or a movie. And with Perceptual Computing it looks sci-fi.
I needed short, funny animations for my current work-in-progress game, which is why this tool was made. Currently all the components of this application work independently in a debug environment; they need to be integrated into a single application. After getting an AIO for testing, the UI will be redesigned to match its size and feel, apart from various other improvements as needed. I already have a depth camera to test Perceptual Computing input.
C++ using Visual Studio 2012 for Desktop (Express edition); rendering is done with DirectX 9. The Perceptual Computing build requires additional hardware (a depth camera) and installation of the runtime.