I am pleased to announce that Cinder can now support all of your Kinect hacking needs. With the new Kinect CinderBlock, you can very easily obtain the depth map and RGB image, and additionally control the Kinect motor. You can even change the LED color if you'd like. There is more information on the Cinder forum, including a link to download the CinderBlock from GitHub.
I have been playing with the Kinect for a day now. It was incredibly easy to get started. You simply ask for a depthImage or colorImage from the Kinect object and it is returned as a texture. There are two samples with the Kinect CinderBlock. KinectBasic just retrieves these two images and draws them to the screen. You can also click in the app window to change the angle of the Kinect (via a motor hidden in the base of the hardware).
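Here is roughly what that boils down to, condensed from the KinectBasic idea. Fair warning: the class and method names below (getDepthImage(), getVideoImage(), checkNewDepthFrame(), setTilt(), and the tilt-range mapping) are a sketch from memory, so check the samples on GitHub for the real API:

```cpp
// A minimal sketch of the KinectBasic idea. Method names are assumptions;
// see the actual sample in the CinderBlock for the real API.
#include "cinder/app/AppBasic.h"
#include "cinder/gl/Texture.h"
#include "cinder/CinderMath.h"
#include "Kinect.h"

using namespace ci;
using namespace ci::app;

class KinectBasicApp : public AppBasic {
  public:
    void setup()
    {
        mKinect = Kinect( Kinect::Device() ); // first available device
    }

    void mouseDown( MouseEvent event )
    {
        // hypothetical mapping: click height in the window -> motor tilt angle
        float tilt = lmap<float>( (float)event.getY(), 0.0f,
                                  (float)getWindowHeight(), -30.0f, 30.0f );
        mKinect.setTilt( tilt );
    }

    void update()
    {
        // grab the latest depth and color frames as textures
        if( mKinect.checkNewDepthFrame() )
            mDepthTex = gl::Texture( mKinect.getDepthImage() );
        if( mKinect.checkNewVideoFrame() )
            mColorTex = gl::Texture( mKinect.getVideoImage() );
    }

    void draw()
    {
        gl::clear( Color( 0, 0, 0 ) );
        if( mDepthTex ) gl::draw( mDepthTex, Rectf( 0, 0, 320, 240 ) );
        if( mColorTex ) gl::draw( mColorTex, Rectf( 320, 0, 640, 240 ) );
    }

    Kinect      mKinect;
    gl::Texture mDepthTex, mColorTex;
};

CINDER_APP_BASIC( KinectBasicApp, RendererGl )
```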
The second sample, KinectPointCloud, is a traditional 3D point cloud using a gl::VboMesh and GLSL shaders. It looks like this:
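Under the hood, the mesh is just a static grid of points, one per depth pixel; the vertex shader does the actual work of pushing each point back by the depth it samples. A rough sketch of the setup, using the Cinder 0.8-era VboMesh API (details from memory, not the sample verbatim):

```cpp
#include "cinder/gl/Vbo.h"
#include "cinder/Vector.h"
#include <vector>

using namespace ci;

// Build one static point per depth pixel (640x480). The positions never
// change; a vertex shader samples the depth texture at each point's
// texcoord and displaces it in z.
gl::VboMesh makePointCloudMesh()
{
    gl::VboMesh::Layout layout;
    layout.setStaticPositions();
    layout.setStaticTexCoords2d();

    std::vector<Vec3f> positions;
    std::vector<Vec2f> texCoords;
    for( int y = 0; y < 480; ++y ) {
        for( int x = 0; x < 640; ++x ) {
            // center the grid around the origin; z comes from the shader
            positions.push_back( Vec3f( x - 320.0f, 240.0f - y, 0.0f ) );
            texCoords.push_back( Vec2f( x / 640.0f, y / 480.0f ) );
        }
    }

    gl::VboMesh mesh( positions.size(), 0, layout, GL_POINTS );
    mesh.bufferPositions( positions );
    mesh.bufferTexCoords2d( 0, texCoords );
    return mesh;
}
```

From there, drawing is just binding the depth texture and the shader, then handing the mesh to gl::draw().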
Since you are getting accurate and reasonably high-res depth information for every pixel, it isn't too difficult to generate a normal map. And because this is done in a shader, it is very quick. I skipped the blur pass because I wanted to keep the frame rates spiffy. This means the normal map isn't excellent and has some artifacting, but it's good enough to get the job done.
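For the curious, the gist of that pass looks something like this: a fragment shader that finite-differences the depth texture and packs the resulting normal into a color target. The uniform names (and the strength knob) are mine, not the actual code:

```cpp
// Hypothetical normal-map pass, written out as GLSL source in a C++ string
// so everything stays in one file. It finite-differences the depth texture
// and packs the normal into [0,1] so it can land in a color render target.
const char *NORMAL_FRAG = R"(
    uniform sampler2D depthTex;   // Kinect depth as a texture
    uniform vec2  texelSize;      // 1.0 / depth resolution
    uniform float strength;       // hypothetical knob: steepness of the normals

    void main() {
        vec2 uv = gl_TexCoord[0].st;
        // depth at the four neighboring pixels
        float dL = texture2D( depthTex, uv - vec2( texelSize.x, 0.0 ) ).r;
        float dR = texture2D( depthTex, uv + vec2( texelSize.x, 0.0 ) ).r;
        float dD = texture2D( depthTex, uv - vec2( 0.0, texelSize.y ) ).r;
        float dU = texture2D( depthTex, uv + vec2( 0.0, texelSize.y ) ).r;
        // slopes in x and y, combined with a fixed z, give the surface normal
        vec3 n = normalize( vec3( (dL - dR) * strength, (dD - dU) * strength, 1.0 ) );
        // pack from [-1,1] into [0,1] for storage
        gl_FragColor = vec4( n * 0.5 + 0.5, 1.0 );
    }
)";
```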
Now that you have a normal map, you pass it to the final vert shader. Each vert looks up its corresponding normal map value and simply adds that normal to its position. If your normals are correct, you should see something like the image below. I am using a 'fatness' uniform float to scale the offset so I can adjust it at runtime.
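In shader terms, the vert side is only a few lines. Again, a sketch rather than the actual sample code; note the texture lookup in the vertex shader requires hardware with vertex texture fetch:

```cpp
// Sketch of the displacement vertex shader (as a GLSL string). Each vertex
// samples its normal from the normal map and moves along it, scaled by the
// 'fatness' uniform mentioned above.
const char *FATSUIT_VERT = R"(
    uniform sampler2D normalTex;
    uniform float fatness;   // adjusted at runtime from the app

    void main() {
        gl_TexCoord[0] = gl_MultiTexCoord0;
        // unpack the stored normal from [0,1] back to [-1,1]
        vec3 n = texture2D( normalTex, gl_MultiTexCoord0.st ).rgb * 2.0 - 1.0;
        // push the vertex outward along its normal
        vec4 displaced = gl_Vertex + vec4( n * fatness, 0.0 );
        gl_Position = gl_ModelViewProjectionMatrix * displaced;
    }
)";
```

On the C++ side, adjusting the effect is nothing more than a mShader.uniform( "fatness", mFatness ) call tied to a key or mouse handler.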
All that's left is to add back in the original color values and voilà: instant fatsuit and extremely creepy late-night distraction. To push the creepy vibe, I only draw the frags that are within a certain distance of the camera; background frags get discarded entirely. This ends up being extra useful because the depth image from the Kinect has a bit of parallax shadowing on the side, where the depth data goes to black. Not drawing these unwanted artifacts cleans up the final image quite a bit.
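The cutoff itself is basically a one-liner in the fragment shader. I'm assuming here that nearer pixels read brighter in the depth texture, so anything below the threshold counts as background (the threshold uniform is made up):

```cpp
// Sketch of the depth-cutoff fragment shader (as a GLSL string). Frags
// beyond the cutoff are discarded, which also trims the black parallax
// shadows at the silhouette edge.
const char *CUTOFF_FRAG = R"(
    uniform sampler2D colorTex;
    uniform sampler2D depthTex;
    uniform float depthCutoff;   // hypothetical: minimum brightness to keep

    void main() {
        float d = texture2D( depthTex, gl_TexCoord[0].st ).r;
        if( d < depthCutoff )
            discard;   // background and dropout pixels never get drawn
        gl_FragColor = texture2D( colorTex, gl_TexCoord[0].st );
    }
)";
```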
Since you have remapped the normals back onto a webcam image, it is a trivial matter to create a light source and dynamically change the lighting of your realtime webcam input. For the following video, I created a virtual swinging light source above my head.
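Relighting is then just a Lambert term per pixel, something like this sketch (names are mine, not the sample's):

```cpp
// Sketch of the relighting fragment shader (as a GLSL string): dot the
// per-pixel normal against a light direction that the app animates.
const char *LIGHT_FRAG = R"(
    uniform sampler2D colorTex;
    uniform sampler2D normalTex;
    uniform vec3 lightDir;   // updated each frame by the app

    void main() {
        vec3 n = texture2D( normalTex, gl_TexCoord[0].st ).rgb * 2.0 - 1.0;
        float diffuse = max( dot( normalize( n ), normalize( lightDir ) ), 0.0 );
        vec3 base = texture2D( colorTex, gl_TexCoord[0].st ).rgb;
        gl_FragColor = vec4( base * diffuse, 1.0 );
    }
)";
```

The swinging itself is nothing fancier than feeding sin and cos of the elapsed time into that lightDir uniform each frame.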
Oh, and this library also supports two Kinects at the same time. I haven’t figured out how to use/abuse this knowledge, but I am certainly going to try. Realtime morphing between two people seems like an interesting first go. Maybe I will try morphing myself into my cat. Hmmm….
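If you want to try it, grabbing both devices should look something like this (I'm assuming Kinect::Device takes a device index, so double-check against the block's source):

```cpp
// Hypothetical two-device setup: one Kinect instance per plugged-in sensor.
if( Kinect::getNumDevices() >= 2 ) {
    mKinectA = Kinect( Kinect::Device( 0 ) );
    mKinectB = Kinect( Kinect::Device( 1 ) );
}
```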