Adding new features and controlling a Kinect from a couch

Upon the release of the Kinect, Microsoft showed off its golden child as the beginnings of a revolution in user interface technology. The skeleton and motion detection promised a futuristic, hand-waving "Minority Report-style" interface where your entire body controls a computer. Reality hasn't exactly lived up to those expectations, but [Steve], along with his coworkers at Amulet Devices, has vastly improved the Kinect's skeleton recognition so people can use a Kinect sitting down.

One huge drawback to using the Kinect for a Minority Report-style UI in a home theater is that Microsoft's skeleton recognition doesn't work well when you're sitting down. Instead of relying on the skeleton recognition built into the Kinect, [Steve] rolled his own skeleton detection using Haar classifiers.

Detection of Haar-like features is used in many computer vision applications; it's a great, not-very-computationally-intensive way to detect faces and body positions with a simple camera. The classifiers do need to be trained first, and [Steve]'s software spent several days doing exactly that. The results were worth it, though: the Kinect now recognizes [Steve] waving his arm while he is lying down on the couch.
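
If you want to play with the same idea, OpenCV makes running a trained Haar cascade against a video feed almost trivial. Here's a minimal sketch; the cascade file name is a hypothetical stand-in for a classifier trained on seated poses (we haven't seen [Steve]'s classifier published in this form), but OpenCV ships face cascades you can drop in to try the loop as-is:

```python
# Minimal Haar cascade detection loop with OpenCV. The cascade file below is a
# hypothetical stand-in for a classifier trained on seated poses; substitute one
# of the face cascades that ship with OpenCV to try the pipeline unchanged.
import cv2

cascade = cv2.CascadeClassifier("seated_pose_cascade.xml")  # hypothetical file
if cascade.empty():
    raise SystemExit("cascade file not found or invalid")

cap = cv2.VideoCapture(0)  # any webcam; the Kinect's RGB stream works too
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale slides the trained classifier over the image at several
    # scales; integral images make each window test very cheap.
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in hits:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```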

Not content to stop there, [Steve] also added voice recognition to his Kinect home theater controller; a fitting addition, as his employer makes a voice recognition remote control. The recognition software seems to work very well, even with the wistful Scottish accent [Steve] has honed over a lifetime.

[Steve]'s employer is giving away their improved Kinect software, which works with both the Xbox and Windows Kinects. If you're ever going to do something with a Kinect that isn't provided by the SDKs and APIs we covered earlier today, this will surely be an invaluable resource.

You can check out [Steve]’s demo of the new Kinect software after the break.

Kinect for Windows Resources

The Kinect for Windows has been out for nearly two months, but the world has yet to see a decent guide for it. While the Xbox and Windows versions of the Kinect use basically the same hardware, there are subtle but important differences. Thanks to [Matthew Leone] and his awesome summary of developer resources, getting your Kinect project up and running is now a lot easier.

After getting the SDK from the Microsoft Kinect for Windows site, you might want to check out the Microsoft Programming Guide. The Windows Kinect can only be used with Visual Studio, but with that inflexibility come a few added features. Both versions of the Kinect have a microphone array that allows the direction of a sound source to be determined. The open source driver has very little support for audio input, but the official Microsoft version has all the APIs for audio capture, source localization, and speech recognition ready to go.
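
If you're wondering how a microphone array can tell where a sound is coming from, the usual trick is to estimate the time difference of arrival between microphone pairs. Below is a rough NumPy sketch of one classic method, GCC-PHAT. It illustrates the principle only, not Microsoft's actual implementation, and the microphone spacing in the demo is an arbitrary value:

```python
# Time-difference-of-arrival estimation with GCC-PHAT, a classic recipe for
# two-microphone source localization. This sketches the principle; it is not
# Microsoft's implementation, and the 0.15 m spacing below is arbitrary.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def gcc_phat(sig, ref, fs):
    """Delay (seconds) of sig relative to ref via phase-transform correlation."""
    n = len(sig) + len(ref)
    S = np.fft.rfft(sig, n=n)
    R = np.fft.rfft(ref, n=n)
    cross = S * np.conj(R)
    cross /= np.abs(cross) + 1e-12        # PHAT weighting: keep phase only
    cc = np.fft.irfft(cross, n=n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:                    # wrap to negative lags
        shift -= n
    return shift / fs

def direction_of_arrival(sig, ref, fs, mic_spacing):
    """Far-field angle (degrees) from the delay between two mics."""
    tdoa = gcc_phat(sig, ref, fs)
    sin_theta = np.clip(tdoa * SPEED_OF_SOUND / mic_spacing, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_theta))

if __name__ == "__main__":
    fs = 16000
    ref = np.random.randn(fs)
    sig = np.roll(ref, 5)                 # simulate a five-sample arrival delay
    print(direction_of_arrival(sig, ref, fs, mic_spacing=0.15))
```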

At $250, the Kinect for Windows is a fairly hefty investment. A used Xbox Kinect can be had for around $80, so we're pretty certain the hacker community is going to steer itself away from the Windows version. Still, if you're ever paid to develop something for the Kinect, you might want the friendly APIs and features not found in the Xbox version.

Giant pencil used as an Etch a Sketch stylus

The gang over at Waterloo Labs decided to add a team-building aspect to a plain old Etch a Sketch. Instead of just twisting the two knobs with your own mitts, they’re converting this giant pencil’s movements into Etch a Sketch art.

The challenge here is figuring out a reliable way to track the tip of the pencil as it moves through the air. You may have already guessed that they are using a Microsoft Kinect depth camera for this task. The Windows SDK for the device actually has a wrapper that helps it play nicely with LabVIEW, where the data is converted to position commands for the display.

On the Etch a Sketch side of things they've chosen the time-tested technique of adding gears and stepper motors to each of the toy's knobs. As you can see from the video after the break, the results are mixed. Judging from the CNC 'W' demo, there's room for improvement when it comes to the motor driver. We can't really tell if the Kinect data translation is working as intended or not. But we say load it up and bring it to a conference. We're sure it'll attract a lot of attention, just like this giant version did.
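
For the curious, here's a rough Python sketch of what the position-to-knobs translation might look like. The step scale and image bounds are made-up numbers for illustration, not values from the Waterloo Labs build:

```python
# Rough sketch of translating a tracked pencil-tip position into relative step
# counts for the two knob steppers. STEPS_PER_PIXEL and the image bounds are
# illustrative values, not taken from the Waterloo Labs build.
STEPS_PER_PIXEL = 4          # hypothetical: knob steps per pixel of travel
IMG_W, IMG_H = 640, 480      # Kinect image region being mapped to the screen

class EtchASketchDriver:
    def __init__(self):
        self.x_steps = 0     # current position of each knob, in steps
        self.y_steps = 0

    def move_to(self, px, py):
        """Turn a tracked pixel position into relative moves for each stepper."""
        px = min(max(px, 0), IMG_W)
        py = min(max(py, 0), IMG_H)
        tx, ty = px * STEPS_PER_PIXEL, py * STEPS_PER_PIXEL
        dx, dy = tx - self.x_steps, ty - self.y_steps
        self.x_steps, self.y_steps = tx, ty
        return dx, dy        # feed these to the horizontal and vertical drivers
```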

Very easy 3D scanning software with ReconstructMe

[Maxzillian] sent in a pretty amazing project he’s been beta testing called ReconstructMe. Even though this project is just the result of software developers getting bored at their job, there’s a lot of potential in the 3D scanning abilities of ReconstructMe.

ReconstructMe is a software interface that allows anyone to put a Kinect (or other 3D depth camera) in front of a scene and generate a 3D object on a computer as an .STL or .OBJ file. There are countless applications for this technology, such as scanning objects to duplicate with a 3D printer, or importing yourself into a video game.
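
We haven't dug into ReconstructMe's internals, but the basic building block of any depth-camera scanner is simple: back-project each depth pixel through the pinhole camera model to get a 3D point, then write the points out. Here's a minimal sketch using ballpark Xbox 360 Kinect intrinsics:

```python
# Back-projecting a depth image into a 3D point cloud with the pinhole model,
# then dumping it as OBJ vertices. The intrinsics are ballpark figures often
# quoted for the Xbox 360 Kinect depth camera; calibrate for real work.
import numpy as np

FX = FY = 570.0          # approximate focal length, in pixels
CX, CY = 319.5, 239.5    # principal point for a 640x480 depth image

def depth_to_points(depth_m):
    """depth_m: (480, 640) array of depths in meters; returns (N, 3) points."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    pts = np.stack([x, y, depth_m], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]            # drop pixels with no depth reading

def save_obj(points, path):
    """Write bare OBJ vertex lines, one of the two formats mentioned above."""
    with open(path, "w") as f:
        for x, y, z in points:
            f.write(f"v {x:.4f} {y:.4f} {z:.4f}\n")
```

Fusing many such clouds from different viewpoints into one clean mesh is the hard part; that's the job ReconstructMe takes on.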

There are a few downsides to ReconstructMe: the only 3D sensors supported are the Xbox 360 Kinect and the ASUS Xtion, and the Kinect for Windows isn't supported yet. Right now, ReconstructMe is limited to scanning objects that fit into a one-meter cube and can only be operated from the command line, but it looks like the ReconstructMe team is working on supporting larger scans.

While it’s not quite ready for prime time, ReconstructMe could serve as the basis for a few amazing 3D scanner builds. Check out the video demos after the break.

Building your own portable 3D camera

[Steven] needed to come up with a project for the Computer Vision course he was taking, so he decided to try building a portable 3D camera. His goal was to build a Kinect-like 3D scanner, though his solution is better suited for very detailed still scenes, while the Kinect performs shallow, less detailed scans of dynamic scenes.

The device uses a TI DLP Pico projector to display the structured light patterns, while a cheap VGA camera is tasked with taking snapshots of the scene being captured. The data is fed into a BeagleBoard, where OpenCV is used to create point clouds of the objects he is scanning. That data is then handed off to MeshLab, where the point clouds can be combined and tweaked to create the final 3D image.
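
We don't know exactly which patterns [Steven] projects, but a common choice for this kind of scanner is binary Gray-code stripes: project a stack of stripe images, then work out which projector column lit each camera pixel. Here's a NumPy sketch of that decoding step, as an illustration rather than a copy of [Steven]'s code:

```python
# Sketch of structured-light decoding for binary Gray-code stripe patterns.
# Given N grayscale camera frames (MSB pattern first) plus all-white and
# all-black reference frames, recover the projector column seen by each pixel.
# This illustrates the common Gray-code approach, not [Steven]'s exact code.
import numpy as np

def decode_gray_code(pattern_imgs, white_img, black_img):
    """Returns per-pixel projector column indices; unreliable pixels get -1."""
    threshold = (white_img.astype(float) + black_img.astype(float)) / 2.0
    bits = [img.astype(float) > threshold for img in pattern_imgs]

    # Gray code to binary: b[0] = g[0], b[i] = b[i-1] XOR g[i]
    binary = [bits[0]]
    for g in bits[1:]:
        binary.append(np.logical_xor(binary[-1], g))

    column = np.zeros(white_img.shape, dtype=int)
    for b in binary:                      # assemble bits, MSB first
        column = (column << 1) | b.astype(int)

    contrast = white_img.astype(float) - black_img.astype(float)
    column[contrast < 10] = -1            # shadowed/occluded pixels (8-bit data)
    return column
```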

As [Steven] points out, the resulting images are pretty impressive considering that his rig is completely portable and only uses an HVGA projector with a VGA camera. He says that someone using higher resolution equipment would certainly be able to generate fantastically detailed 3D images with ease.

Be sure to check out his page for more details on the project, as well as links to the code he uses to put these images together.

Microsoft shows off their transparent 3D desktop prototype

We think most would agree that the Microsoft Kinect is a miraculous piece of hardware. The availability of an affordable, high-quality depth camera has been the genesis of a myriad of hacks. And now it seems that type of data is making an intriguing 3D display possible.

What you see above is a 3D monitor concept that Microsoft developed. It starts off looking much like a tablet PC, but the screen can be lifted up toward the user, whose arms reach around it to get at the keyboard underneath. There is a depth camera that can see the user's hands and fingers, allowing manipulation of the virtual environment. But that's only part of the problem. You also need some way to align the user's eyes with what's on the screen. They seem to have solved that problem too, using another depth camera to track the location of the user's head. This means you can lean from one side to the other and the perspective of the virtual 3D desktop will change to preserve the apparent distance of each object.
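
The usual way to pull off this kind of head-coupled perspective (we're not claiming it's Microsoft's exact method) is to rebuild an off-axis, asymmetric view frustum from the tracked eye position every frame. Here's a sketch of that standard construction:

```python
# Standard "fishtank VR" construction: build an off-axis projection matrix from
# the tracked eye position relative to the screen. A sketch of the common
# technique, not Microsoft's actual code.
import numpy as np

def off_axis_frustum(eye, screen_w, screen_h, near, far):
    """eye: (x, y, z) in meters, origin at screen center, z > 0 toward viewer.
    Returns a 4x4 OpenGL-style projection matrix."""
    # Frustum edges on the near plane, scaled from screen edges by near/eye_z.
    left   = (-screen_w / 2 - eye[0]) * near / eye[2]
    right  = ( screen_w / 2 - eye[0]) * near / eye[2]
    bottom = (-screen_h / 2 - eye[1]) * near / eye[2]
    top    = ( screen_h / 2 - eye[1]) * near / eye[2]

    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)
    m[1, 2] = (top + bottom) / (top - bottom)
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1
    return m
```

Recompute the matrix every time the depth camera reports a new head position, and the virtual scene appears fixed in space behind the glass, which is exactly the leaning effect described above.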

Don't miss the show-and-tell video after the break. As long as there's only one viewer, this looks like a perfect glasses-free alternative to current 3D hardware offerings.