Sandbox topographical play gets a big resolution boost

Here’s another virtual-sandbox-meets-real-sandbox project. A team at UC Davis is behind this depth-mapped, digitally projected sandbox environment. The physical sandbox uses fine-grained sand, which serves nicely as a projection surface as well as a building medium. A Kinect depth camera hangs overhead, and an offset digital projector adds the virtual layer. As you dig or build up elevation in parts of the box, the depth camera picks up the change and the projected view updates to match in real time. As you can see after the break, this starts with topographical data, but it can also include enhancements like the water feature seen above.
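
The core loop is simple enough to sketch. The snippet below is a rough, uncalibrated take on the idea using the libfreenect Python bindings and OpenCV, not the team's actual code; the depth thresholds, contour spacing, and "sea level" are all placeholder values.

```python
import cv2
import numpy as np
import freenect  # libfreenect Python bindings

# Hand-tuned raw depth readings (not millimeters): the empty box floor and
# the tallest sand you expect. A real build would calibrate these.
SAND_FAR = 1000   # raw reading at the box floor
SAND_NEAR = 750   # raw reading at the highest sand peaks

def depth_to_topo(depth):
    # Taller sand returns a smaller depth value, so invert to get elevation.
    elev = np.clip(SAND_FAR - depth.astype(np.int32), 0, SAND_FAR - SAND_NEAR)
    elev8 = (255 * elev // (SAND_FAR - SAND_NEAR)).astype(np.uint8)
    topo = cv2.applyColorMap(elev8, cv2.COLORMAP_JET)
    topo[(elev8 % 16) == 0] = (0, 0, 0)   # crude contour lines
    topo[elev8 < 16] = (200, 80, 0)       # lowest ground becomes "water"
    return topo

cv2.namedWindow("projector", cv2.WINDOW_NORMAL)
cv2.setWindowProperty("projector", cv2.WND_PROP_FULLSCREEN, cv2.WINDOW_FULLSCREEN)
while True:
    depth, _ = freenect.sync_get_depth()   # raw 11-bit Kinect depth frame
    cv2.imshow("projector", depth_to_topo(depth))
    if cv2.waitKey(30) == 27:              # Esc quits
        break
```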

It’s a big step forward in resolution compared to the project from which the team took inspiration. We have already seen this concept used as an interactive game. But we wonder about the potential of using this to quickly generate natural environments for digital gameplay. Just build up your topography in sand, jump into the video game and make sure it’s got the attributes you want, then start adding in trees and structures.

Don’t miss the video demo embedded after the break.

ATtiny-powered Kinect fire cannons for dance FX

[Paul] is at it again with some Kinect-controlled fire poofers. You may remember [Paul]’s previous shenanigans with the gigantic hand-built hydraulic flame-sailed pirate ship. This time he is building a small flame poofer (possibly a series of poofers) for SOAK, a regional (unaffiliated) Burning Man-style festival in Oregon.

Anyone who remembers that build will recognize the brains of the new cannons: they are the pirate ship’s custom ATtiny boards, unceremoniously torn from their previous home and recycled for the new controller. This time, though, they have a Kinect! The build seems to function much like the evil genius simulator, simply using a height threshold to activate each cannon, but [Paul] has plans for the new system. This hardware test uses the closed-source OpenNI, but the setup will meet its full potential when it is reborn with SkelTrack, which was released just a few weeks ago. The cannons are going to go around a small single-person dance floor, presumably with the Kinect nearby.
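
Here’s a guess at how that threshold trigger might look in software, using the libfreenect Python bindings rather than [Paul]’s actual code; the serial port, baud rate, and one-byte fire command for the ATtiny board are invented for illustration.

```python
import time
import numpy as np
import freenect   # libfreenect Python bindings
import serial     # pyserial

TRIGGER_ROW = 120   # pixel row standing in for the height threshold
NEAR_LIMIT = 600    # raw depth units; anything closer is treated as the dancer
MIN_PIXELS = 200    # require a decent-sized blob, not sensor noise

poofer = serial.Serial("/dev/ttyUSB0", 9600)   # hypothetical link to the ATtiny board

while True:
    depth, _ = freenect.sync_get_depth()
    above_line = depth[:TRIGGER_ROW, :]          # everything above the threshold row
    dancer_pixels = np.count_nonzero(above_line < NEAR_LIMIT)
    if dancer_pixels > MIN_PIXELS:
        poofer.write(b"\x01")   # invented one-byte "fire" command
        time.sleep(1.0)         # crude lockout while the poofer recovers
```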

Check out the brief test video after the jump.

Multitouch table uses a Kinect for a 3D display

[Bastian] sent in a coffee table he built. This isn’t a place to set your drinks and copies of Make, though: it’s a multitouch table with a 3D display. Since no description can do this table justice, take a look at the video.

The build was inspired by the subject of this Hackaday post, where [programming4fun] was able to build a ‘holographic display’ using a regular 2D projector and a Kinect. Both builds work on the principle of redrawing the 3D space in relation to the user’s head: as [Bastian] moves his head around the coffee table, the Kinect tracks his location and shifts the three-dimensional grid of boxes in the opposite direction. It’s extremely clever, and looks to be a promising user interface.
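
For the curious, the math behind this head-coupled perspective trick boils down to building an asymmetric (off-axis) projection from the tracked head position. The sketch below is a generic illustration, not [Bastian]’s code; the table dimensions and head coordinates are made up.

```python
import numpy as np

TABLE_W, TABLE_H = 0.9, 0.6   # assumed physical display size in meters
NEAR, FAR = 0.05, 10.0        # clipping planes for the virtual scene

def off_axis_frustum(head):
    """OpenGL-style asymmetric frustum for an eye at (x, y, z) over a screen at z=0."""
    hx, hy, hz = head
    # Project the table edges onto the near plane as seen from the eye.
    left   = (-TABLE_W / 2 - hx) * NEAR / hz
    right  = ( TABLE_W / 2 - hx) * NEAR / hz
    bottom = (-TABLE_H / 2 - hy) * NEAR / hz
    top    = ( TABLE_H / 2 - hy) * NEAR / hz
    return np.array([
        [2 * NEAR / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * NEAR / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(FAR + NEAR) / (FAR - NEAR), -2 * FAR * NEAR / (FAR - NEAR)],
        [0, 0, -1, 0],
    ])

# Each frame: the head position (a placeholder here, in meters relative to the
# table center) would come from the Kinect; the scene is translated opposite
# the head and projected through the asymmetric frustum, so the parallax of
# the boxes matches the viewer's real motion.
head = np.array([0.15, 0.30, 0.70])
projection = off_axis_frustum(head)
view_translation = -head
```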

In addition to the Kinect, the coffee table uses a Microsoft Surface-like display; four infrared lasers are placed at the corners and detected with a camera next to the projector in the base.
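
The touch side can likewise be sketched in a few lines: the camera in the base just needs to pick out the bright spots where fingers scatter the laser light. This is a generic OpenCV illustration with an assumed camera index and threshold values, not the actual build’s code.

```python
import cv2

cap = cv2.VideoCapture(0)   # assumed index of the IR camera in the base

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (9, 9), 0)
    # Fingertips scattering the laser plane show up as bright blobs.
    _, mask = cv2.threshold(blur, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) > 30:   # ignore speckle
            x, y, w, h = cv2.boundingRect(c)
            touches.append((x + w // 2, y + h // 2))
    # `touches` now holds pixel coordinates to hand to the UI layer.
    cv2.imshow("touch mask", mask)
    if cv2.waitKey(1) == 27:
        break
```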

After the break you can see the demo video and a gallery of the images [Bastian] put up on the NUI Group forum.

Adding new features and controlling a Kinect from a couch

Upon the release of the Kinect, Microsoft showed off its golden child as the beginning of a revolution in user interface technology. The skeleton and motion detection promised a futuristic, hand-waving, “Minority Report”-style interface where your entire body controls a computer. Reality hasn’t exactly lived up to those expectations, but [Steve], along with his coworkers at Amulet Devices, has vastly improved the Kinect’s skeleton recognition so people can use a Kinect while sitting down.

One huge drawback to using the Kinect for a Minority Report-style UI in a home theater is that Microsoft’s skeleton recognition doesn’t work well when you’re sitting down. Instead of relying on the built-in skeleton recognition that comes with the Kinect, [Steve] rolled his own skeleton detection using Haar classifiers.

Detecting Haar-like features has been used in many computer vision applications; it’s a great, not-very-computationally-intensive way to detect faces and body positions with a simple camera. The classifiers do require training, and [Steve]’s software spent several days training itself. The results were worth it, though: the Kinect now recognizes [Steve] waving his arm while he is lying down on the couch.
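
[Steve]’s trained classifiers aren’t public, but the general shape of Haar-cascade detection with OpenCV looks like the sketch below; the “raised_arm.xml” cascade file is a stand-in for whatever his training produced.

```python
import cv2

# "raised_arm.xml" is a hypothetical trained cascade; OpenCV ships comparable
# pre-trained cascades for faces and upper bodies.
cascade = cv2.CascadeClassifier("raised_arm.xml")
cap = cv2.VideoCapture(0)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale slides the classifier over the image at several scales;
    # it is cheap enough to run on every frame.
    hits = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                    minSize=(60, 60))
    for (x, y, w, h) in hits:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("seated gesture test", frame)
    if cv2.waitKey(1) == 27:
        break
```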

Not content to stop there, [Steve] also threw voice recognition into his Kinect home theater controller; a fitting addition, as his employer makes a voice recognition remote control. The recognition software seems to work very well, even with the wistful Scottish accent [Steve] has honed over a lifetime.

[Steve]’s employer is giving away their improved Kinect software, which works with both the Xbox and Windows Kinects. If you’re ever going to do something with a Kinect that isn’t provided by the SDKs and APIs we covered earlier today, this will surely be an invaluable resource.

You can check out [Steve]’s demo of the new Kinect software after the break.

Kinect for Windows Resources

Despite the Kinect for Windows having been out for nearly two months, the world has yet to see a decent guide to it. While the Xbox and Windows versions of the Kinect use basically the same hardware, there are subtle but important differences. Thanks to [Matthew Leone] and his awesome summary of developer resources, getting your Kinect project up and running is now a lot easier.

After getting the SDK from the Microsoft Kinect for Windows site, you might want to check out the Microsoft Programming Guide. The Windows Kinect can only be used with Visual Studio, but with that inflexibility come a few added features. Both versions of the Kinect have a microphone array that allows for determining the direction of a sound source. The open-source driver has very little support for audio input, but the official Microsoft version has all the APIs for audio capture, source localization, and speech recognition ready to go.
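
To get a feel for what that source localization is doing under the hood, here’s a toy two-microphone version of the idea in Python; it is not the SDK’s actual interface, and the mic spacing is an assumption.

```python
import numpy as np

SAMPLE_RATE = 16000      # Hz; the Kinect array records 16-bit audio at 16 kHz
MIC_SPACING = 0.1        # meters between the two mics; an assumed value
SPEED_OF_SOUND = 343.0   # m/s

def source_angle(left, right):
    """Estimate the direction of arrival from two mono channels."""
    corr = np.correlate(left, right, mode="full")
    lag = np.argmax(corr) - (len(right) - 1)   # delay in samples
    delay = lag / SAMPLE_RATE
    # Clamp so timing jitter can't push us outside arcsin's domain.
    ratio = np.clip(delay * SPEED_OF_SOUND / MIC_SPACING, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))
```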

At $250, the Kinect for Windows is a fairly hefty investment. A used Xbox Kinect can be had for around $80, so we’re pretty certain the hacker community is going to steer itself away from the Windows version. Still, if you’re ever paid to develop something for the Kinect, you might want the friendly APIs and features not found in the Xbox version.