Kenneth M. Cruikshank
Kinect Projects

The Microsoft Kinect sensor is a powerful device designed for a game console, but it has become very popular in robotics. It has a depth sensor (500 - 4500 mm effective range, on a 512 x 424 grid), an HD (1080p) camera, an infrared camera, and a directional microphone array. The depth and color sensors can run at up to 30 frames per second. All of this lets the software do quite a bit of "cool" stuff (see ...). Microsoft also provides drivers and a Software Development Kit (SDK) that let you attach a Kinect sensor to your computer and do what you will with the various data streams. The SDK includes libraries for limb and face detection and other things that are useful if you are developing games and gesture-controlled software. Here we will look at some ways we can integrate the sensor into the classroom for hands-on labs. Other than the cost of the sensor and cables (~$200), all the software development tools used here are available for free.
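
As a taste of what the SDK gives you, here is a minimal sketch (my own code, not one of the SDK samples; the class and variable names are mine) that opens the default sensor and polls the depth stream. Every value that comes back is a distance in millimetres on the 512 x 424 grid.

    using System;
    using System.Threading;
    using Microsoft.Kinect;   // Kinect for Windows SDK 2.0

    class DepthPeek
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.GetDefault();
            sensor.Open();

            using (DepthFrameReader reader = sensor.DepthFrameSource.OpenReader())
            {
                ushort[] depth = null;
                while (!Console.KeyAvailable)              // press a key to stop
                {
                    using (DepthFrame frame = reader.AcquireLatestFrame())
                    {
                        if (frame != null)
                        {
                            FrameDescription fd = frame.FrameDescription;   // 512 x 424
                            if (depth == null) depth = new ushort[fd.Width * fd.Height];
                            frame.CopyFrameDataToArray(depth);   // millimetres per pixel
                            Console.WriteLine("Centre pixel: {0} mm",
                                depth[(fd.Height / 2) * fd.Width + fd.Width / 2]);
                        }
                    }
                    Thread.Sleep(33);                      // roughly the 30 fps frame rate
                }
            }
            sensor.Close();
        }
    }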

Some possible applications are:

  • Use the depth sensor to get quantitative data on a surface, or on how a surface is changing (e.g., the Augmented Reality Sandbox); a depth-to-height sketch follows this list
  • Monitor the motion of specific "image-identifiable" points
  • Use multiple sensors to detect and locate a noise source (e.g., "earthquake" trilateration).
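
For the first idea, a sandbox-style height map is mostly bookkeeping once you have the raw depth array. The hypothetical helper below assumes the sensor is mounted face-down above the table; sensorHeightMm is a value you would measure yourself, not something the SDK provides.

    using System;

    static class SandboxSketch
    {
        // Hypothetical helper: with the sensor looking straight down from
        // sensorHeightMm above the table, surface height is just the difference.
        // A depth value of 0 means the Kinect got no return for that pixel.
        public static float[] DepthToHeight(ushort[] depthMm, float sensorHeightMm)
        {
            float[] heightMm = new float[depthMm.Length];
            for (int i = 0; i < depthMm.Length; i++)
                heightMm[i] = depthMm[i] == 0 ? float.NaN
                                              : sensorHeightMm - depthMm[i];
            return heightMm;
        }
    }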

Hardware

For the material presented here, I am using the Kinect for XBox One (aka "Version 2.0") hardware (it is an upgrade over the original Kinect for XBox 360). To use it on a computer, it requires a dedicated USB 3.0 port from a specific chipset vendor (it may only work with Renesas- or Intel-based adapters) and a connector kit. A complete sensor and Windows connection kit costs about $200. Check your USB ports; you may need to add a new USB 3.0 adapter ($20-50). This makes the hardware relatively inexpensive, considering what you can do with it.
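
One quick way to test the port is to ask the SDK whether it can actually see the sensor. The small console sketch below does just that (the ten-second wait is an arbitrary choice, not an SDK requirement).

    using System;
    using System.Threading;
    using Microsoft.Kinect;

    class CheckKinect
    {
        static void Main()
        {
            KinectSensor sensor = KinectSensor.GetDefault();
            sensor.Open();

            // IsAvailable only becomes true once the driver has finished its
            // handshake with the sensor, which can take a few seconds.
            for (int i = 0; i < 10 && !sensor.IsAvailable; i++)
                Thread.Sleep(1000);

            Console.WriteLine(sensor.IsAvailable
                ? "Kinect found and ready."
                : "Kinect not detected - check the USB 3.0 adapter, cable, and driver.");

            sensor.Close();
        }
    }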

Software

Working with a Kinect is not necessarily for new programmers. To work through what I present here, you need to understand how to program a simple Windows application, and you should be comfortable with how numbers are represented in a computer and with how to work with vectors and arrays.

For the projects described here, you will need to gather a few tools; how many depends on which system you have decided to use. I work on a Windows-based computer, so I use the (free) Microsoft Visual Studio Community Edition as my IDE (Integrated Development Environment -- a fancy term for a text editor integrated with compilers) and the (free) Microsoft Kinect for Windows SDK. This is all I need to program Kinect applications for Windows. For some of the tasks, I use the open-source OpenCV (Computer Vision) package, which I access through Emgu CV. Emgu CV is a multi-platform implementation of OpenCV that includes C# wrappers, so I can write Windows programs in managed code (and mix my range of programming languages).

To develop the code samples here, I used the following:

  • Microsoft Kinect for XBox One with Kinect for Windows adapter
  • Microsoft Visual Studio Community Edition (Free): http://www.microsoft.com/VS
  • Microsoft Kinect SDK - a collection of objects for accessing and processing the sensor data: Kinect SDK 2.0
  • Emgu CV - an open-source library that provides "wrappers" for OpenCV (which it can install for you). This allows OpenCV to work on Windows, Linux, Android, Raspberry Pi, etc. It also allows the Kinect to be used with these other operating systems in case you cannot use the Microsoft Kinect SDK. Emgu CV Web site. The examples here use the stable 3.0.0 release. There were some significant changes in 3.0.0 from earlier versions, especially in how the contour function works (a sketch of the 3.0-style call follows this list). When you browse the web for code examples using Emgu CV, be aware of which version is being used.
  • OpenCV - an open-source library of "Computer Vision" tools (i.e., image-processing tools): OpenCV Web site (includes documentation and examples). If you use Emgu CV, it can install OpenCV for you.
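
Since the contour call is the part of Emgu CV that changed most in 3.0.0, here is a minimal sketch of the 3.0-style call: FindContours now fills a VectorOfVectorOfPoint rather than returning a contour chain. The class name and the choice of the External retrieval mode are my own assumptions for the example.

    using Emgu.CV;              // Emgu CV 3.0.0 wrapper classes
    using Emgu.CV.CvEnum;
    using Emgu.CV.Util;

    static class ContourSketch
    {
        // binaryImage is a thresholded (black/white) image, for example a depth
        // slice; the return value is simply the number of closed contours found.
        public static int CountContours(Mat binaryImage)
        {
            using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
            using (Mat hierarchy = new Mat())
            {
                CvInvoke.FindContours(binaryImage, contours, hierarchy,
                                      RetrType.External,
                                      ChainApproxMethod.ChainApproxSimple);
                return contours.Size;
            }
        }
    }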

Project Starters

The Kinect SDK comes with sample programs in several languages (as does Emgu CV). Here we will look at extending those to get the basic kinds of information we may want. Below are several examples. The first example, Contouring an Image, provides a basic framework for accessing the depth sensor and using OpenCV to contour the image. The next example looks for changes in the depth map. I will be working with C#. These programs could serve as starting points for multiple projects.
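
Before getting to the projects, here is a minimal sketch of the change-detection idea behind the second example: keep one depth frame as a baseline and count how many pixels have moved by more than some threshold. The helper name and the threshold parameter are mine, not part of the SDK.

    using System;

    static class ChangeSketch
    {
        // Compare the current depth frame against a stored baseline frame and
        // count pixels that differ by more than thresholdMm. A value of 0 means
        // the sensor got no depth return for that pixel, so those are skipped.
        public static int CountChangedPixels(ushort[] baseline, ushort[] current,
                                             int thresholdMm)
        {
            int changed = 0;
            for (int i = 0; i < baseline.Length; i++)
            {
                if (baseline[i] == 0 || current[i] == 0) continue;
                if (Math.Abs(baseline[i] - current[i]) > thresholdMm) changed++;
            }
            return changed;
        }
    }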

Project 1: Contouring an Image

 

 
