You saw it at Mix—in typical fashion, our mission was to build two Jellybean robots in three weeks for the Mix keynote; no pressure, right?—and now it's time to introduce Project Jellybean on Coding4Fun. So, here it is—the Kinect-drivable lounge chair! The lounge chair has omni-directional wheels, eight batteries, two motor controllers, and a frame made of extruded aluminum. Jellybean exists as a proof of concept of the crazy things that are possible with the http://research.microsoft.com/kinectsdk/ Kinect for Windows SDK, and the project also leverages the http://c4fkinect.codeplex.com/ Coding4Fun Kinect Toolkit to handle some of the more complex operations.

Before we get into the code, let me point out: THIS WILL WORK WITHOUT THE ROBOT. There is an application setting called IsMotorEnabled, and with this setting set to false, you can play with the user interface and see how we did all our Kinect-enabled goodness. :) The screenshot below is of me testing this puppy at my desk without any of the motors or relays connected.

<img title="clint" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/clint%5B2%5D.jpg" alt="Clint testing the chair at his desk" />

<h3>Overview</h3>

There are five projects in the C# solution, and Jellybean breaks down into four big parts:

- Hardware
- Robot Software
- Kinect Software
- User Interface

<img title="image" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/image%5B3%5D.png" alt="Solution structure" />

<h3>Hardware</h3>

A lot of the hardware is pretty straightforward and can be gleaned from the parts list and the wiring diagram. Larry Larsen has a http://channel9.msdn.com/Series/Show-Us-Your-Tech/Show-Us-Your-Tech-Inside-the-Jellybean video of me building out the robot and explaining some of the hardware, both during construction and at the actual event.

<blockquote>WARNING: The motors are extremely powerful, everything is very heavy, and there is a lot of power in the batteries. Be careful. The wheels easily catch on shoelaces, headphone cords, etc.</blockquote>

With other projects, such as the t-shirt cannon from last Mix, I had to disconnect a rather large number of wires to charge it, which risked short-circuiting the entire project. Jellybean, however, is wired to make charging it a lot easier. The solution below allows me to charge the robot by flipping four heavy-duty switches to the off position.
This wiring diagram is also included in the source code as a Visio file called "WiringDiagram.vsd" in the "Files" directory:

<img title="image" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/image%5B7%5D.png" alt="Wiring diagram, part 1" />

<img title="image" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/image%5B11%5D.png" alt="Wiring diagram, part 2" />

<h4>Wiring Up the Chair and Relay</h4>

I decided to pick a chair that was already electric and just tap into the existing switches, so I mimicked the chair's "stock" wiring. You'll have to alter this design depending on how your chair is set up.

<img title="IMG_1044" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/IMG_1044%5B5%5D.jpg" alt="Chair and relay wiring" />

<h4>Wiring, Wire Management, and Easy Access</h4>

Another lesson I learned from the cannon project was to make sure the wiring is nice and easy to get to, so the project doesn't have to be half disassembled when I want to reach an individual connection. To ensure a solid connection, every wire was crimped and soldered with ring connectors. I didn't want any chance of a wire coming loose. As you can see, the left and right wiring harnesses are pretty much exact clones of each other.

<img title="IMG_1074" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/IMG_1074%5B5%5D.jpg" alt="Wiring harnesses" />

<h4>Wheels</h4>

These are AndyMark 10" steel omni-directional wheels. A heads-up: you can mount them backwards, and if you do, the chair won't be able to rotate in place. Accordingly, your co-workers will mock you…trust me. What you want is for the wheels to form an O pattern, not an X. Here is a picture of improperly mounted wheels:

<img title="IMG_1073" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/IMG_1073%5B3%5D.jpg" alt="Improperly mounted wheels" />

<h3>Jellybean Object</h3>

The Jellybean object is what talks to the robotic platform, which lets us test the platform without the Kinect. The object only knows about two serial ports, which are connected to the motor controllers, and our trusty Phidget relay controller, which controls the footrest. The three methods called during operation are:

- CalculateSpeed
- Drive
- ToggleFootrest

<h4>How to Drive Sideways</h4>

Since driving an omni-directional armchair isn't exactly something someone does every day, I looked at how I'd drive it with an Xbox controller. The Y-axis is the throttle, and the X-axis is what I call the vector multiplier. The formula for this is surprisingly straightforward:

<pre class="brush: csharp">
private static double ThrottlesThroughVectorMultiplier(double throttle, double vectorMultiplier, bool isFrontMotor)
{
    return vectorMultiplier + ((isFrontMotor) ? throttle : -throttle);
}
</pre>
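To get a feel for what that mix produces, here's a quick standalone sketch (mine, not from the Jellybean source) that evaluates the formula at a few stick positions; how the raw values map to physical wheel direction ultimately depends on the motor wiring:

<pre class="brush: csharp">
// Standalone sketch (not from the Jellybean source): evaluates the
// throttle/vector-multiplier mix for the front and rear motors.
using System;

class MixDemo
{
    // Same formula as ThrottlesThroughVectorMultiplier above.
    static double Mix(double throttle, double vectorMultiplier, bool isFrontMotor)
    {
        return vectorMultiplier + (isFrontMotor ? throttle : -throttle);
    }

    static void Main()
    {
        // Throttle only (Y-axis): front and rear get equal and opposite raw values.
        Console.WriteLine("throttle only: front={0}, rear={1}", Mix(1.0, 0.0, true), Mix(1.0, 0.0, false)); // 1, -1

        // Vector multiplier only (X-axis, the sideways case): front and rear match.
        Console.WriteLine("vector only:   front={0}, rear={1}", Mix(0.0, 1.0, true), Mix(0.0, 1.0, false)); // 1, 1

        // Both at once: the two inputs simply add.
        Console.WriteLine("combined:      front={0}, rear={1}", Mix(0.5, 0.5, true), Mix(0.5, 0.5, false)); // 1, 0
    }
}
</pre>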
Since we're dealing with our hands, I also included a "dead" zone where the driver's hands can move but the motors won't react:

<pre class="brush: csharp">
private double AdjustValueForDeadzone(double value)
{
    // positive values
    if (value > 0)
    {
        // under the threshold: inside the deadzone, so no movement
        if (value < AllowedMovementArea)
            return 0;

        // Re-adjust the value back onto the 0-to-1 range.
        // Example: with a deadzone of .2, only .2 to 1 would produce movement,
        // which makes for jerky response. Shifting by the deadzone and scaling
        // by _negatedAllowedMovementArea maps .2 back to 0 and 1 back to 1,
        // so we keep a smooth 0-to-1 curve.
        value = (value - AllowedMovementArea) * _negatedAllowedMovementArea;
    }
    else // negative values
    {
        // under the threshold: inside the deadzone, so no movement
        if (value > -AllowedMovementArea)
            return 0;

        // same re-adjustment, mirrored for the -1 to 0 range
        value = (value + AllowedMovementArea) * _negatedAllowedMovementArea;
    }

    return value;
}
</pre>
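Here's a small hedged sketch of the deadzone in action; the AllowedMovementArea value and the rescale constant are assumptions on my part, chosen so that the deadzone edge maps to 0 and full deflection stays at 1:

<pre class="brush: csharp">
// Illustrative only: the deadzone size and rescale factor are assumed values,
// not the project's actual settings.
using System;

class DeadzoneDemo
{
    const double AllowedMovementArea = 0.2; // assumed deadzone size
    // assumed rescale factor: maps the live range (.2 to 1) back onto 0 to 1
    static readonly double RescaleFactor = 1.0 / (1.0 - AllowedMovementArea);

    // Same shape as AdjustValueForDeadzone above, condensed with Math.Abs.
    static double Deadzone(double value)
    {
        if (Math.Abs(value) < AllowedMovementArea)
            return 0;
        return (Math.Abs(value) - AllowedMovementArea) * RescaleFactor * Math.Sign(value);
    }

    static void Main()
    {
        foreach (var raw in new[] { 0.05, 0.2, 0.21, 0.6, 1.0, -0.5 })
            Console.WriteLine("raw {0,5:F2} -> adjusted {1,5:F2}", raw, Deadzone(raw));
        // 0.05 and 0.2 come back as 0; 1.0 stays 1.0; values between scale smoothly.
    }
}
</pre>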
<h3>Kinect for Windows SDK</h3>

Aww snap, we're finally here! Using the Coding4Fun.Kinect.WPF API with the Kinect for Windows SDK simplified the amount of heavy lifting I had to do. I have two core classes here, and one is just a simple wrapper around the SDK. From sensor.cs:

<pre class="brush: csharp">
public void Open()
{
    if (_isInit)
        Close();

    RuntimeOptions flags = 0;

    if (TrackSkeleton)
    {
        flags |= RuntimeOptions.UseDepthAndPlayerIndex;
        flags |= RuntimeOptions.UseSkeletalTracking;
    }
    else if (UseDepthCameraStream)
    {
        flags |= RuntimeOptions.UseDepth;
    }

    if (UseColorCameraStream)
    {
        flags |= RuntimeOptions.UseColor;
    }

    _runtime.Initialize(flags);

    // now open streams
    if (TrackSkeleton || UseDepthCameraStream)
    {
        var imageType = (TrackSkeleton) ? ImageType.DepthAndPlayerIndex : ImageType.Depth;
        _runtime.DepthStream.Open(ImageStreamType.Depth, 2, DepthResolution, imageType);
    }

    if (UseColorCameraStream)
    {
        _runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
    }

    _runtime.VideoFrameReady += RuntimeColorFrameReady;
    _runtime.DepthFrameReady += RuntimeDepthFrameReady;
    _runtime.SkeletonFrameReady += RuntimeSkeletonFrameReady;

    _isInit = true;
}
</pre>
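Here's how I'd imagine driving this wrapper; the property names come straight from the snippet above, but the class name, the object-initializer constructor, and the event hookup are my assumptions, not confirmed from the source:

<pre class="brush: csharp">
// Hypothetical usage of the sensor wrapper. TrackSkeleton, UseColorCameraStream,
// and Open() appear above; the class name and hookup are assumptions.
var sensor = new Sensor
{
    TrackSkeleton = true,        // also requests depth + player index
    UseColorCameraStream = true  // 640x480 color stream
};

// NuiDepth.cs (shown next) listens to the depth frames and raises its own
// FrameReady event with the processed bitmap and hand positions.
sensor.Open(); // initializes the runtime and opens the requested streams
</pre>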
The second class, NuiDepth.cs, is all about processing the data. Since the http://c4fkinect.codeplex.com/ Coding4Fun.Kinect.WPF toolkit handles the heavy lifting, the code is pretty straightforward! It's all housed in the DepthFrameReady event. From NuiDepth.cs:

<pre class="brush: csharp">
void _sensor_DepthFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    var imageWidth = e.ImageFrame.Image.Width;
    var imageHeight = e.ImageFrame.Image.Height;
    var imageHeightWithMargin = imageHeight - 50;

    var depthArray = e.ImageFrame.ToDepthArray();
    var rightHandOffset = imageWidth / 2;

    // split the frame down the middle: the left half tracks the left hand,
    // the right half tracks the right hand
    var leftHand = depthArray.GetMidpoint(imageWidth, imageHeight, 0, 0, rightHandOffset, imageHeightWithMargin, MinDistance);
    var rightHand = depthArray.GetMidpoint(imageWidth, imageHeight, rightHandOffset, 0, imageWidth, imageHeightWithMargin, MinDistance);

    leftHand.X *= _bitmapScale;
    leftHand.Y *= _bitmapScale;
    rightHand.X *= _bitmapScale;
    rightHand.Y *= _bitmapScale;

    var args = new FrameReadyEventArgs
    {
        DepthBitmap = depthArray.ToBitmapSource(imageWidth, imageHeight, MinDistance, Color.FromArgb(255, 255, 0, 0)),
        ImageBitmap = _colorImage,
        LeftHand = leftHand,
        RightHand = rightHand
    };

    FrameReady(this, args);
}
</pre>
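The glue between FrameReady and the Jellybean object isn't shown in the article, but conceptually the hand points become the controller axes from the driving section. Here's a hedged sketch where the normalization, the axis assignment, and the Drive signature are all my assumptions:

<pre class="brush: csharp">
// Hypothetical glue: turn a hand point (in scaled bitmap pixels) into the
// -1..1 axes described earlier. All names here except AdjustValueForDeadzone
// and FrameReadyEventArgs are assumptions.
void OnFrameReady(object sender, FrameReadyEventArgs args)
{
    // center on the display and normalize to -1..1 (assumed display size fields)
    double x = (args.RightHand.X - _displayWidth / 2.0) / (_displayWidth / 2.0);
    double y = (_displayHeight / 2.0 - args.RightHand.Y) / (_displayHeight / 2.0);

    // deadzone first, then hand the axes to the Jellybean object
    var throttle = AdjustValueForDeadzone(y);         // Y-axis = throttle
    var vectorMultiplier = AdjustValueForDeadzone(x); // X-axis = vector multiplier

    _jellybean.Drive(throttle, vectorMultiplier);     // assumed Drive signature
}
</pre>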
<h3>User Interface and NUI</h3>

Our user interface was designed by the fine folks over at http://352mediagroup.com 352 Media and implemented by Dan Fernandez and myself. From this interface, we can turn on the motors, honk a horn, and raise and lower the chair. We also have a visual for how fast we're going and how the program views our hands.

<img title="image" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/image%5B15%5D.png" alt="User interface" />

<h4>Why didn't we use skeleton tracking?</h4>

Well, we wanted to, and as you can see, we actually have it turned on. The issue is getting a skeleton lock when you're this close to the Kinect, which was unavoidable given how close we had to mount it. Accordingly, we decided to go with pure depth data. We leveraged the GetMidpoint and ToBitmapSource (with minimum distance) extensions from the Coding4Fun Kinect Toolkit to do the coloring and give us the hand positions on the screen.

<h3>Conclusion</h3>

Now you know how we pulled off Project Jellybean! If you want to try this out, the download link for the source code is at the top of the article. And if you build one and ask nicely, http://weblogs.asp.net/scottgu/ Scott Guthrie may ride it. :)

<img title="IMG_1071" src="http://files.channel9.msdn.com/wlwimages/1932b237046e4743a4e79e6800c0220f/IMG_1071%5B4%5D.jpg" alt="Scott Guthrie riding the chair" />

<h3>About The Author</h3>

Clint Rutkas runs http://coding4fun.com Coding4Fun and has built a few crazy projects in the past. Clint is part of the Channel 9 team at Microsoft and can be reached at clint.rutkas@microsoft.com or on Twitter at http://twitter.com/clintrutkas @clintrutkas. If you ever have a question, please reach out.