The aim of this project was to design and implement an embedded system that allows a robotic arm to be controlled through the user's body gestures, captured by a Kinect for Windows sensor and processed on an Intel Atom PC.
The equipment employed is as follows:
- Kinect for Windows sensor
- Lynx6 Robotic Arm
- Atom PC
At first we started off by coding the Kinect program in C# and the manipulator controller program in C++; we then decided to write both in C# to achieve a simpler, and possibly faster and more efficient, interface between the two parts.

Manipulation
The Lynx6 robotic platform was chosen to work on. The Lynx6 is a 6-degrees-of-freedom (DOF) robotic arm (also called a manipulator) designed and produced by Lynxmotion. The first step in using any robotic manipulator is to calibrate it. Through calibration, the joint angle limits and link lengths are precisely measured, the reachable workspace of the end-effector is worked out, and so on. One of the most important parts of the calibration is determining what command is required to take each servo to a desired angle. To achieve this, three sample angles are recorded for each joint servo, along with the corresponding servo commands that take the joint to those angles. These data are then used, through interpolation, to form a formula that maps any angle within a servo's working range to its corresponding servo command.
In this project, a robotic manipulator whose joints have different degrees of freedom than a human arm's must mimic a human arm's gestures, so the raw angular data obtained from the Kinect sensor required a great deal of modification and consideration before being communicated to the robotic arm.
To communicate servo commands to the joints, a serial link between the robotic arm and the Atom PC is used.
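The serial link can be sketched as below. This assumes an SSC-32-style servo controller (the board commonly bundled with the Lynx6), whose move command has the form `#<channel> P<pulse> T<time>`; the port name, baud rate, and values are illustrative, not the project's actual configuration.

```csharp
using System.IO.Ports;

class ArmLink
{
    // Port name and baud rate are placeholders; check the controller's manual.
    private readonly SerialPort port = new SerialPort("COM1", 115200);

    public void Open()
    {
        port.Open();
    }

    // Move the servo on channel `ch` to pulse width `pulseUs` (microseconds)
    // over `timeMs` milliseconds, using an SSC-32-style command string.
    public void MoveServo(int ch, int pulseUs, int timeMs)
    {
        port.Write($"#{ch} P{pulseUs} T{timeMs}\r");
    }
}
```

The pulse width here is what the calibration formula produces from a desired joint angle; the controller itself only understands pulses, not angles.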
|Lynx6 - a 6 DOF Robotic Manipulator by Lynxmotion Co.|
Microsoft's Kinect for Windows was chosen as our sensory platform for capturing human body gestures (most particularly right- or left-arm movements). To mimic the user's gestures and/or receive commands through their body movements, a series of measurements needed to be collected from the user. The Microsoft Kinect SDK provides a library of APIs, including particular support for skeletal detection. The Skeleton class in the Microsoft.Kinect library, together with its Joints and Position properties, was used to obtain the spatial positions of some of the 20 human body joints the SDK can detect in standing mode. A method was implemented in our code that takes the positions of three joints as input arguments and outputs the joint angle between them.

The collected angles are then passed to the robotic-manipulation section of the program for further processing, which produces the commands communicated to the servos at the robotic arm's joints. One example of this additional processing, applied to the angular data before it is sent over the serial port, is enforcing the constraints worked out during the full calibration. An arm, be it robotic or human, has many constraints: servo limits, limited degrees of freedom at a joint, limited end-effector orientation, and a constrained reachable workspace.
|Kinect for Windows - an RGB + Depth Image sensor by Microsoft Corp.|
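The three-joint angle method mentioned above can be sketched like this. A plain struct stands in for the SDK's joint position type; in the real code the three positions would come from `skeleton.Joints[JointType...].Position`. Names are illustrative.

```csharp
using System;

struct Point3
{
    public double X, Y, Z;
    public Point3(double x, double y, double z) { X = x; Y = y; Z = z; }
}

static class JointMath
{
    // Angle (degrees) at `mid` formed by the segments mid->a and mid->b,
    // computed from the dot product of the two segment vectors.
    public static double AngleAt(Point3 a, Point3 mid, Point3 b)
    {
        double ux = a.X - mid.X, uy = a.Y - mid.Y, uz = a.Z - mid.Z;
        double vx = b.X - mid.X, vy = b.Y - mid.Y, vz = b.Z - mid.Z;
        double dot = ux * vx + uy * vy + uz * vz;
        double mags = Math.Sqrt(ux * ux + uy * uy + uz * uz) *
                      Math.Sqrt(vx * vx + vy * vy + vz * vz);
        return Math.Acos(dot / mags) * 180.0 / Math.PI;
    }
}
```

Passing shoulder, elbow, and wrist positions gives the elbow angle: a fully extended arm reads close to 180 degrees, a right-angled elbow close to 90.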
Note: As this project is in early development, the code is not as streamlined as it could be; it will be refined in the coming weeks.
- Program.cs holds the main method.
- An object of the class arm is created. (This contains all the methods needed to move one or more servos to a specified angle.)
- Limits of the servo motors are set, as well as the home and sleep positions. (This is done in the arm constructor.)
- The robot moves to the sleep position and then to the home position to verify the link.
- The Kinect is detected and initialized. An event handler is attached that fires whenever a frame is grabbed.
- The specific joint positions are calculated (Kinect API).
- Processing is done to get the angles between certain joint positions.
- Angle commands are then translated (via the methods in the class arm.cs) into commands for specific servo motors and sent, one servo at a time.
- This repeats per frame.
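The steps above can be condensed into the following sketch, using the Kinect SDK 1.x skeletal API. The arm class's method names (`GoToSleep`, `GoHome`, and the angle-sending step) are illustrative stand-ins for the project's actual methods, and this is not the project's Program.cs verbatim.

```csharp
using System.Linq;
using Microsoft.Kinect;

class Program
{
    static void Main()
    {
        var robot = new arm();          // constructor sets servo limits, home/sleep poses
        robot.GoToSleep();              // illustrative names for the link-check moves
        robot.GoHome();

        // Find a connected sensor, enable skeletal tracking, hook the per-frame event.
        KinectSensor sensor = KinectSensor.KinectSensors
            .FirstOrDefault(s => s.Status == KinectStatus.Connected);
        sensor.SkeletonStream.Enable();
        sensor.SkeletonFrameReady += OnFrame;
        sensor.Start();
    }

    static void OnFrame(object sender, SkeletonFrameReadyEventArgs e)
    {
        using (SkeletonFrame frame = e.OpenSkeletonFrame())
        {
            if (frame == null) return;
            var skeletons = new Skeleton[frame.SkeletonArrayLength];
            frame.CopySkeletonDataTo(skeletons);
            Skeleton tracked = skeletons.FirstOrDefault(
                s => s.TrackingState == SkeletonTrackingState.Tracked);
            if (tracked == null) return;
            // e.g. compute the elbow angle from the ShoulderRight, ElbowRight,
            // and WristRight joint positions, then translate it to a servo
            // command and send it over the serial link; repeated per frame.
        }
    }
}
```

This structure mirrors the bullet list: setup and link verification happen once in Main, while angle extraction and servo commanding run inside the frame handler.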
A future goal is to implement inverse kinematics in our design, so that given a point's spatial coordinates, the robot's end-effector will automatically move to that point in the manipulator's base frame.
On top of the inverse kinematics, a trajectory-planning algorithm, such as the rapidly-exploring random tree (RRT) algorithm, can also be implemented to smooth the transitions.
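As a simplified illustration of the inverse-kinematics goal, the sketch below solves a planar 2-link arm: given link lengths and a target point in the base frame, it returns the two joint angles (one of the two elbow configurations) via the law of cosines. The full Lynx6 solution would extend this reasoning to all six joints; the names here are illustrative.

```csharp
using System;

static class PlanarIK
{
    // Joint angles (radians) placing the end-effector of a 2-link planar arm
    // with link lengths l1, l2 at (x, y); assumes the target is reachable.
    public static (double theta1, double theta2) Solve(
        double x, double y, double l1, double l2)
    {
        double d2 = x * x + y * y;                       // squared distance to target
        // Elbow angle from the law of cosines.
        double cos2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2);
        double theta2 = Math.Acos(cos2);                 // one of two mirror solutions
        // Shoulder angle: bearing to the target minus the offset due to link 2.
        double theta1 = Math.Atan2(y, x)
                      - Math.Atan2(l2 * Math.Sin(theta2),
                                   l1 + l2 * Math.Cos(theta2));
        return (theta1, theta2);
    }
}
```

A quick sanity check: with unit links and target (1, 1), the solver bends the elbow to 90 degrees, and forward kinematics lands back on the target.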
- Microsoft Kinect SDK Library:
- "A Mathematical Introduction to Robotic Manipulation":