Thursday, April 21, 2011

Robotic Pong Shooter

Team Members:
Michael Daeffler (daeffler@umich.edu)
Keegan Kinkade (kinkadek@umich.edu)

Description and Objectives

We wanted to take a robotic ping pong shooter and develop code that implemented a GUI and some form of player feedback, so the robot could play a game of root beer pong against a human and consistently win. The project was broken down into multiple steps:
1) develop a D-H table for the ping pong shooter
2) test the velocity of the ping pong ball leaving the barrel of the shooter
3) develop the inverse kinematics of the robot and ballistic trajectory to solve for joint angles
4) develop the feedback control equations for the EKF
5) independently test each algorithm
6) integrate the algorithms
7) develop a GUI such that almost anyone could play the game without any knowledge of what was occurring behind the GUI
8) test the entire package vs multiple people

Wednesday, April 20, 2011

Block Diagram

Initial Control Design Concept

Final Control Design:



Inverse Kinematics


We were first going to use the Jacobian and task-space control to solve for the joint angles; however, after a little research we discovered it would be rather simple to do the inverse kinematics directly. The figure below defines all of the variables used to solve the inverse kinematics.


The total height 'z' that the ball is launched from above the mouth of the cup is defined as:
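One plausible form, assuming the barrel pivots at a height h0 above the table, the barrel has length L, and the cup rims sit at a height hc (hypothetical symbols; the actual variables are defined in the figure above):

z = h0 + L·sin(θ2) − hc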

To calculate the first joint angle, θ1, which sets the correct bearing, we use the simple inverse tangent relationship shown below.
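Writing the selected cup's planar position as (xc, yc) and the shooter's estimated position as (xr, yr) (hypothetical symbols standing in for the figure's variables), the bearing is roughly:

θ1 = atan2(yc − yr, xc − xr)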

The range joint angle, θ2, is a little more difficult to calculate. We first calculated the angle of maximum range. This calculation uses the 'z' defined above, but since 'z' is itself a function of θ2, we performed an iterative search to determine the angle of maximum range. The variable 'v' is the velocity of the ball leaving the barrel; this value was determined experimentally.
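For a fixed launch height z, the textbook angle of maximum range from an elevated launch point is

θmax = atan( v / √(v² + 2·g·z) )

but because z shifts with θ2, this relationship has to be re-evaluated iteratively rather than applied once.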

We next calculated the range of the shot and the maximum range the shooter can achieve at the current launch height.
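Neglecting drag, the standard ballistic expressions for the range at elevation θ2 and the maximum achievable range from launch height z are:

R = (v·cos(θ2) / g)·( v·sin(θ2) + √( (v·sin(θ2))² + 2·g·z ) )

Rmax = (v / g)·√( v² + 2·g·z )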

If the required range exceeded the shooter's maximum range, the angle of maximum range was used to take the shot. If not, the equation below was used to calculate the angle required to make the shot.
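With d the horizontal distance to the selected cup, the standard angle-of-reach relation (again neglecting drag) gives the required elevation; the two roots correspond to the flat and the lofted trajectory:

θ2 = atan( ( v² ± √( v⁴ − g·(g·d² − 2·z·v²) ) ) / (g·d) )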

Before sending the angles to the servos, we removed some experimentally determined angle biases. The bearing bias was -4° and the range bias was 2°.
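A minimal MATLAB sketch of the whole calculation, using the hypothetical geometry symbols introduced above (pivot height h0, barrel length L, cup rim height hc) and a simple angle sweep in place of the closed-form range equation; this illustrates the approach rather than reproducing the project's actual code, and the sign convention on the bias removal is an assumption:

function [theta1, theta2] = aimShooter(cupXY, robotXY, v)
% Hypothetical geometry constants; the real values come from the figure
% and from measurements of the physical shooter.
g = 9.81; h0 = 0.25; L = 0.30; hc = 0.12;

% Bearing: simple inverse tangent to the selected cup, then remove the
% experimentally determined -4 degree bearing bias.
dx = cupXY(1) - robotXY(1);
dy = cupXY(2) - robotXY(2);
theta1 = atan2d(dy, dx) - (-4);

% Elevation: sweep candidate angles because the launch height z depends
% on theta2; keep the angle of maximum range as a fallback.
d = hypot(dx, dy);
best.range = -inf; best.angle = 45;
theta2 = NaN;
for th = 1:0.1:80                      % iterative search over degrees
    z = h0 + L*sind(th) - hc;          % launch height above the cup mouth
    r = (v*cosd(th)/g) * (v*sind(th) + sqrt((v*sind(th))^2 + 2*g*z));
    if r > best.range                  % track the angle of maximum range
        best.range = r; best.angle = th;
    end
    if isnan(theta2) && r >= d         % first angle whose range reaches d
        theta2 = th;
    end
end
if isnan(theta2)                       % target beyond max range: shoot max
    theta2 = best.angle;
end
theta2 = theta2 - 2;                   % remove the +2 degree range bias
end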


EKF Implementation

In order to allow the user to provide feedback to the system and correct for any modeling errors, we incorporated an Extended Kalman Filter. Essentially, the EKF tracks a virtual position of the robot within the XY-plane of the global reference frame. Each cup has its own reference frame, so a rigid body transform can be calculated between the location of a given cup and that of the virtual robot. Therefore, when the user inputs which cup they want the cannon to aim for, we can calculate the required ballistic joint angles from the estimated virtual position of the robot and the ground-truth location of the selected cup. Feedback occurs when the user reports where the ball actually landed. We calculate the difference between where the user says the ball landed and where it should have landed, providing an observation residual for the EKF to use in better estimating the virtual location of the robot. The following presents the formulation of our Extended Kalman Filter.



A. State Declaration


Let the state vector be S =

The state at the next time step is S’ =

This gives the following equation S' = f(S,W) with W =

B. Propagation

-Let S0 denote the state at the previous time step, and let the propagated estimate of the current state be as defined above. We let w1 = w2 = w3 ~ N(0,σ), giving the covariance of W as a diagonal matrix with each diagonal element set to σ, a parameter which is fine-tuned for our system and ensures that the variance in our estimate does not collapse to zero after multiple observations. We now get our state and covariance prediction prior to observation as follows:
State estimate using time propagation

Covariance Estimate

Note that the above propagation step is a constant linear function.
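Writing the propagated state estimate as S̄ and its covariance as Σ̄, with Σ0 the covariance from the previous step (notation introduced here for clarity), the identity motion model reduces the prediction to:

S̄ = S0

Σ̄ = Σ0 + Q,  where Q = cov(W) is the diagonal matrix with σ on each diagonal element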


C. Observation

-Our observation will simply be the user’s input of where the ball landed with respect to the global frame. We will use a rigid body transformation, labelled ‘T’ in Fig. 1, to relate this observation to the state of the robot. This gives rise to the following expected observation vector:


This is in the form z = h(S,V), where V = and V1 = V2 ~ N(0,σ2), representing the user's feedback with tuned white noise.

The above equation is used to calculate the expected observation, while the true observation is the difference between the robot's prior position and the user's input. The variables Xc and Yc in the above equation represent the ground-truth location of the cup the robot is attempting to shoot the ping pong ball into. Because the above equation is non-linear, we must use a first-order Taylor expansion to estimate the covariance of the observation. Thus, we need the Jacobian of h with respect to the state variables, which can be calculated as follows:

We can now formulate our Kalman update step in order to increase the accuracy of the ping pong launcher by increasing the accuracy in the estimate of the robot’s virtual location.

D. Kalman Update

-We can now update our estimates of both state and variance using the above observation in Kalman fashion as follows:

Innovation

Kalman Gain

Posteriori Estimate

Updated Covariance
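In standard EKF notation, with z the user-reported landing point, ẑ = h(S̄) the expected observation, H the Jacobian above, and R the covariance of V, these four quantities take their usual form:

ν = z − ẑ

K = Σ̄·Hᵀ·(H·Σ̄·Hᵀ + R)⁻¹

S = S̄ + K·ν

Σ = (I − K·H)·Σ̄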

-The updated state and covariance are then run through the propagation step, and a new observation step occurs after the user provides feedback. Therefore, we have defined our recursive Extended Kalman Filter used to predict the robot’s virtual location given user feedback, allowing for correction of any modeling errors within the overall system.
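A compact MATLAB sketch of one full propagate/observe/update cycle in the structure described above; the measurement function h is left as a function handle since its exact form (the rigid body transform T and the cup locations) lives in the project code, and a finite-difference Jacobian stands in for the analytic one derived above:

function [S, P] = ekf_step(S0, P0, z, h, Q, R)
% S0, P0 : previous state estimate and covariance
% z      : user-reported landing point (the observation)
% h      : measurement function handle, zhat = h(S)
% Q, R   : tuned process and observation noise covariances

% Propagation: the motion model is the identity (the robot never actually
% moves), so the predicted state is unchanged and the covariance simply
% grows by the process noise.
S_pred = S0;
P_pred = P0 + Q;

% Observation: linearize h about the prediction (first-order Taylor
% expansion) using a simple finite-difference Jacobian.
n = numel(S_pred); m = numel(z);
H = zeros(m, n); d = 1e-6;
for i = 1:n
    dS = zeros(n, 1); dS(i) = d;
    H(:, i) = (h(S_pred + dS) - h(S_pred)) / d;
end

% Kalman update
nu = z - h(S_pred);             % innovation (observation residual)
Sc = H*P_pred*H' + R;           % innovation covariance
K  = P_pred*H' / Sc;            % Kalman gain
S  = S_pred + K*nu;             % a posteriori state estimate
P  = (eye(n) - K*H)*P_pred;     % updated covariance
end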

Graphical User Interface

The following provides an overview of the graphical user interface developed for the system, as well as the general flow of a game of robotic pong. The user begins with the following display, which asks which cup they would like the cannon to attempt to shoot the ping pong ball into:

After the user has selected one of the available cups, the required joint angles for the cannon are calculated, and the trajectory to the desired cup is displayed below the current field configuration. The user is then asked whether they would like to take the shot, reset the current estimate of the robot's virtual location (in case they have given poor feedback, resulting in a poor estimate), or select a different cup:

If the user chooses to reset the feedback, the estimated virtual position of the robot is aligned with the origin of the global coordinate frame, and the GUI returns to the ‘Choose Cup View’ as seen in Fig. 1. The user is also returned to the ‘Choose Cup View’ if they select the option to choose a different cup to aim at. If the user chooses to take the shot, the robot heads to the correct joint angles (calculated from inverse kinematics using the estimated virtual robot location, ground truth cup locations, and simple ballistics - see above section on Inverse Kinematics for more information) and launches the ball. The following is then displayed:

The user is now able to select which cup they made, if any. Let us assume that the user chose to shoot at Cup 1, had the robot take the shot, and the ball actually landed in Cup 2. The user is then asked if they would like to provide feedback, while also being given the option to manually input the robot offset:


If the user decides not to provide feedback, they are returned to the 'Choose Cup View' with any cups that they made no longer available and crossed out on the GUI. When the user chooses to manually input the robot offset, a menu comes up allowing them to type in the virtual location of the robot, after which the user is again returned to the 'Choose Cup View'. However, if the user decides to provide feedback, they are given a display window of the current field configuration with a set of cross hairs with which they can click on the location the ball landed. After clicking on the location the ball landed, the user is given the option to continue the game, end the game, or add a cup back (in case the user accidentally removed a cup or failed to play by house rules):

After the user provides feedback, a new, updated estimate of the virtual location of the robot is calculated using the Extended Kalman Filter described in the section above. For our example, we continue the game and return to the 'Choose Cup View', which now appears without Cup 2 as an option since we declared that we hit it earlier:
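For a sense of how these screens chain together, here is a hypothetical skeleton of the game loop; the cup coordinates and menu text are made up, aimShooter and ekf_step refer to the sketches earlier in this post, and fireShot and measurementModel are placeholder names rather than functions from the project's actual GUI code:

% Example ground-truth cup locations (m) and game state.
cups  = [0.30 0.92; 0.00 0.92; -0.30 0.92];
alive = true(size(cups, 1), 1);              % cups still in play
S = zeros(2, 1);  P = 0.01 * eye(2);         % virtual robot estimate / covariance

while any(alive)
    % 'Choose Cup View': pick one of the remaining cups.
    liveIdx = find(alive);
    labels  = arrayfun(@(k) sprintf('Cup %d', k), liveIdx, 'UniformOutput', false);
    target  = cups(liveIdx(menu('Choose a cup:', labels)), :);

    % Compute joint angles from the estimated robot position, then confirm.
    [th1, th2] = aimShooter(target, S(1:2)', 5.0);   % 5.0 m/s muzzle velocity (placeholder)
    if menu('Ready?', 'Take the shot', 'Choose a different cup') ~= 1
        continue
    end
    fireShot(th1, th2);                      % send the angles to the Arduino

    % Optional feedback: the user clicks where the ball actually landed.
    if menu('Provide feedback?', 'Yes', 'No') == 1
        figure; axis([-0.6 0.6 0 1.5]); grid on
        title('Click where the ball landed');
        landing = ginput(1)';                % observation for the EKF
        [S, P] = ekf_step(S, P, landing, @(s) measurementModel(s, target), ...
                          1e-4*eye(2), 1e-3*eye(2));
        close
    end

    % Record any cup the user reports as made.
    made = menu('Which cup did you make?', [labels, {'None'}]);
    if made <= numel(liveIdx), alive(liveIdx(made)) = false; end
end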







Results & Video of Testing

The video would not upload properly, but you can use the link below to view our fully integrated first test. The reloading portions were cut to reduce the file size and length; however, this is a single game with all shots taken included. I ran into the camera near the end, which is why the framing of the game changes for the last shot.







The shooter and the EKF position update worked better than either of us thought they would. We completed the first game in nine shots, hitting all six cups. Our EKF provided an estimated position for the shooter with respect to the origin. The center of the second row of cups in this game was placed 91.5 cm from the center of the shooter.

Code

All of the MATLAB and Arduino code has been uploaded to Google Docs; select the link below to view it:


You will also need the Arduino IDE if you do not already have it. It can be found on Arduino's website or just click the link below:



Tuesday, April 19, 2011

Final Exam Question

Question:

a) Derive the D-H table for the RR manipulator above. Assume h = 0.
b) Solve for the transformation matrix . (Note this is not the normal matrix )

Alternatively to simplify the problem, b) could be written as:
b) Solve for the transformation matrix , then explain how to derive from .
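As a reminder of the convention needed for part (a), each row (θi, di, ai, αi) of a D-H table corresponds to the standard link transform

Ai = Rotz(θi)·Transz(di)·Transx(ai)·Rotx(αi)

   = [ cos θi   −sin θi·cos αi    sin θi·sin αi    ai·cos θi
       sin θi    cos θi·cos αi   −cos θi·sin αi    ai·sin θi
       0         sin αi           cos αi           di
       0         0                0                1 ]

Any transform between two frames of the manipulator in part (b) is then a product (or inverse of a product) of these single-link matrices.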

Solution:












Tuesday, April 12, 2011

Project Title:
Member Names (with emails):

Brief Project Description:

Objectives:

Picture:

Model, including Jacobian:

Block Diagram:

Control Design:

Implementation Notes:

Results:
(MATLAB plots)

Discussion: