ME567W11Team10
Thursday, April 21, 2011
Robotic Pong Shooter
Description and Objectives
Wednesday, April 20, 2011
Inverse Kinematics

To calculate the first joint angle, θ1, which sets the correct bearing, we use a simple inverse tangent relationship, shown below.
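This bearing calculation can be sketched in Python; the function and parameter names are illustrative, and we assume the cup's position is expressed in the same global XY frame as the robot:

```python
import math

def bearing_angle(x_cup, y_cup, x_robot=0.0, y_robot=0.0):
    # theta1: bearing to the cup via the inverse tangent (atan2) of the
    # cup's XY offset from the robot.
    return math.atan2(y_cup - y_robot, x_cup - x_robot)
```

Using `atan2` rather than a plain `atan` keeps the bearing correct in all four quadrants.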
The range joint angle, θ2, is more difficult to calculate. We first calculated the angle of maximum range. This calculation uses ‘z’, calculated above, but since ‘z’ is itself a function of θ2, we performed an iterative search to determine the angle of maximum range. The variable ‘v’ is the velocity of the ball leaving the barrel; this value was determined experimentally.
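The iterative search can be sketched as follows. The barrel geometry in `launch_height` is a hypothetical stand-in for the real z(θ2) relationship, and the muzzle speed passed in stands in for the experimentally measured value:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_height(theta2, pivot_height=0.2, barrel_length=0.3):
    # Hypothetical geometry: the barrel tip's height z depends on theta2.
    return pivot_height + barrel_length * math.sin(theta2)

def shot_range(theta2, v):
    # Horizontal distance travelled when launching at speed v and angle
    # theta2 from height z(theta2), landing at ground level.
    z = launch_height(theta2)
    vx, vy = v * math.cos(theta2), v * math.sin(theta2)
    t_flight = (vy + math.sqrt(vy**2 + 2 * G * z)) / G
    return vx * t_flight

def angle_of_max_range(v, step_deg=0.1):
    # Brute-force iterative search: because z couples back into the range
    # equation, we simply evaluate the range over a grid of angles.
    best_theta, best_range = 0.0, -1.0
    theta = 0.0
    while theta <= math.pi / 2:
        r = shot_range(theta, v)
        if r > best_range:
            best_range, best_theta = r, theta
        theta += math.radians(step_deg)
    return best_theta
```

Because the launch point sits above the landing plane, the optimum lands slightly below the familiar 45°.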
We next calculated the range of the shot and the maximum range the shooter is able to achieve at the current height.
If the range exceeded the shooter’s maximum range, the angle of maximum range was used to take the shot. Otherwise, the equation below was used to calculate the angle required to make the shot.
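A hedged sketch of one standard closed form for this step, the “angle of reach”: given launch speed v, launch height z above the landing plane, and horizontal range R, the low-arc solution is θ2 = atan((v² − √(v⁴ − g(gR² − 2zv²))) / (gR)). The team’s actual equation may differ in convention:

```python
import math

G = 9.81  # m/s^2

def angle_for_range(R, v, z=0.0):
    # Low-arc launch angle to travel horizontal distance R at speed v,
    # landing a height z below the launch point (z = 0 is flat ground).
    disc = v**4 - G * (G * R**2 - 2 * z * v**2)
    if disc < -1e-9:
        raise ValueError("target beyond maximum range for this speed")
    return math.atan((v**2 - math.sqrt(max(disc, 0.0))) / (G * R))
```

As a sanity check, at the flat-ground maximum range R = v²/g the discriminant vanishes and the formula recovers a 45° launch angle.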
EKF Implementation
In order to allow the user to provide feedback to the system and correct for any modeling errors, we have incorporated an Extended Kalman Filter (EKF). Essentially, the EKF tracks a virtual position of the robot within the XY-plane of the global reference frame. Each cup has its own reference frame, so a rigid body transform can be calculated between the location of a given cup and that of the virtual robot. Therefore, when a user inputs which cup they want the cannon to aim for, we can calculate the required ballistic joint angles from the estimated virtual position of the robot and the ground truth location of the selected cup. Feedback occurs when the user reports where the ball actually landed. We calculate the difference between where the user says the ball landed and where the ball should have landed, providing an observation residual the EKF uses to better estimate the virtual location of the robot. The following presents the formulation of our Extended Kalman Filter.
A. State Declaration
B. Propagation
Covariance Estimate
Note that the above propagation step is a constant linear function.
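This constant propagation step can be sketched as follows, assuming the state is the robot's static virtual pose and Q is a small process-noise covariance (both names are illustrative):

```python
import numpy as np

def propagate(x, P, Q):
    # The robot does not actually move between shots, so the state estimate
    # passes through unchanged (an identity, i.e. constant linear, model);
    # only the process noise Q inflates the covariance.
    return x.copy(), P + Q
```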
C. Observation
Our observation will simply be the user’s input of where the ball landed with respect to the global frame. We use a rigid body transformation, labelled ‘T’ in Fig. 1, to relate this observation to the state of the robot. This gives rise to the following expected observation vector:
The above equation calculates the expected observation, while the true observation is the difference between the robot’s prior position and the user’s input. The variables Xc and Yc in the above equation represent the ground truth location of the cup the robot is attempting to shoot the ping pong ball into. Because the above equation is non-linear, we must use a first-order Taylor expansion to estimate the covariance of the observation; thus, we need the Jacobian of h with respect to the state variables. This can be calculated as follows:
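One plausible form of h and its Jacobian can be sketched as below, assuming the state is a planar pose (x_r, y_r, θ) and the transform T rotates and translates the cup's ground-truth location (Xc, Yc); the team's exact parameterization may differ:

```python
import numpy as np

def h(state, cup):
    # Expected observation: the cup's ground-truth location mapped through
    # the rigid body transform defined by the robot's virtual pose.
    xr, yr, th = state
    xc, yc = cup
    c, s = np.cos(th), np.sin(th)
    return np.array([xr + c * xc - s * yc,
                     yr + s * xc + c * yc])

def jacobian_h(state, cup):
    # First-order Taylor expansion of h: partial derivatives of each
    # observation component with respect to (x_r, y_r, theta).
    _, _, th = state
    xc, yc = cup
    c, s = np.cos(th), np.sin(th)
    return np.array([[1.0, 0.0, -s * xc - c * yc],
                     [0.0, 1.0,  c * xc - s * yc]])
```

The translation part of the transform contributes the identity block of the Jacobian; only the rotation makes h non-linear in the state.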
We can now formulate our Kalman update step in order to increase the accuracy of the ping pong launcher by increasing the accuracy in the estimate of the robot’s virtual location.
D. Kalman Update
We can now update our estimates of both state and covariance using the above observation in Kalman fashion as follows:
The updated state and covariance are then run through the propagation step, and a new observation step occurs after the user provides feedback. We have thus defined the recursive Extended Kalman Filter used to predict the robot’s virtual location given user feedback, allowing for correction of any modeling errors within the overall system.
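The update step can be sketched as follows, taking a precomputed Jacobian H, observation residual, and measurement-noise covariance R as inputs (all names illustrative):

```python
import numpy as np

def kalman_update(x, P, residual, H, R):
    # Standard EKF measurement update.
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ residual              # corrected state estimate
    P_new = (np.eye(len(x)) - K @ H) @ P  # corrected covariance
    return x_new, P_new
```

With equal prior and measurement covariances, the gain is 1/2 and the state moves halfway toward the observation, which matches the intuition that feedback and prior belief are weighted by their relative certainty.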
Graphical User Interface
The following provides an overview of the graphical user interface developed for the system, as well as the general flow of actions during a game of robotic pong. The user begins with the following display, which asks which cup they would like the cannon to attempt to shoot the ping pong ball into:
If the user chooses to reset the feedback, the estimated virtual position of the robot is aligned with the origin of the global coordinate frame, and the GUI returns to the ‘Choose Cup View’ as seen in Fig. 1. The user is also returned to the ‘Choose Cup View’ if they select the option to choose a different cup to aim at. If the user chooses to take the shot, the robot moves to the correct joint angles (calculated from inverse kinematics using the estimated virtual robot location, ground truth cup locations, and simple ballistics; see the above section on Inverse Kinematics for more information) and launches the ball. The following is then displayed:
The user is now able to select which cup they made, if any. Let us assume that the user chose to shoot at Cup 1, had the robot take the shot, and the ball actually landed in Cup 2. The user is then asked if they would like to provide feedback, while also being given the option to manually input the robot offset:
If the user decides not to provide feedback, they are returned to the ‘Choose Cup View’ with any cups that they made no longer available and crossed out on the GUI. When the user chooses to manually input the robot offset, a menu comes up allowing them to type in the virtual location of the robot, after which the user is returned to the ‘Choose Cup View’ as well. However, if the user decides to provide feedback, they are shown the current configuration of the field with a set of cross hairs, with which they can click on the location the ball landed. After clicking on that location, the user is given the option to continue the game, end the game, or add a cup back (in case the user accidentally removed a cup or failed to play by house rules):
After the user provides feedback, a new updated estimate of the virtual location of the robot is calculated using the Extended Kalman Filter described in the above section. For our given example, we continue the game and return to the ‘Choose Cup View’, which now appears without Cup 2 as an option since we reported making it earlier:

