This implementation of EKF Localization is forked from Edx/AMRx-Exercise-4
The EKF Localization algorithm is implemented for a differential drive robot equipped with a ring of rangefinders, operating in the environment shown in Fig.1. It is worth noticing that the sensory output is a set of 2D points on the plane defined by the rangefinder ring.
Fig.1 Robot in simulation environment
Since EKF Localization can only address the position tracking problem, the initial position of the robot and its uncertainty are assumed to be known. In addition, the map of the environment is also provided.
To fully define the robot pose in a 2D environment, the robot state vector is chosen to be its position $(x, y)$ and its heading orientation $\theta$ with respect to the global frame.

From the raw sensory output (a set of 2D points), several lines are extracted for localization; hence the map is the set of every line $m^j = (\alpha^j, r^j)$ in the environment.
The EKF Localization algorithm comprises 3 steps:
- Motion Prediction
- Measurement Prediction
- Estimation
Given the current pose of the robot $x_{t-1}$ and the control input $u_t$, the new robot pose $\hat{x}_t$ can be predicted by the motion model. In this example, the control input $u_t$ is the robot odometry: the displacements of the left and right wheels, $(\Delta s_l, \Delta s_r)$.

Using the kinematic model of a differential drive robot, the motion model is established as

$$\hat{x}_t = f(x_{t-1}, u_t) = x_{t-1} + \begin{bmatrix} \Delta s \cos(\theta_{t-1} + \Delta\theta/2) \\ \Delta s \sin(\theta_{t-1} + \Delta\theta/2) \\ \Delta\theta \end{bmatrix}, \qquad \Delta s = \frac{\Delta s_r + \Delta s_l}{2}, \quad \Delta\theta = \frac{\Delta s_r - \Delta s_l}{b}$$
Here, $b$ is the distance between the left and right wheels. Assume the covariance of the control input $u_t$ has the form of

$$Q_t = \begin{bmatrix} k\,|\Delta s_r| & 0 \\ 0 & k\,|\Delta s_l| \end{bmatrix}$$

Here, $k$ is a constant. Given $\hat{x}_t$ calculated by Eq.5 and the covariance of the current pose $P_{t-1}$, the covariance of the new pose predicted by the motion model is

$$\hat{P}_t = F_x P_{t-1} F_x^T + F_u Q_t F_u^T$$

In the equation above, $F_x$ and $F_u$ are respectively the Jacobians of $f$ with respect to the state $x_{t-1}$ and the control input $u_t$, evaluated at the value of $(x_{t-1}, u_t)$.
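The motion prediction step above can be sketched as follows. This is a minimal NumPy sketch under the equations of this section; the function and variable names are illustrative, not taken from the exercise code:

```python
import numpy as np

def motion_prediction(pose, P, ds_l, ds_r, b, k):
    """Predict the new pose and its covariance from wheel displacements.

    pose: current pose (x, y, theta); P: its 3x3 covariance;
    ds_l, ds_r: left/right wheel displacements; b: wheel base; k: noise constant.
    (Illustrative sketch -- names are not from the original code.)
    """
    x, y, theta = pose
    ds = (ds_r + ds_l) / 2.0       # translation of the robot center
    dtheta = (ds_r - ds_l) / b     # change of heading
    a = theta + dtheta / 2.0       # midpoint heading used by the model

    # Motion model f(x, u)
    pose_pred = np.array([x + ds * np.cos(a),
                          y + ds * np.sin(a),
                          theta + dtheta])

    # Jacobian of f w.r.t. the state (x, y, theta)
    F_x = np.array([[1.0, 0.0, -ds * np.sin(a)],
                    [0.0, 1.0,  ds * np.cos(a)],
                    [0.0, 0.0,  1.0]])

    # Jacobian of f w.r.t. the control input u = (ds_r, ds_l)
    F_u = np.array([
        [0.5 * np.cos(a) - ds / (2 * b) * np.sin(a),
         0.5 * np.cos(a) + ds / (2 * b) * np.sin(a)],
        [0.5 * np.sin(a) + ds / (2 * b) * np.cos(a),
         0.5 * np.sin(a) - ds / (2 * b) * np.cos(a)],
        [1.0 / b, -1.0 / b]])

    # Covariance of the control input, proportional to wheel travel
    Q = np.diag([k * abs(ds_r), k * abs(ds_l)])

    # Propagate the pose covariance
    P_pred = F_x @ P @ F_x.T + F_u @ Q @ F_u.T
    return pose_pred, P_pred
```

For a straight-line drive (equal wheel displacements from a zero pose), the prediction is a pure forward translation, which is a quick sanity check of the model.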
As mentioned in Sec.1, the robot has a ring of rangefinders; therefore the raw measurement is a set of 2D points. From this set, several lines are extracted to serve the process of feature-based localization with EKF. In this example, the extraction of line features from the raw measurement is already provided. At every time instance, the robot has access to a set of lines $z_t^i$ and their covariance matrices $R_t^i$, $i = 1, \dots, n$. Here, $n$ is the number of extracted line features. Each line $z_t^i$ is parameterized by two numbers, $\alpha_t^i$ and $r_t^i$.
The definition of these numbers is shown in Fig.2.
Fig.2 Parameterize a line
The measurement model predicts the obtained measurement given the robot position and the environment map. The key insight for establishing the model is that each feature of the map is expressed in the world frame $\{W\}$, while features obtained by the robot's sensors are expressed in the robot's body frame $\{R\}$. As a result, to predict the robot measurement, all of the environment's features need to be transformed from $\{W\}$ to $\{R\}$. Such a transformation formulates the measurement model

$$\hat{z}_t^j = h(m^j, \hat{x}_t) = \begin{bmatrix} \alpha^j - \hat{\theta}_t \\ r^j - (\hat{x}_t \cos\alpha^j + \hat{y}_t \sin\alpha^j) \end{bmatrix}$$

and is implemented below.
For later usage, the Jacobian of the measurement model above is derived

$$H^j = \frac{\partial h}{\partial \hat{x}_t} = \begin{bmatrix} 0 & 0 & -1 \\ -\cos\alpha^j & -\sin\alpha^j & 0 \end{bmatrix}$$
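A minimal NumPy sketch of the measurement model and its Jacobian for a single map line (names are illustrative, not from the exercise code):

```python
import numpy as np

def measurement_prediction(pose_pred, alpha, r):
    """Transform one map line (alpha, r) from the world frame into the
    robot frame, and return the Jacobian of the transform w.r.t. the pose.
    (Illustrative sketch of the line-feature measurement model.)
    """
    x, y, theta = pose_pred

    # Predicted line parameters as seen from the robot
    alpha_pred = alpha - theta
    r_pred = r - (x * np.cos(alpha) + y * np.sin(alpha))
    z_pred = np.array([alpha_pred, r_pred])

    # Jacobian of h w.r.t. the pose (x, y, theta)
    H = np.array([[0.0,           0.0,          -1.0],
                  [-np.cos(alpha), -np.sin(alpha), 0.0]])
    return z_pred, H
```

When the robot sits at the origin of the world frame with zero heading, the predicted line equals the map line, which is the expected degenerate case.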
At this point, the robot has obtained 2 sets of measurements: the actual measurements $z_t^i$ and the predicted measurements $\hat{z}_t^j$. For the Kalman Filter to operate, the correspondence between actual and predicted measurements needs to be established. To this end, the Mahalanobis distance between the actual measurement $z_t^i$ and the predicted measurement $\hat{z}_t^j$ is employed.

To calculate the Mahalanobis distance, the innovation vector is first computed

$$v_t^{ij} = z_t^i - \hat{z}_t^j$$

The covariance matrix of $v_t^{ij}$ is calculated by

$$\Sigma_{IN}^{ij} = H^j \hat{P}_t (H^j)^T + R_t^i$$

Here, $R_t^i$ is the covariance matrix of $z_t^i$. This matrix is inferred from the covariance associated with the noise of all 2D points that form $z_t^i$. The Mahalanobis distance is then

$$d^{ij} = (v_t^{ij})^T (\Sigma_{IN}^{ij})^{-1} v_t^{ij}$$
If this distance is smaller than a threshold, a correspondence is found. In case more than one predicted feature matches one actual feature, the predicted feature with the smallest distance is chosen.
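The matching procedure can be sketched as follows. This is a NumPy sketch; the gate value (the 95% chi-square quantile for 2 degrees of freedom) and all names are assumptions, not from the original code:

```python
import numpy as np

def match_features(z_actual, R_actual, z_pred, H_pred, P_pred, gate=5.99):
    """Associate each actual line with the closest predicted line by
    Mahalanobis distance. (Illustrative sketch; the gate threshold is
    an assumed validation value, not from the exercise.)

    z_actual: list of measured lines (alpha, r); R_actual: their 2x2 covariances;
    z_pred / H_pred: predicted lines and measurement Jacobians;
    P_pred: 3x3 predicted pose covariance.
    Returns a list of (i, j) index pairs.
    """
    matches = []
    for i, (z, R) in enumerate(zip(z_actual, R_actual)):
        best_j, best_d = None, gate
        for j, (zh, H) in enumerate(zip(z_pred, H_pred)):
            v = z - zh                                    # innovation
            v[0] = (v[0] + np.pi) % (2 * np.pi) - np.pi   # wrap the angle term
            S = H @ P_pred @ H.T + R                      # innovation covariance
            d = v @ np.linalg.solve(S, v)                 # Mahalanobis distance
            if d < best_d:                                # keep the closest match
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches
```

Using `np.linalg.solve` instead of explicitly inverting the 2x2 innovation covariance is the numerically safer way to evaluate the quadratic form.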
In this step, the matched measurements are used to correct the prediction of the motion model $\hat{x}_t$.

For all pairs of matched actual and predicted measurements, vertically concatenate the actual measurements into the vector $z_t$ and the predicted measurements into $\hat{z}_t$. The composite innovation vector $v_t$ is

$$v_t = z_t - \hat{z}_t$$

To calculate the covariance matrix of $v_t$ - $\Sigma_{IN}$, vertically concatenate all $H^j$ to make the composite Jacobian of the measurement model $H$; and put the covariance matrices of all actual measurements $R_t^i$ into a block diagonal matrix $R_t$. Similarly to the covariance matrix of a single pair of matched measurements, $\Sigma_{IN}$ is calculated by

$$\Sigma_{IN} = H \hat{P}_t H^T + R_t$$
Next, compute the Kalman gain

$$K_t = \hat{P}_t H^T \Sigma_{IN}^{-1}$$

The new estimate of the robot pose and its covariance matrix are

$$x_t = \hat{x}_t + K_t v_t, \qquad P_t = (I - K_t H)\,\hat{P}_t$$
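The estimation step can be sketched as follows (a NumPy sketch; names are illustrative, and angle wrapping of the innovation is omitted for brevity):

```python
import numpy as np

def ekf_update(pose_pred, P_pred, matches, z_actual, R_actual, z_pred, H_pred):
    """Correct the predicted pose with the matched line features.
    (Illustrative sketch of the composite EKF update.)"""
    if not matches:
        return pose_pred, P_pred

    # Stack innovations and Jacobians; build block-diagonal measurement covariance
    v = np.concatenate([z_actual[i] - z_pred[j] for i, j in matches])
    H = np.vstack([H_pred[j] for _, j in matches])
    R = np.zeros((2 * len(matches), 2 * len(matches)))
    for k, (i, _) in enumerate(matches):
        R[2 * k:2 * k + 2, 2 * k:2 * k + 2] = R_actual[i]

    S = H @ P_pred @ H.T + R             # composite innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    pose_new = pose_pred + K @ v
    P_new = (np.eye(3) - K @ H) @ P_pred
    return pose_new, P_new
```

With a single perfect match (zero innovation) the pose is unchanged while the covariance shrinks, which matches the expected behavior of the update.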
The comparison between EKF Localization and Odometry is shown below. In these figures, the ground truth and the estimate produced by either EKF or odometry are denoted by the grey robot and the yellow robot, respectively. It can be seen that while the odometry estimate diverges from the ground truth after some time, EKF Localization still manages to track the true state.
Fig.3 Odometry result
Fig.4 EKF Localization result