ekf-localization

1. Intro

This implementation of EKF Localization is forked from Edx/AMRx-Exercise-4

The EKF Localization algorithm is implemented for a differential drive robot equipped with a ring of rangefinders, operating in the environment shown in Fig.1. It is worth noting that the sensory output is a set of 2D points on the plane defined by the rangefinder ring.

Fig.1 Robot in simulation environment

Since EKF Localization can only address the position tracking problem, the initial position of the robot and its uncertainty are assumed to be known. In addition, the environment map is also provided.

To fully define the robot pose in a 2D environment, the robot state vector \textbf{x}_t is chosen to be its position (x_t, y_t) and its heading \theta_t with respect to the global frame.

\textbf{x}_t = \begin{bmatrix}x_t & y_t & \theta_t \end{bmatrix}^T

From the raw sensory output (a set of 2D points), several lines are extracted for localization; hence the map \textbf{M} is the set of all lines m_i (i = 0, \dots, n-1) in the environment.

\textbf{M} = \begin{Bmatrix}m_0, m_1, \dots, m_{n -1}\end{Bmatrix}

The EKF Localization algorithm comprises three steps:

  1. Motion Prediction
  2. Measurement Prediction
  3. Estimation

2. Motion Prediction

Given the current pose of the robot \textbf{x}_{t - 1} and the control input u_t, the new robot pose \textbf{x}_t can be predicted by the motion model. In this example, u_t is the robot odometry: the displacements of the left and right wheels.

u_t = \begin{bmatrix}\Delta s_l & \Delta s_r \end{bmatrix}^T

Using the kinematic model of differential drive robot, the motion model is established as

\hat{\textbf{x}}_t = f(\textbf{x}_{t-1}, u_t) = \textbf{x}_{t-1} + \begin{bmatrix} 
\frac{\Delta s_l + \Delta s_r}{2} \cdot \cos\left(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\
\frac{\Delta s_l + \Delta s_r}{2} \cdot \sin\left(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\
\frac{\Delta s_r - \Delta s_l}{b} 
\end{bmatrix}

Here, b is the distance between the left and right wheels. Assume the covariance of the control input u_t has the form

Q_t = \begin{bmatrix} k|\Delta s_l| & 0 \\ 0 & k|\Delta s_r| \end{bmatrix}

Here, k is a constant. Given Q_t calculated above and the covariance of the current pose P_{t-1}, the covariance of the new pose predicted by the motion model is

\hat{P}_t = F_x \cdot P_{t - 1} \cdot F_x^T + F_u \cdot Q_{t} \cdot F_u^T

In the equation above, F_{\textbf{x}} and F_u are respectively the Jacobians of f(\textbf{x}_{t-1}, u_t) with respect to \textbf{x}_{t-1} and u_t, evaluated at the current values of \textbf{x}_{t-1} and u_t.

F_{\textbf{x}} = I + \begin{bmatrix}
0 & 0 & -\frac{\Delta s_l + \Delta s_r}{2} \sin\left(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\ 
0 & 0 & \frac{\Delta s_l + \Delta s_r}{2} \cos\left(\theta_{t-1} + \frac{\Delta s_r - \Delta s_l}{2b}\right) \\
0 & 0 & 0
\end{bmatrix}

Let \Delta \theta = \left(\Delta s_r - \Delta s_l\right) / b

F_u = \frac{1}{2}\begin{bmatrix}
\cos\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) + \frac{\Delta s_l + \Delta s_r}{2b} \sin\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) &
\cos\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) - \frac{\Delta s_l + \Delta s_r}{2b} \sin\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) \\
\sin\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) - \frac{\Delta s_l + \Delta s_r}{2b} \cos\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) & 
\sin\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) + \frac{\Delta s_l + \Delta s_r}{2b} \cos\left(\theta_{t-1} + \frac{\Delta \theta}{2}\right) \\
-\frac{2}{b} & \frac{2}{b}
\end{bmatrix}
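The motion prediction step above can be sketched in NumPy as follows. This is an illustrative implementation, not the code from the original exercise; the function name and the choice to build Q_t inside the function are assumptions.

```python
import numpy as np

def motion_predict(x, P, u, b, k):
    """Predict the new pose and its covariance from odometry u = [ds_l, ds_r].

    x: pose [x, y, theta]; P: 3x3 pose covariance;
    b: distance between the wheels; k: odometry noise constant.
    """
    ds_l, ds_r = u
    ds = (ds_l + ds_r) / 2.0          # distance travelled by the robot's center
    dtheta = (ds_r - ds_l) / b        # change in heading
    a = x[2] + dtheta / 2.0           # heading halfway through the motion

    # Kinematic motion model f(x_{t-1}, u_t)
    x_hat = x + np.array([ds * np.cos(a), ds * np.sin(a), dtheta])

    # Jacobian F_x = df/dx, evaluated at (x_{t-1}, u_t)
    F_x = np.array([[1.0, 0.0, -ds * np.sin(a)],
                    [0.0, 1.0,  ds * np.cos(a)],
                    [0.0, 0.0,  1.0]])

    # Jacobian F_u = df/du, evaluated at (x_{t-1}, u_t)
    c, s = np.cos(a), np.sin(a)
    F_u = np.array([[0.5 * c + ds / (2 * b) * s, 0.5 * c - ds / (2 * b) * s],
                    [0.5 * s - ds / (2 * b) * c, 0.5 * s + ds / (2 * b) * c],
                    [-1.0 / b, 1.0 / b]])

    # Odometry noise grows with the travelled distance of each wheel
    Q = k * np.diag([abs(ds_l), abs(ds_r)])

    P_hat = F_x @ P @ F_x.T + F_u @ Q @ F_u.T
    return x_hat, P_hat
```

For a straight-line motion (equal wheel displacements) the heading is unchanged and the pose simply advances along the current heading, which is a quick sanity check for the model.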

3. Measurement Prediction

3.1 Measurement Model

As mentioned in Sec.1, the robot has a ring of rangefinders; therefore the raw measurement is a set of 2D points. From this set, several lines are extracted to serve the process of feature-based localization with the EKF. In this example, the extraction of line features from the raw measurement is already provided. At every time instant, the robot has access to a set of lines \textbf{Z}_t = \left\{z_t^i | i = 0, \dots, m-1\right\} and their covariance matrices R_t^i. Here m is the number of extracted line features. Each line z_t^i is parameterized by two numbers, \alpha_t^i and r_t^i.

z_t^i = \begin{bmatrix}\alpha_t^i & r_t^i\end{bmatrix}^T

The definition of these numbers is shown in Fig.2.


Fig.2 Parameterize a line

The measurement model predicts the obtained measurement given the robot pose and the environment map. The key insight for establishing the model is that each feature m^j = \begin{bmatrix}\alpha^j & r^j\end{bmatrix}^T of the map is expressed in the world frame \{\textbf{W}\}, while features obtained by the robot's sensors are expressed in the robot's body frame \{\textbf{R}\}. As a result, to predict the robot's measurement, all map features need to be transformed from \{\textbf{W}\} to \{\textbf{R}\}. This transformation constitutes the measurement model given below

\hat{z}_t^j = h^j(\hat{\textbf{x}}_t, m^j) = \begin{bmatrix}
\alpha^j - \hat{\theta}_t \\
r^j - \left(\hat{x}_t \cdot \cos(\alpha^j) + \hat{y}_t \cdot \sin(\alpha^j)\right)
\end{bmatrix}

For later usage, the Jacobian of the measurement model above is derived:

H_{\textbf{x}}^j = \begin{bmatrix}
0 & 0 & -1 \\
-\cos\left(\alpha^j\right) & -\sin\left(\alpha^j\right) & 0
\end{bmatrix}
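The measurement prediction for a single map line can be sketched as below; the function name is illustrative and the line is passed as an (alpha, r) pair.

```python
import numpy as np

def predict_measurement(x_hat, m_j):
    """Express the map line m_j = (alpha, r) in the robot frame.

    Returns the predicted measurement z_hat^j and the Jacobian H_x^j.
    """
    alpha, r = m_j
    px, py, theta = x_hat

    # Measurement model h^j(x_hat, m^j): rotate the line angle into the robot
    # frame and shift its distance by the projection of the robot position
    z_hat = np.array([alpha - theta,
                      r - (px * np.cos(alpha) + py * np.sin(alpha))])

    # Jacobian H_x^j of h^j with respect to the pose
    H = np.array([[0.0, 0.0, -1.0],
                  [-np.cos(alpha), -np.sin(alpha), 0.0]])
    return z_hat, H
```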

3.2 Measurement Association

At this point, the robot has obtained two sets of measurements: the actual measurements \textbf{Z}_t = \left\{z_t^i | i = 0, \dots, m-1\right\} and the predicted measurements \hat{\textbf{Z}}_t = \left\{\hat{z}^j_t | j = 0, \dots, n-1\right\}. For the Kalman Filter to operate, the correspondence between actual and predicted measurements needs to be established. To this end, the Mahalanobis distance between the actual measurement z_t^i and the predicted measurement \hat{z}_t^j is employed.

To calculate the Mahalanobis distance, the innovation vector v_t^{ij} is first computed

v_t^{ij} = z_t^i - \hat{z}_t^j

The covariance matrix of v_t^{ij} is calculated by

\Sigma_{IN_t}^{ij} = R_t^i + H_{\textbf{x}}^j \cdot \hat{P}_t \cdot \left(H_{\textbf{x}}^j\right)^T

Here, R_t^i is the covariance matrix of z_t^i. This matrix is inferred from the noise covariances of all the 2D points that form z_t^i.

The Mahalanobis distance d_t^{ij} is

d_t^{ij} = \left(v_t^{ij}\right)^T \cdot \left(\Sigma_{IN_t}^{ij}\right)^{-1} \cdot v_t^{ij}

If this distance is smaller than a threshold, a correspondence is found. If more than one predicted feature matches an actual feature, the predicted feature with the smallest distance is chosen.
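The association step above can be sketched as a nearest-neighbor search under the Mahalanobis gate. The function name and the default threshold value are illustrative assumptions, not taken from the original exercise.

```python
import numpy as np

def associate(Z, R, Z_hat, H_list, P_hat, gate=9.21):
    """Match each actual line to the closest predicted line.

    Z, R: actual lines and their 2x2 covariances;
    Z_hat, H_list: predicted lines and their Jacobians H_x^j;
    gate: validation threshold on the Mahalanobis distance (illustrative).
    Returns a list of (i, j) index pairs.
    """
    matches = []
    for i, (z, R_i) in enumerate(zip(Z, R)):
        best_j, best_d = None, gate
        for j, (z_hat, H_j) in enumerate(zip(Z_hat, H_list)):
            v = z - z_hat                     # innovation v_t^{ij}
            S = R_i + H_j @ P_hat @ H_j.T     # innovation covariance
            d = v @ np.linalg.solve(S, v)     # Mahalanobis distance d_t^{ij}
            if d < best_d:                    # keep the closest valid match
                best_j, best_d = j, d
        if best_j is not None:
            matches.append((i, best_j))
    return matches
```

Using `np.linalg.solve` instead of explicitly inverting the innovation covariance is a standard numerical choice for computing the quadratic form.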

4. Estimation

In this step, the matched measurements are used to correct the motion model's prediction \hat{\textbf{x}}_t, \hat{P}_t.

For all matched pairs of actual and predicted measurements, vertically concatenate the actual measurements into a vector z_t and the predicted measurements into \hat{z}_t. The composite innovation vector v_t is

v_t = z_t - \hat{z}_t

To calculate the covariance matrix \Sigma_{IN_t} of v_t, vertically concatenate all H_{\textbf{x}}^j to form the composite measurement Jacobian H_t, and put the covariance matrices R_t^i of all actual measurements into a block-diagonal matrix R_t. Analogously to the covariance of a single matched pair, \Sigma_{IN_t} is calculated by

\Sigma_{IN_t} = R_t + H_t \cdot \hat{P}_t \cdot H_t^T

Next, compute the Kalman gain

\textbf{K}_t = \hat{P}_t \cdot H_t^T \cdot \Sigma_{IN_t}^{-1}

The new estimate of the robot pose and its covariance matrix is

 \textbf{x}_t = \hat{\textbf{x}}_t + \textbf{K}_t \cdot v_t

 P_t = \left(I - \textbf{K}_t \cdot H_t \right) \cdot \hat{P}_t
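The estimation step can be sketched as a standard EKF correction; `v`, `H` and `R` are the stacked innovation vector, composite Jacobian and block-diagonal measurement covariance described above, and the function name is illustrative.

```python
import numpy as np

def update(x_hat, P_hat, v, H, R):
    """EKF correction from the stacked innovation v, Jacobian H, and noise R."""
    S = R + H @ P_hat @ H.T                     # composite innovation covariance
    K = P_hat @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x_hat + K @ v                           # corrected pose
    P = (np.eye(len(x_hat)) - K @ H) @ P_hat    # corrected covariance
    return x, P
```

With a zero innovation the pose is unchanged while the covariance still shrinks, reflecting the information gained from a confirming measurement.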

5. Result

The comparison between EKF Localization and odometry is shown below. In these figures, the ground truth is denoted by the grey robot and the estimate (by either EKF or odometry) by the yellow robot. It can be seen that while the odometry estimate diverges from the ground truth after some time, EKF Localization still manages to track the true state.


Fig.3 Odometry result


Fig.4 EKF Localization result