diff --git a/README.md b/README.md
new file mode 100644
index 0000000..530714f
--- /dev/null
+++ b/README.md
@@ -0,0 +1,176 @@
+## Writeup Template
+
+### You can use this file as a template for your writeup if you want to submit it as a markdown file, but feel free to use some other method and submit a pdf if you prefer.
+
+---
+
+**Advanced Lane Finding Project**
+
+The goals / steps of this project are the following:
+
+1. Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.
+2. Apply a distortion correction to raw images.
+3. Use color transforms, gradients, etc., to create a thresholded binary image.
+4. Apply a perspective transform to rectify the binary image ("birds-eye view").
+5. Detect lane pixels and fit to find the lane boundary.
+6. Determine the curvature of the lane and the vehicle position with respect to center.
+7. Warp the detected lane boundaries back onto the original image.
+8. Output a visual display of the lane boundaries and a numerical estimation of lane curvature and vehicle position.
+
+(I checked the [rubric](https://review.udacity.com/#!/rubrics/571/view) points.)
+
+---
+
+### Camera Calibration
+
+**Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.**
+
+The code for this step is contained in lines #10 through #44 of the file `undistort_img.py`.
+
+I start by preparing "object points", which will be the (x, y, z) coordinates of the chessboard corners in the world. Here I am assuming the chessboard is fixed on the (x, y) plane at z=0, such that the object points are the same for each calibration image. Thus, `objp` is just a replicated array of coordinates, and `objpoints` is appended with a copy of it every time I successfully detect all chessboard corners in a calibration image. `imgpoints` is appended with the (x, y) pixel position of each of the corners in the image plane for each successful chessboard detection.
+
+I then used the output `objpoints` and `imgpoints` to compute the camera calibration matrix and distortion coefficients using the `cv2.calibrateCamera()` function.
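+In condensed form, the calibration loop looks roughly like this (the full version is in `calibrate()` in `undistort_img.py`):
+
+```python
+# One (x, y, 0) object point per inner chessboard corner (9x6 board)
+objp = np.zeros((6 * 9, 3), np.float32)
+objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
+
+objpoints, imgpoints = [], []
+for img_path in glob('../camera_cal/calibration*.jpg'):
+    gray = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2GRAY)
+    ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
+    if ret:
+        objpoints.append(objp)
+        imgpoints.append(corners)
+
+# (1280, 720) is the size of the calibration images
+ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, (1280, 720), None, None)
+```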
+I applied this distortion correction to a test image using the `cv2.undistort()` function and obtained the following results:
+
+- Calibration: detected chessboard corners
+![calibration_chessboard_conners](./output_images/chessboard_conners/calibration2.jpg)
+- Calibration: undistorted original image
+![calibration_undistortion](./output_images/undistorted/calibration2.jpg)
+
+### Pipeline (single images)
+
+#### 1. Example of a distortion-corrected image.
+![undistortion of test images](./output_images/undistorted_test_images/both_test3.jpg)
+
+#### 2. I used color transforms and gradients to create a thresholded binary image.
+I used a combination of color and gradient thresholds to generate a binary image (thresholding steps at lines #9 through #76 in `gradient.py`). Here's an example of my output for this step.
+![binary output](./output_images/binary_test_images/binary_test3.jpg)
+
+- For color: I converted the original RGB image to HLS, then applied the threshold *thresh_s_channel* to the *S channel* only.
+- For gradients: I applied thresholds to the gradient along the *x* axis and the *y* axis, and to its *magnitude* and *direction*, then combined the gradient and color masks as sketched below.
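+The combination logic itself is short; roughly (condensed from `get_binary_img()` and its helpers in `gradient.py`):
+
+```python
+# Individual binary masks (each 0/1, same shape as the image)
+mask_gradx = abs_sobel_thresh(gray, orient='x', sobel_kernel=3, thresh=thresh_gradx)
+mask_grady = abs_sobel_thresh(gray, orient='y', sobel_kernel=3, thresh=thresh_grady)
+mask_mag = mag_thresh(gray, sobel_kernel=3, thresh=thresh_mag)
+mask_dir = dir_threshold(gray, sobel_kernel=15, thresh=thresh_dir)
+mask_s = get_binary_s_channel_img(img, thresh_s_channel)  # works on the RGB image
+
+# Keep a pixel if it passes both x- and y-gradient thresholds,
+# or both magnitude and direction thresholds, or the S-channel threshold.
+binary = np.zeros_like(mask_gradx)
+binary[((mask_gradx == 1) & (mask_grady == 1)) | ((mask_mag == 1) & (mask_dir == 1))] = 1
+binary = binary | mask_s
+```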
+I chose the thresholds below:
+```python
+thresh_gradx = (20, 100)
+thresh_grady = (20, 100)
+thresh_mag = (30, 100)
+thresh_dir = (0.7, 1.3)
+thresh_s_channel = (170, 255)
+```
+
+#### 3. I performed a perspective transform to a bird's-eye view.
+
+First, I computed the forward and inverse transformation matrices (`M`, `Minv`) with the `get_transform_matrix()` function (lines #10 through #14) in the file `perspective_transform.py`. The function takes the source (`src`) and destination (`dst`) points as inputs.
+
+The code for my perspective transform includes a function called `warped_birdview()`, which appears in lines #17 through #21 in the file `perspective_transform.py`. The `warped_birdview()` function takes as inputs an image (`img`) and the transformation matrix (`M`).
+
+I chose to hardcode the source and destination points in the following manner:
+
+```python
+src = np.float32(
+    [[(img_size[0] / 2) - 55, img_size[1] / 2 + 100],
+     [((img_size[0] / 6) - 10), img_size[1]],
+     [(img_size[0] * 5 / 6) + 60, img_size[1]],
+     [(img_size[0] / 2 + 55), img_size[1] / 2 + 100]])
+dst = np.float32(
+    [[(img_size[0] / 4), 0],
+     [(img_size[0] / 4), img_size[1]],
+     [(img_size[0] * 3 / 4), img_size[1]],
+     [(img_size[0] * 3 / 4), 0]])
+```
+
+This resulted in the following source and destination points:
+
+| Source        | Destination   |
+|:-------------:|:-------------:|
+| 585, 460      | 320, 0        |
+| 203, 720      | 320, 720      |
+| 1127, 720     | 960, 720      |
+| 695, 460      | 960, 0        |
+
+I verified that my perspective transform was working as expected by drawing the `src` and `dst` points onto a test image and its warped counterpart, and checking that the lane lines appear parallel in the warped image.
+![warped color](./output_images/warped_test_images/color_test3.jpg)
+
+An example of a binary warped image:
+![warped_img](./output_images/warped_test_images/binary_test3.jpg)
+
+#### 4. I identified lane-line pixels and fit their positions with a polynomial
+This step is done by the function `find_lane_sliding_window()` in `detect_lanelines.py`.
+
+##### 4.1. Find the initial lane positions by detecting peaks in a histogram
+I took a histogram along all the columns in the lower half of the image. The peaks in the left and right halves of the histogram are the starting points of the left lane and the right lane, respectively. This step is implemented in `find_lane_sliding_window()`, lines #25 to #28.
+
+##### 4.2. Sliding window
+Starting from the initial points of the two lane lines, I searched for the complete lane lines with the sliding-window method. This step is implemented in `find_lane_sliding_window()`, lines #30 to #102. There are several parameters in this step:
+- `nwindows`: number of windows along the y axis
+- `margin`: half-width of each window along the x axis
+- `minpix`: if the number of detected points in a window exceeds `minpix`, the next window is re-centered on their mean x position
+
+Once I had the two lists of points on the left and right lane lines, I fit a quadratic polynomial to each list using the `np.polyfit()` function.
+
+An example result of the sliding-window method:
+![detect lane lines](./output_images/detected_lane_test_images/ntest3.jpg)
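+At its core, the per-lane fit reduces to a second-order polynomial in y (condensed from `find_lane_sliding_window()`):
+
+```python
+# Fit x = A*y^2 + B*y + C to the pixels collected for each lane
+left_fit = np.polyfit(left_lane_y, left_lane_x, 2)
+right_fit = np.polyfit(right_lane_y, right_lane_x, 2)
+
+# Evaluate the fits over the full image height for plotting
+ploty = np.linspace(0, h - 1, h)
+left_fit_x = left_fit[0] * ploty ** 2 + left_fit[1] * ploty + left_fit[2]
+right_fit_x = right_fit[0] * ploty ** 2 + right_fit[1] * ploty + right_fit[2]
+```
+
+A second fit is kept in meter space, with y and x scaled by `ym_per_pix = 30 / 720` and `xm_per_pix = 3.7 / 700`, so that the curvature in the next step can be reported in real-world units.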
+#### 5. I calculated the radius of curvature of the lane lines and the position of the vehicle with respect to center.
+
+The position of the vehicle with respect to center is calculated in lines #16 through #24, in the function `calculate_distance_from_lane_center()` in `main.py`.
+
+The radius of curvature of the lane lines is computed in lines #48 through #52 of `utils.py`.
+
+#### 6. I warped the detected lane boundaries back onto the original image
+
+I implemented this step in lines #59 through #86 of `utils.py`, in the function `transform_to_the_road()`. Here is an example of my result on a test image:
+![on_road_img](./output_images/onroad_test_images/test3.jpg)
+
+---
+
+### Pipeline (video)
+
+1. For the first frame, I used the sliding-window search to find the two lane lines.
+
+2. For subsequent frames, I checked whether both lane lines were detected in the previous frame. If so, I searched for the lane lines around the fits from the previous frames; otherwise, I fell back to the sliding-window search.
+
+I used a buffer to store the fit parameters of the lane lines over the last 20 frames and averaged the buffer to obtain the fit parameters of the two lane lines. The code is in the `average_fit()` function in `utils.py` and in lines #145 to #147 of `detect_lanelines.py`.
+
+This step is implemented in `main.py`, lines #81 to #89.
+
+My final video output is at [https://youtu.be/mpKqJY0iSNM](https://youtu.be/mpKqJY0iSNM).
+
+---
+
+### Discussion
+
+#### 1. Briefly discuss any problems / issues you faced in your implementation of this project.
+- Fine-tuning the thresholds and other parameters to find a proper combination took a long time.
+
+#### 2. When does the algorithm fail?
+- The algorithm estimated the left line incorrectly in `project_video.mp4` from *0:23* to *0:24*. This error is mainly caused by a shadow on the road that blurs the yellow line.
+
+### Future work
+- Smooth the detected lane lines further.
+- Improve the algorithm to work well on the challenge videos.
\ No newline at end of file diff --git a/camera_cal/calibration1.jpg b/camera_cal/calibration1.jpg new file mode 100644 index 0000000..9704fd1 Binary files /dev/null and b/camera_cal/calibration1.jpg differ diff --git a/camera_cal/calibration10.jpg b/camera_cal/calibration10.jpg new file mode 100644 index 0000000..3eb285a Binary files /dev/null and b/camera_cal/calibration10.jpg differ diff --git a/camera_cal/calibration11.jpg b/camera_cal/calibration11.jpg new file mode 100644 index 0000000..26b2a43 Binary files /dev/null and b/camera_cal/calibration11.jpg differ diff --git a/camera_cal/calibration12.jpg b/camera_cal/calibration12.jpg new file mode 100644 index 0000000..2953bad Binary files /dev/null and b/camera_cal/calibration12.jpg differ diff --git a/camera_cal/calibration13.jpg b/camera_cal/calibration13.jpg new file mode 100644 index 0000000..257225d Binary files /dev/null and b/camera_cal/calibration13.jpg differ diff --git a/camera_cal/calibration14.jpg b/camera_cal/calibration14.jpg new file mode 100644 index 0000000..a989a01 Binary files /dev/null and b/camera_cal/calibration14.jpg differ diff --git a/camera_cal/calibration15.jpg b/camera_cal/calibration15.jpg new file mode 100644 index 0000000..a82947a Binary files /dev/null and b/camera_cal/calibration15.jpg differ diff --git a/camera_cal/calibration16.jpg b/camera_cal/calibration16.jpg new file mode 100644 index 0000000..39117e6 Binary files /dev/null and b/camera_cal/calibration16.jpg differ diff --git a/camera_cal/calibration17.jpg b/camera_cal/calibration17.jpg new file mode 100644 index 0000000..8924cb4 Binary files /dev/null and b/camera_cal/calibration17.jpg differ diff --git a/camera_cal/calibration18.jpg b/camera_cal/calibration18.jpg new file mode 100644 index 0000000..239b119 Binary files /dev/null and b/camera_cal/calibration18.jpg differ diff --git a/camera_cal/calibration19.jpg b/camera_cal/calibration19.jpg new file mode 100644 index 0000000..b63a234 Binary files /dev/null and b/camera_cal/calibration19.jpg differ diff --git a/camera_cal/calibration2.jpg b/camera_cal/calibration2.jpg new file mode 100644 index 0000000..967bf18 Binary files /dev/null and b/camera_cal/calibration2.jpg differ diff --git a/camera_cal/calibration20.jpg b/camera_cal/calibration20.jpg new file mode 100644 index 0000000..98cb886 Binary files /dev/null and b/camera_cal/calibration20.jpg differ diff --git a/camera_cal/calibration3.jpg b/camera_cal/calibration3.jpg new file mode 100644 index 0000000..9134531 Binary files /dev/null and b/camera_cal/calibration3.jpg differ diff --git a/camera_cal/calibration4.jpg b/camera_cal/calibration4.jpg new file mode 100644 index 0000000..7397e8e Binary files /dev/null and b/camera_cal/calibration4.jpg differ diff --git a/camera_cal/calibration5.jpg b/camera_cal/calibration5.jpg new file mode 100644 index 0000000..027cc40 Binary files /dev/null and b/camera_cal/calibration5.jpg differ diff --git a/camera_cal/calibration6.jpg b/camera_cal/calibration6.jpg new file mode 100644 index 0000000..6e71c0d Binary files /dev/null and b/camera_cal/calibration6.jpg differ diff --git a/camera_cal/calibration7.jpg b/camera_cal/calibration7.jpg new file mode 100644 index 0000000..49ecfce Binary files /dev/null and b/camera_cal/calibration7.jpg differ diff --git a/camera_cal/calibration8.jpg b/camera_cal/calibration8.jpg new file mode 100644 index 0000000..0c07fea Binary files /dev/null and b/camera_cal/calibration8.jpg differ diff --git a/camera_cal/calibration9.jpg b/camera_cal/calibration9.jpg new file mode 100644 
index 0000000..1071c7d Binary files /dev/null and b/camera_cal/calibration9.jpg differ diff --git a/challenge_video.mp4 b/challenge_video.mp4 new file mode 100644 index 0000000..3d84bbd Binary files /dev/null and b/challenge_video.mp4 differ diff --git a/examples/.ipynb_checkpoints/example-checkpoint.ipynb b/examples/.ipynb_checkpoints/example-checkpoint.ipynb new file mode 100644 index 0000000..59ce53d --- /dev/null +++ b/examples/.ipynb_checkpoints/example-checkpoint.ipynb @@ -0,0 +1,6 @@ +{ + "cells": [], + "metadata": {}, + "nbformat": 4, + "nbformat_minor": 1 +} diff --git a/examples/binary_combo_example.jpg b/examples/binary_combo_example.jpg new file mode 100644 index 0000000..20720bc Binary files /dev/null and b/examples/binary_combo_example.jpg differ diff --git a/examples/color_fit_lines.jpg b/examples/color_fit_lines.jpg new file mode 100644 index 0000000..8ffe00c Binary files /dev/null and b/examples/color_fit_lines.jpg differ diff --git a/examples/example.ipynb b/examples/example.ipynb new file mode 100644 index 0000000..9b24816 --- /dev/null +++ b/examples/example.ipynb @@ -0,0 +1,109 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Advanced Lane Finding Project\n", + "\n", + "The goals / steps of this project are the following:\n", + "\n", + "* Compute the camera calibration matrix and distortion coefficients given a set of chessboard images.\n", + "* Apply a distortion correction to raw images.\n", + "* Use color transforms, gradients, etc., to create a thresholded binary image.\n", + "* Apply a perspective transform to rectify binary image (\"birds-eye view\").\n", + "* Detect lane pixels and fit to find the lane boundary.\n", + "* Determine the curvature of the lane and vehicle position with respect to center.\n", + "* Warp the detected lane boundaries back onto the original image.\n", + "* Output visual display of the lane boundaries and numerical estimation of lane curvature and vehicle position.\n", + "\n", + "---\n", + "## First, I'll compute the camera calibration using chessboard images" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [ + "import numpy as np\n", + "import cv2\n", + "import glob\n", + "import matplotlib.pyplot as plt\n", + "%matplotlib qt\n", + "\n", + "# prepare object points, like (0,0,0), (1,0,0), (2,0,0) ....,(6,5,0)\n", + "objp = np.zeros((6*9,3), np.float32)\n", + "objp[:,:2] = np.mgrid[0:9,0:6].T.reshape(-1,2)\n", + "\n", + "# Arrays to store object points and image points from all the images.\n", + "objpoints = [] # 3d points in real world space\n", + "imgpoints = [] # 2d points in image plane.\n", + "\n", + "# Make a list of calibration images\n", + "images = glob.glob('../camera_cal/calibration*.jpg')\n", + "\n", + "# Step through the list and search for chessboard corners\n", + "for fname in images:\n", + " img = cv2.imread(fname)\n", + " gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)\n", + "\n", + " # Find the chessboard corners\n", + " ret, corners = cv2.findChessboardCorners(gray, (9,6),None)\n", + "\n", + " # If found, add object points, image points\n", + " if ret == True:\n", + " objpoints.append(objp)\n", + " imgpoints.append(corners)\n", + "\n", + " # Draw and display the corners\n", + " img = cv2.drawChessboardCorners(img, (9,6), corners, ret)\n", + " cv2.imshow('img',img)\n", + " cv2.waitKey(500)\n", + "\n", + "cv2.destroyAllWindows()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## 
And so on and so forth..." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "collapsed": true + }, + "outputs": [], + "source": [] + } + ], + "metadata": { + "anaconda-cloud": {}, + "kernelspec": { + "display_name": "Python [conda root]", + "language": "python", + "name": "conda-root-py" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.5.2" + } + }, + "nbformat": 4, + "nbformat_minor": 1 +} diff --git a/examples/example.py b/examples/example.py new file mode 100644 index 0000000..fd41128 --- /dev/null +++ b/examples/example.py @@ -0,0 +1,8 @@ +def warper(img, src, dst): + + # Compute and apply perpective transform + img_size = (img.shape[1], img.shape[0]) + M = cv2.getPerspectiveTransform(src, dst) + warped = cv2.warpPerspective(img, M, img_size, flags=cv2.INTER_NEAREST) # keep same size as input image + + return warped diff --git a/examples/example_output.jpg b/examples/example_output.jpg new file mode 100644 index 0000000..d47c8c2 Binary files /dev/null and b/examples/example_output.jpg differ diff --git a/examples/undistort_output.png b/examples/undistort_output.png new file mode 100644 index 0000000..77ef25e Binary files /dev/null and b/examples/undistort_output.png differ diff --git a/examples/warped_straight_lines.jpg b/examples/warped_straight_lines.jpg new file mode 100644 index 0000000..3d58bb3 Binary files /dev/null and b/examples/warped_straight_lines.jpg differ diff --git a/harder_challenge_video.mp4 b/harder_challenge_video.mp4 new file mode 100644 index 0000000..543f275 Binary files /dev/null and b/harder_challenge_video.mp4 differ diff --git a/output_images/binary_test_images/binary_straight_lines1.jpg b/output_images/binary_test_images/binary_straight_lines1.jpg new file mode 100644 index 0000000..bdc6c0c Binary files /dev/null and b/output_images/binary_test_images/binary_straight_lines1.jpg differ diff --git a/output_images/binary_test_images/binary_straight_lines2.jpg b/output_images/binary_test_images/binary_straight_lines2.jpg new file mode 100644 index 0000000..39fd4fc Binary files /dev/null and b/output_images/binary_test_images/binary_straight_lines2.jpg differ diff --git a/output_images/binary_test_images/binary_test1.jpg b/output_images/binary_test_images/binary_test1.jpg new file mode 100644 index 0000000..d4e4143 Binary files /dev/null and b/output_images/binary_test_images/binary_test1.jpg differ diff --git a/output_images/binary_test_images/binary_test2.jpg b/output_images/binary_test_images/binary_test2.jpg new file mode 100644 index 0000000..e80efe1 Binary files /dev/null and b/output_images/binary_test_images/binary_test2.jpg differ diff --git a/output_images/binary_test_images/binary_test3.jpg b/output_images/binary_test_images/binary_test3.jpg new file mode 100644 index 0000000..54c109a Binary files /dev/null and b/output_images/binary_test_images/binary_test3.jpg differ diff --git a/output_images/binary_test_images/binary_test4.jpg b/output_images/binary_test_images/binary_test4.jpg new file mode 100644 index 0000000..0d96012 Binary files /dev/null and b/output_images/binary_test_images/binary_test4.jpg differ diff --git a/output_images/binary_test_images/binary_test5.jpg b/output_images/binary_test_images/binary_test5.jpg new file mode 100644 index 0000000..42941cf Binary files /dev/null and 
b/output_images/binary_test_images/binary_test5.jpg differ diff --git a/output_images/binary_test_images/binary_test6.jpg b/output_images/binary_test_images/binary_test6.jpg new file mode 100644 index 0000000..23de3bb Binary files /dev/null and b/output_images/binary_test_images/binary_test6.jpg differ diff --git a/output_images/chessboard_conners/calibration10.jpg b/output_images/chessboard_conners/calibration10.jpg new file mode 100644 index 0000000..3e37652 Binary files /dev/null and b/output_images/chessboard_conners/calibration10.jpg differ diff --git a/output_images/chessboard_conners/calibration11.jpg b/output_images/chessboard_conners/calibration11.jpg new file mode 100644 index 0000000..681455f Binary files /dev/null and b/output_images/chessboard_conners/calibration11.jpg differ diff --git a/output_images/chessboard_conners/calibration12.jpg b/output_images/chessboard_conners/calibration12.jpg new file mode 100644 index 0000000..99ace3d Binary files /dev/null and b/output_images/chessboard_conners/calibration12.jpg differ diff --git a/output_images/chessboard_conners/calibration13.jpg b/output_images/chessboard_conners/calibration13.jpg new file mode 100644 index 0000000..b4322bc Binary files /dev/null and b/output_images/chessboard_conners/calibration13.jpg differ diff --git a/output_images/chessboard_conners/calibration14.jpg b/output_images/chessboard_conners/calibration14.jpg new file mode 100644 index 0000000..723bdb6 Binary files /dev/null and b/output_images/chessboard_conners/calibration14.jpg differ diff --git a/output_images/chessboard_conners/calibration15.jpg b/output_images/chessboard_conners/calibration15.jpg new file mode 100644 index 0000000..7078c69 Binary files /dev/null and b/output_images/chessboard_conners/calibration15.jpg differ diff --git a/output_images/chessboard_conners/calibration16.jpg b/output_images/chessboard_conners/calibration16.jpg new file mode 100644 index 0000000..115044b Binary files /dev/null and b/output_images/chessboard_conners/calibration16.jpg differ diff --git a/output_images/chessboard_conners/calibration17.jpg b/output_images/chessboard_conners/calibration17.jpg new file mode 100644 index 0000000..13f0272 Binary files /dev/null and b/output_images/chessboard_conners/calibration17.jpg differ diff --git a/output_images/chessboard_conners/calibration18.jpg b/output_images/chessboard_conners/calibration18.jpg new file mode 100644 index 0000000..61c0503 Binary files /dev/null and b/output_images/chessboard_conners/calibration18.jpg differ diff --git a/output_images/chessboard_conners/calibration19.jpg b/output_images/chessboard_conners/calibration19.jpg new file mode 100644 index 0000000..4d8cc1b Binary files /dev/null and b/output_images/chessboard_conners/calibration19.jpg differ diff --git a/output_images/chessboard_conners/calibration2.jpg b/output_images/chessboard_conners/calibration2.jpg new file mode 100644 index 0000000..37220bc Binary files /dev/null and b/output_images/chessboard_conners/calibration2.jpg differ diff --git a/output_images/chessboard_conners/calibration20.jpg b/output_images/chessboard_conners/calibration20.jpg new file mode 100644 index 0000000..d52dc73 Binary files /dev/null and b/output_images/chessboard_conners/calibration20.jpg differ diff --git a/output_images/chessboard_conners/calibration3.jpg b/output_images/chessboard_conners/calibration3.jpg new file mode 100644 index 0000000..c584244 Binary files /dev/null and b/output_images/chessboard_conners/calibration3.jpg differ diff --git 
a/output_images/chessboard_conners/calibration6.jpg b/output_images/chessboard_conners/calibration6.jpg new file mode 100644 index 0000000..b46454e Binary files /dev/null and b/output_images/chessboard_conners/calibration6.jpg differ diff --git a/output_images/chessboard_conners/calibration7.jpg b/output_images/chessboard_conners/calibration7.jpg new file mode 100644 index 0000000..cc7f6d7 Binary files /dev/null and b/output_images/chessboard_conners/calibration7.jpg differ diff --git a/output_images/chessboard_conners/calibration8.jpg b/output_images/chessboard_conners/calibration8.jpg new file mode 100644 index 0000000..93ef571 Binary files /dev/null and b/output_images/chessboard_conners/calibration8.jpg differ diff --git a/output_images/chessboard_conners/calibration9.jpg b/output_images/chessboard_conners/calibration9.jpg new file mode 100644 index 0000000..313fba8 Binary files /dev/null and b/output_images/chessboard_conners/calibration9.jpg differ diff --git a/output_images/detected_lane_test_images/nstraight_lines1.jpg b/output_images/detected_lane_test_images/nstraight_lines1.jpg new file mode 100644 index 0000000..edbc1ca Binary files /dev/null and b/output_images/detected_lane_test_images/nstraight_lines1.jpg differ diff --git a/output_images/detected_lane_test_images/nstraight_lines2.jpg b/output_images/detected_lane_test_images/nstraight_lines2.jpg new file mode 100644 index 0000000..00bf806 Binary files /dev/null and b/output_images/detected_lane_test_images/nstraight_lines2.jpg differ diff --git a/output_images/detected_lane_test_images/ntest1.jpg b/output_images/detected_lane_test_images/ntest1.jpg new file mode 100644 index 0000000..76dec75 Binary files /dev/null and b/output_images/detected_lane_test_images/ntest1.jpg differ diff --git a/output_images/detected_lane_test_images/ntest2.jpg b/output_images/detected_lane_test_images/ntest2.jpg new file mode 100644 index 0000000..fbddf71 Binary files /dev/null and b/output_images/detected_lane_test_images/ntest2.jpg differ diff --git a/output_images/detected_lane_test_images/ntest3.jpg b/output_images/detected_lane_test_images/ntest3.jpg new file mode 100644 index 0000000..a82f1b2 Binary files /dev/null and b/output_images/detected_lane_test_images/ntest3.jpg differ diff --git a/output_images/detected_lane_test_images/ntest4.jpg b/output_images/detected_lane_test_images/ntest4.jpg new file mode 100644 index 0000000..5313470 Binary files /dev/null and b/output_images/detected_lane_test_images/ntest4.jpg differ diff --git a/output_images/detected_lane_test_images/ntest5.jpg b/output_images/detected_lane_test_images/ntest5.jpg new file mode 100644 index 0000000..28c1493 Binary files /dev/null and b/output_images/detected_lane_test_images/ntest5.jpg differ diff --git a/output_images/detected_lane_test_images/ntest6.jpg b/output_images/detected_lane_test_images/ntest6.jpg new file mode 100644 index 0000000..d23b8cc Binary files /dev/null and b/output_images/detected_lane_test_images/ntest6.jpg differ diff --git a/output_images/onroad_test_images/straight_lines1.jpg b/output_images/onroad_test_images/straight_lines1.jpg new file mode 100644 index 0000000..a1922d3 Binary files /dev/null and b/output_images/onroad_test_images/straight_lines1.jpg differ diff --git a/output_images/onroad_test_images/straight_lines2.jpg b/output_images/onroad_test_images/straight_lines2.jpg new file mode 100644 index 0000000..31e8087 Binary files /dev/null and b/output_images/onroad_test_images/straight_lines2.jpg differ diff --git 
a/output_images/onroad_test_images/test1.jpg b/output_images/onroad_test_images/test1.jpg new file mode 100644 index 0000000..88f77bf Binary files /dev/null and b/output_images/onroad_test_images/test1.jpg differ diff --git a/output_images/onroad_test_images/test2.jpg b/output_images/onroad_test_images/test2.jpg new file mode 100644 index 0000000..f8877dc Binary files /dev/null and b/output_images/onroad_test_images/test2.jpg differ diff --git a/output_images/onroad_test_images/test3.jpg b/output_images/onroad_test_images/test3.jpg new file mode 100644 index 0000000..4d54b93 Binary files /dev/null and b/output_images/onroad_test_images/test3.jpg differ diff --git a/output_images/onroad_test_images/test4.jpg b/output_images/onroad_test_images/test4.jpg new file mode 100644 index 0000000..2dbe357 Binary files /dev/null and b/output_images/onroad_test_images/test4.jpg differ diff --git a/output_images/onroad_test_images/test5.jpg b/output_images/onroad_test_images/test5.jpg new file mode 100644 index 0000000..111819a Binary files /dev/null and b/output_images/onroad_test_images/test5.jpg differ diff --git a/output_images/onroad_test_images/test6.jpg b/output_images/onroad_test_images/test6.jpg new file mode 100644 index 0000000..4e44a71 Binary files /dev/null and b/output_images/onroad_test_images/test6.jpg differ diff --git a/output_images/undistorted/calibration1.jpg b/output_images/undistorted/calibration1.jpg new file mode 100644 index 0000000..2b6eb7d Binary files /dev/null and b/output_images/undistorted/calibration1.jpg differ diff --git a/output_images/undistorted/calibration10.jpg b/output_images/undistorted/calibration10.jpg new file mode 100644 index 0000000..07e57d7 Binary files /dev/null and b/output_images/undistorted/calibration10.jpg differ diff --git a/output_images/undistorted/calibration11.jpg b/output_images/undistorted/calibration11.jpg new file mode 100644 index 0000000..1b9d8e6 Binary files /dev/null and b/output_images/undistorted/calibration11.jpg differ diff --git a/output_images/undistorted/calibration12.jpg b/output_images/undistorted/calibration12.jpg new file mode 100644 index 0000000..934e1d4 Binary files /dev/null and b/output_images/undistorted/calibration12.jpg differ diff --git a/output_images/undistorted/calibration13.jpg b/output_images/undistorted/calibration13.jpg new file mode 100644 index 0000000..20a6cb0 Binary files /dev/null and b/output_images/undistorted/calibration13.jpg differ diff --git a/output_images/undistorted/calibration14.jpg b/output_images/undistorted/calibration14.jpg new file mode 100644 index 0000000..fa0bbac Binary files /dev/null and b/output_images/undistorted/calibration14.jpg differ diff --git a/output_images/undistorted/calibration15.jpg b/output_images/undistorted/calibration15.jpg new file mode 100644 index 0000000..957fba5 Binary files /dev/null and b/output_images/undistorted/calibration15.jpg differ diff --git a/output_images/undistorted/calibration16.jpg b/output_images/undistorted/calibration16.jpg new file mode 100644 index 0000000..2512c1c Binary files /dev/null and b/output_images/undistorted/calibration16.jpg differ diff --git a/output_images/undistorted/calibration17.jpg b/output_images/undistorted/calibration17.jpg new file mode 100644 index 0000000..fef57b4 Binary files /dev/null and b/output_images/undistorted/calibration17.jpg differ diff --git a/output_images/undistorted/calibration18.jpg b/output_images/undistorted/calibration18.jpg new file mode 100644 index 0000000..eb6b115 Binary files /dev/null and 
b/output_images/undistorted/calibration18.jpg differ diff --git a/output_images/undistorted/calibration19.jpg b/output_images/undistorted/calibration19.jpg new file mode 100644 index 0000000..c6324a9 Binary files /dev/null and b/output_images/undistorted/calibration19.jpg differ diff --git a/output_images/undistorted/calibration2.jpg b/output_images/undistorted/calibration2.jpg new file mode 100644 index 0000000..29bf9c0 Binary files /dev/null and b/output_images/undistorted/calibration2.jpg differ diff --git a/output_images/undistorted/calibration20.jpg b/output_images/undistorted/calibration20.jpg new file mode 100644 index 0000000..b43b524 Binary files /dev/null and b/output_images/undistorted/calibration20.jpg differ diff --git a/output_images/undistorted/calibration3.jpg b/output_images/undistorted/calibration3.jpg new file mode 100644 index 0000000..5b2d253 Binary files /dev/null and b/output_images/undistorted/calibration3.jpg differ diff --git a/output_images/undistorted/calibration4.jpg b/output_images/undistorted/calibration4.jpg new file mode 100644 index 0000000..61d44d0 Binary files /dev/null and b/output_images/undistorted/calibration4.jpg differ diff --git a/output_images/undistorted/calibration5.jpg b/output_images/undistorted/calibration5.jpg new file mode 100644 index 0000000..546f3ab Binary files /dev/null and b/output_images/undistorted/calibration5.jpg differ diff --git a/output_images/undistorted/calibration6.jpg b/output_images/undistorted/calibration6.jpg new file mode 100644 index 0000000..7067d4b Binary files /dev/null and b/output_images/undistorted/calibration6.jpg differ diff --git a/output_images/undistorted/calibration7.jpg b/output_images/undistorted/calibration7.jpg new file mode 100644 index 0000000..39fc42e Binary files /dev/null and b/output_images/undistorted/calibration7.jpg differ diff --git a/output_images/undistorted/calibration8.jpg b/output_images/undistorted/calibration8.jpg new file mode 100644 index 0000000..5086683 Binary files /dev/null and b/output_images/undistorted/calibration8.jpg differ diff --git a/output_images/undistorted/calibration9.jpg b/output_images/undistorted/calibration9.jpg new file mode 100644 index 0000000..61b9a08 Binary files /dev/null and b/output_images/undistorted/calibration9.jpg differ diff --git a/output_images/undistorted_test_images/both_straight_lines1.jpg b/output_images/undistorted_test_images/both_straight_lines1.jpg new file mode 100644 index 0000000..324a21c Binary files /dev/null and b/output_images/undistorted_test_images/both_straight_lines1.jpg differ diff --git a/output_images/undistorted_test_images/both_straight_lines2.jpg b/output_images/undistorted_test_images/both_straight_lines2.jpg new file mode 100644 index 0000000..8fa4634 Binary files /dev/null and b/output_images/undistorted_test_images/both_straight_lines2.jpg differ diff --git a/output_images/undistorted_test_images/both_test1.jpg b/output_images/undistorted_test_images/both_test1.jpg new file mode 100644 index 0000000..e80c24b Binary files /dev/null and b/output_images/undistorted_test_images/both_test1.jpg differ diff --git a/output_images/undistorted_test_images/both_test2.jpg b/output_images/undistorted_test_images/both_test2.jpg new file mode 100644 index 0000000..dfb134e Binary files /dev/null and b/output_images/undistorted_test_images/both_test2.jpg differ diff --git a/output_images/undistorted_test_images/both_test3.jpg b/output_images/undistorted_test_images/both_test3.jpg new file mode 100644 index 0000000..8e20f43 Binary files 
/dev/null and b/output_images/undistorted_test_images/both_test3.jpg differ diff --git a/output_images/undistorted_test_images/both_test4.jpg b/output_images/undistorted_test_images/both_test4.jpg new file mode 100644 index 0000000..0cf761e Binary files /dev/null and b/output_images/undistorted_test_images/both_test4.jpg differ diff --git a/output_images/undistorted_test_images/both_test5.jpg b/output_images/undistorted_test_images/both_test5.jpg new file mode 100644 index 0000000..e167593 Binary files /dev/null and b/output_images/undistorted_test_images/both_test5.jpg differ diff --git a/output_images/undistorted_test_images/both_test6.jpg b/output_images/undistorted_test_images/both_test6.jpg new file mode 100644 index 0000000..781b7ed Binary files /dev/null and b/output_images/undistorted_test_images/both_test6.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_straight_lines1.jpg b/output_images/undistorted_test_images/undistorted_straight_lines1.jpg new file mode 100644 index 0000000..3fe41a8 Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_straight_lines1.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_straight_lines2.jpg b/output_images/undistorted_test_images/undistorted_straight_lines2.jpg new file mode 100644 index 0000000..01ddd38 Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_straight_lines2.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_test1.jpg b/output_images/undistorted_test_images/undistorted_test1.jpg new file mode 100644 index 0000000..8dac385 Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_test1.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_test2.jpg b/output_images/undistorted_test_images/undistorted_test2.jpg new file mode 100644 index 0000000..37415f9 Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_test2.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_test3.jpg b/output_images/undistorted_test_images/undistorted_test3.jpg new file mode 100644 index 0000000..53b3a06 Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_test3.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_test4.jpg b/output_images/undistorted_test_images/undistorted_test4.jpg new file mode 100644 index 0000000..f79a324 Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_test4.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_test5.jpg b/output_images/undistorted_test_images/undistorted_test5.jpg new file mode 100644 index 0000000..c7ad646 Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_test5.jpg differ diff --git a/output_images/undistorted_test_images/undistorted_test6.jpg b/output_images/undistorted_test_images/undistorted_test6.jpg new file mode 100644 index 0000000..96936df Binary files /dev/null and b/output_images/undistorted_test_images/undistorted_test6.jpg differ diff --git a/output_images/warped_test_images/binary_straight_lines1.jpg b/output_images/warped_test_images/binary_straight_lines1.jpg new file mode 100644 index 0000000..44ef8a4 Binary files /dev/null and b/output_images/warped_test_images/binary_straight_lines1.jpg differ diff --git a/output_images/warped_test_images/binary_straight_lines2.jpg b/output_images/warped_test_images/binary_straight_lines2.jpg new file mode 100644 index 0000000..094d5aa 
Binary files /dev/null and b/output_images/warped_test_images/binary_straight_lines2.jpg differ diff --git a/output_images/warped_test_images/binary_test1.jpg b/output_images/warped_test_images/binary_test1.jpg new file mode 100644 index 0000000..fd6b758 Binary files /dev/null and b/output_images/warped_test_images/binary_test1.jpg differ diff --git a/output_images/warped_test_images/binary_test2.jpg b/output_images/warped_test_images/binary_test2.jpg new file mode 100644 index 0000000..8136298 Binary files /dev/null and b/output_images/warped_test_images/binary_test2.jpg differ diff --git a/output_images/warped_test_images/binary_test3.jpg b/output_images/warped_test_images/binary_test3.jpg new file mode 100644 index 0000000..e0db46d Binary files /dev/null and b/output_images/warped_test_images/binary_test3.jpg differ diff --git a/output_images/warped_test_images/binary_test4.jpg b/output_images/warped_test_images/binary_test4.jpg new file mode 100644 index 0000000..44275e0 Binary files /dev/null and b/output_images/warped_test_images/binary_test4.jpg differ diff --git a/output_images/warped_test_images/binary_test5.jpg b/output_images/warped_test_images/binary_test5.jpg new file mode 100644 index 0000000..f5f1071 Binary files /dev/null and b/output_images/warped_test_images/binary_test5.jpg differ diff --git a/output_images/warped_test_images/binary_test6.jpg b/output_images/warped_test_images/binary_test6.jpg new file mode 100644 index 0000000..81fbf70 Binary files /dev/null and b/output_images/warped_test_images/binary_test6.jpg differ diff --git a/output_images/warped_test_images/color_straight_lines1.jpg b/output_images/warped_test_images/color_straight_lines1.jpg new file mode 100644 index 0000000..8fa58d3 Binary files /dev/null and b/output_images/warped_test_images/color_straight_lines1.jpg differ diff --git a/output_images/warped_test_images/color_straight_lines2.jpg b/output_images/warped_test_images/color_straight_lines2.jpg new file mode 100644 index 0000000..5bee0ef Binary files /dev/null and b/output_images/warped_test_images/color_straight_lines2.jpg differ diff --git a/output_images/warped_test_images/color_test1.jpg b/output_images/warped_test_images/color_test1.jpg new file mode 100644 index 0000000..b22e65b Binary files /dev/null and b/output_images/warped_test_images/color_test1.jpg differ diff --git a/output_images/warped_test_images/color_test2.jpg b/output_images/warped_test_images/color_test2.jpg new file mode 100644 index 0000000..3c57043 Binary files /dev/null and b/output_images/warped_test_images/color_test2.jpg differ diff --git a/output_images/warped_test_images/color_test3.jpg b/output_images/warped_test_images/color_test3.jpg new file mode 100644 index 0000000..3ff8393 Binary files /dev/null and b/output_images/warped_test_images/color_test3.jpg differ diff --git a/output_images/warped_test_images/color_test4.jpg b/output_images/warped_test_images/color_test4.jpg new file mode 100644 index 0000000..af196a6 Binary files /dev/null and b/output_images/warped_test_images/color_test4.jpg differ diff --git a/output_images/warped_test_images/color_test5.jpg b/output_images/warped_test_images/color_test5.jpg new file mode 100644 index 0000000..4f1c763 Binary files /dev/null and b/output_images/warped_test_images/color_test5.jpg differ diff --git a/output_images/warped_test_images/color_test6.jpg b/output_images/warped_test_images/color_test6.jpg new file mode 100644 index 0000000..f0f04ac Binary files /dev/null and b/output_images/warped_test_images/color_test6.jpg 
differ diff --git a/project_video.mp4 b/project_video.mp4 new file mode 100644 index 0000000..57c0a00 Binary files /dev/null and b/project_video.mp4 differ diff --git a/src/detect_lanelines.py b/src/detect_lanelines.py new file mode 100644 index 0000000..f6fcff8 --- /dev/null +++ b/src/detect_lanelines.py @@ -0,0 +1,249 @@ +import os +from glob import glob +import sys + +import cv2 +import numpy as np +import matplotlib.pyplot as plt + + +# Using histogram --> finding peak at left and right +# Using sliding window method to find the curve +def find_lane_sliding_window(binary_birdview, nwindows, margin, minpix, lane_left, lane_right, ym_per_pix, xm_per_pix): + # # Clear the 2 lane buffers + # lane_left.reset() + # lane_right.reset() + + h, w = binary_birdview.shape[:2] + step_y = int(h / nwindows) + nonzeroxy = binary_birdview.nonzero() + nonzerox = nonzeroxy[1] + nonzeroy = nonzeroxy[0] + + out_img = np.dstack((binary_birdview, binary_birdview, binary_birdview)) * 255 + + half_below = binary_birdview[int(h / 2):, :] + hist = np.sum(half_below, axis=0) + left_x_peak = np.argmax(hist[:int(w / 2)]) + right_x_peak = np.argmax(hist[int(w / 2):]) + int(w / 2) + + left_x_cur = left_x_peak + right_x_cur = right_x_peak + + left_lane_idexes = [] + right_lane_indexes = [] + for window_idx in range(nwindows): + low_y = h - step_y * (window_idx + 1) + high_y = h - step_y * window_idx + low_left_x = left_x_cur - margin + high_left_x = left_x_cur + margin + low_right_x = right_x_cur - margin + high_right_x = right_x_cur + margin + + cv2.rectangle(out_img, (low_left_x, low_y), (high_left_x, high_y), (0, 255, 0), 5) + cv2.rectangle(out_img, (low_right_x, low_y), (high_right_x, high_y), (0, 255, 0), 5) + + left_lane_idx = \ + ((nonzerox >= low_left_x) & (nonzerox < high_left_x) & (nonzeroy >= low_y) & (nonzeroy < high_y)).nonzero()[ + 0] + right_lane_idx = \ + ((nonzerox >= low_right_x) & (nonzerox < high_right_x) & (nonzeroy >= low_y) & ( + nonzeroy < high_y)).nonzero()[0] + + if len(left_lane_idx) > minpix: + left_x_cur = np.int(np.mean(nonzerox[left_lane_idx])) + if len(right_lane_idx) > minpix: + right_x_cur = np.int(np.mean(nonzerox[right_lane_idx])) + left_lane_idexes.append(left_lane_idx) + right_lane_indexes.append(right_lane_idx) + + try: + left_lane_idexes = np.concatenate(left_lane_idexes) + right_lane_indexes = np.concatenate(right_lane_indexes) + except ValueError: + pass + + left_lane_x = nonzerox[left_lane_idexes] + left_lane_y = nonzeroy[left_lane_idexes] + right_lane_x = nonzerox[right_lane_indexes] + right_lane_y = nonzeroy[right_lane_indexes] + + detected = True + if len(left_lane_x) == 0: + left_fit_pixel = lane_left.last_fit_pixel + left_fit_meter = lane_left.left_fit_meter + detected = False + else: + left_fit_pixel = np.polyfit(left_lane_y, left_lane_x, 2) + left_fit_meter = np.polyfit(left_lane_y * ym_per_pix, left_lane_x * xm_per_pix, 2) + + if len(right_lane_x) == 0: + right_fit_pixel = lane_right.last_fit_pixel + right_fit_meter = lane_left.right_fit_meter + detected = False + else: + right_fit_pixel = np.polyfit(right_lane_y, right_lane_x, 2) + right_fit_meter = np.polyfit(right_lane_y * ym_per_pix, right_lane_x * xm_per_pix, 2) + + lane_left.update_lane(left_fit_pixel, left_fit_meter, detected, left_lane_x, left_lane_y) + lane_right.update_lane(right_fit_pixel, right_fit_meter, detected, right_lane_x, right_lane_y) + + # Take average of previous frames + left_fit_pixel = lane_left.average_fit() + right_fit_pixel = lane_right.average_fit() + + ploty = np.linspace(0, h - 1, h) + 
left_fit_x = left_fit_pixel[0] * ploty ** 2 + left_fit_pixel[1] * ploty + left_fit_pixel[2] + right_fit_x = right_fit_pixel[0] * ploty ** 2 + right_fit_pixel[1] * ploty + right_fit_pixel[2] + + out_img[left_lane_y, left_lane_x] = [255, 0, 0] + out_img[right_lane_y, right_lane_x] = [0, 0, 255] + + return out_img, lane_left, lane_right, left_fit_x, right_fit_x, ploty + + +def find_lane_based_on_previous_frame(binary_birdview, margin, lane_left, lane_right, ym_per_pix, xm_per_pix): + h, w = binary_birdview.shape[:2] + nonzeroxy = binary_birdview.nonzero() + nonzerox = nonzeroxy[1] + nonzeroy = nonzeroxy[0] + + left_fit_pixel = lane_left.last_fit_pixel + right_fit_pixel = lane_right.last_fit_pixel + + left_fit_x = left_fit_pixel[0] * nonzeroy ** 2 + left_fit_pixel[1] * nonzeroy + left_fit_pixel[2] + right_fit_x = right_fit_pixel[0] * nonzeroy ** 2 + right_fit_pixel[1] * nonzeroy + right_fit_pixel[2] + + left_lane_idx = (nonzerox >= left_fit_x - margin) & (nonzerox < left_fit_x + margin) + right_lane_idx = (nonzerox >= right_fit_x - margin) & (nonzerox < right_fit_x + margin) + + left_lane_x = nonzerox[left_lane_idx] + left_lane_y = nonzeroy[left_lane_idx] + right_lane_x = nonzerox[right_lane_idx] + right_lane_y = nonzeroy[right_lane_idx] + + detected = True + if len(left_lane_x) == 0: + left_fit_pixel = lane_left.last_fit_pixel + left_fit_meter = lane_left.left_fit_meter + detected = False + else: + left_fit_pixel = np.polyfit(left_lane_y, left_lane_x, 2) + left_fit_meter = np.polyfit(left_lane_y * ym_per_pix, left_lane_x * xm_per_pix, 2) + + if len(right_lane_x) == 0: + right_fit_pixel = lane_right.last_fit_pixel + right_fit_meter = lane_left.right_fit_meter + detected = False + else: + right_fit_pixel = np.polyfit(right_lane_y, right_lane_x, 2) + right_fit_meter = np.polyfit(right_lane_y * ym_per_pix, right_lane_x * xm_per_pix, 2) + + lane_left.update_lane(left_fit_pixel, left_fit_meter, detected, left_lane_x, left_lane_y) + lane_right.update_lane(right_fit_pixel, right_fit_meter, detected, right_lane_x, right_lane_y) + + # Take average of previous frames + left_fit_pixel = lane_left.average_fit() + right_fit_pixel = lane_right.average_fit() + + ploty = np.linspace(0, h - 1, h) + left_fit_x = left_fit_pixel[0] * ploty ** 2 + left_fit_pixel[1] * ploty + left_fit_pixel[2] + right_fit_x = right_fit_pixel[0] * ploty ** 2 + right_fit_pixel[1] * ploty + right_fit_pixel[2] + + out_img = np.dstack((binary_birdview, binary_birdview, binary_birdview)) * 255 + out_img[left_lane_y, left_lane_x] = [255, 0, 0] + out_img[right_lane_y, right_lane_x] = [0, 0, 255] + + ## Visualization ## + # Create an image to draw on and an image to show the selection window + window_img = np.zeros_like(out_img) + # Color in left and right line pixels + + # Generate a polygon to illustrate the search window area + # And recast the x and y points into usable format for cv2.fillPoly() + left_line_window1 = np.array([np.transpose(np.vstack([left_fit_x - margin, ploty]))]) + left_line_window2 = np.array([np.flipud(np.transpose(np.vstack([left_fit_x + margin, + ploty])))]) + left_line_pts = np.hstack((left_line_window1, left_line_window2)) + right_line_window1 = np.array([np.transpose(np.vstack([right_fit_x - margin, ploty]))]) + right_line_window2 = np.array([np.flipud(np.transpose(np.vstack([right_fit_x + margin, + ploty])))]) + right_line_pts = np.hstack((right_line_window1, right_line_window2)) + + # Draw the lane onto the warped blank image + cv2.fillPoly(window_img, np.int_([left_line_pts]), (0, 255, 0)) + 
cv2.fillPoly(window_img, np.int_([right_line_pts]), (0, 255, 0)) + out_img = cv2.addWeighted(out_img, 1, window_img, 0.3, 0) + + return out_img, lane_left, lane_right, left_fit_x, right_fit_x, ploty + + +if __name__ == '__main__': + from undistort_img import undistort, calibrate + from gradient import get_binary_img + from perspective_transform import get_transform_matrix, warped_birdview + from utils import Line + + nwindows = 9 + margin = 100 + minpix = 50 + + output_images_dir = '../output_images' + output_detectedline_img = os.path.join(output_images_dir, 'detected_lane_test_images') + if not os.path.isdir(output_detectedline_img): + os.makedirs(output_detectedline_img) + + img_paths = glob('../test_images/*.jpg') + + ret, mtx, dist, rvecs, tvecs = calibrate(is_save=True) + + thresh_gradx = (20, 100) + thresh_grady = (20, 100) + thresh_mag = (30, 100) + thresh_dir = (0.7, 1.3) + thresh_s_channel = (170, 255) + # Define conversions in x and y from pixels space to meters + ym_per_pix = 30 / 720 # meters per pixel in y dimension + xm_per_pix = 3.7 / 700 # meters per pixel in x dimension + for idx, img_path_ in enumerate(img_paths): + img_fn = os.path.basename(img_path_)[:-4] + img = cv2.cvtColor(cv2.imread(img_path_), cv2.COLOR_BGR2RGB) # BGR --> RGB + undistorted_img = undistort(img, mtx, dist) + binary_output = get_binary_img(undistorted_img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir, + thresh_s_channel) + + h, w = binary_output.shape[:2] + src = np.float32([ + [(w / 2) - 55, h / 2 + 100], + [((w / 6) - 10), h], + [(w * 5 / 6) + 60, h], + [(w / 2 + 55), h / 2 + 100] + ]) + + dst = np.float32([ + [(w / 4), 0], + [(w / 4), h], + [(w * 3 / 4), h], + [(w * 3 / 4), 0] + ]) + M, Minv = get_transform_matrix(src, dst) + + warped = warped_birdview(img, M) + binary_birdview = warped_birdview(binary_output, M) + + lane_left = Line(buffer_len=20) + lane_right = Line(buffer_len=20) + + # left_lane_x, left_lane_y, right_lane_x, right_lane_y, out_img = find_lane_boundary(binary_birdview) + out_img, lane_left, lane_right, left_fit_x, right_fit_x, ploty = find_lane_sliding_window(binary_birdview, + nwindows, margin, + minpix, lane_left, + lane_right, + ym_per_pix, + xm_per_pix) + plt.cla() + plt.plot(left_fit_x, ploty, color='yellow') + plt.plot(right_fit_x, ploty, color='yellow') + plt.imshow(out_img) + plt.savefig(os.path.join(output_detectedline_img, 'n{}.jpg'.format(img_fn))) + # plt.imshow(binary_birdview, cmap='gray') diff --git a/src/gradient.py b/src/gradient.py new file mode 100644 index 0000000..727496e --- /dev/null +++ b/src/gradient.py @@ -0,0 +1,105 @@ +import os +from glob import glob +import sys + +import cv2 +import numpy as np + + +def abs_sobel_thresh(image, orient, sobel_kernel, thresh): + # Calculate directional gradient + if orient == 'x': + grad = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=sobel_kernel) + elif orient == 'y': + grad = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=sobel_kernel) + # Apply threshold + + grad = ((grad / np.max(grad)) * 255).astype(np.uint8) + + grad_binary = np.zeros_like(grad) + grad_binary[(grad > thresh[0]) * (grad < thresh[1])] = 1 + return grad_binary + + +def mag_thresh(image, sobel_kernel, thresh): + # Calculate gradient magnitude + grad_x = cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=sobel_kernel) + grad_y = cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=sobel_kernel) + mag = np.sqrt(grad_x ** 2 + grad_y ** 2) + # Apply threshold + mag = ((mag / np.max(mag)) * 255).astype(np.uint8) + mag_binary = np.zeros_like(mag) + mag_binary[(mag > thresh[0]) * 
(mag < thresh[1])] = 1 + return mag_binary + + +def dir_threshold(image, sobel_kernel, thresh): + # Calculate gradient direction + grad_x = np.absolute(cv2.Sobel(image, cv2.CV_64F, 1, 0, ksize=sobel_kernel)) + grad_y = np.absolute(cv2.Sobel(image, cv2.CV_64F, 0, 1, ksize=sobel_kernel)) + + direction = np.arctan2(grad_y, grad_x) + dir_binary = np.zeros_like(direction, dtype=np.uint8) + # Apply threshold + dir_binary[(direction > thresh[0]) * (direction < thresh[1])] = 1 + return dir_binary + + +def get_binary_gradient_img(img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir): + gray_img = cv2.cvtColor(img, cv2.COLOR_RGB2GRAY) + + mask_gradx = abs_sobel_thresh(gray_img, orient='x', sobel_kernel=3, thresh=thresh_gradx) + mask_grady = abs_sobel_thresh(gray_img, orient='y', sobel_kernel=3, thresh=thresh_grady) + mask_mag = mag_thresh(gray_img, sobel_kernel=3, thresh=thresh_mag) + mask_direction = dir_threshold(gray_img, sobel_kernel=15, thresh=thresh_dir) + + binary_output = np.zeros_like(mask_gradx) + binary_output[((mask_gradx == 1) & (mask_grady == 1)) | ((mask_mag == 1) & (mask_direction == 1))] = 1 + + return binary_output + + +def get_binary_s_channel_img(img, thresh_s_channel): + hls_img = cv2.cvtColor(img, cv2.COLOR_RGB2HLS) + s_channel = hls_img[:, :, 2] + binary_output = np.zeros_like(s_channel, dtype=np.uint8) + binary_output[(s_channel > thresh_s_channel[0]) & (s_channel < thresh_s_channel[1])] = 1 + + return binary_output + + +def get_binary_img(img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir, thresh_s_channel): + grad_binary_output = get_binary_gradient_img(img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir) + s_channel_binary_output = get_binary_s_channel_img(img, thresh_s_channel) + binary_output = grad_binary_output | s_channel_binary_output + + return binary_output + + +if __name__ == '__main__': + # sys.path.append('./') + from undistort_img import undistort, calibrate + + output_images_dir = '../output_images' + output_binary_img = os.path.join(output_images_dir, 'binary_test_images') + if not os.path.isdir(output_binary_img): + os.makedirs(output_binary_img) + + ret, mtx, dist, rvecs, tvecs = calibrate(is_save=True) + + img_paths = glob('../test_images/*.jpg') + thresh_gradx = (20, 100) + thresh_grady = (20, 100) + thresh_mag = (30, 100) + thresh_dir = (0.7, 1.3) + thresh_s_channel = (170, 255) + for idx, img_path_ in enumerate(img_paths): + img_fn = os.path.basename(img_path_)[:-4] + img = cv2.cvtColor(cv2.imread(img_path_), cv2.COLOR_BGR2RGB) # BGR --> RGB + undistorted_img = undistort(img, mtx, dist) + # grad_binary_output = get_binary_gradient_img(img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir) + # s_channel_binary_output = get_binary_s_channel_img(img, thresh_s_channel) + binary_output = get_binary_img(undistorted_img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir, + thresh_s_channel) + + cv2.imwrite(os.path.join(output_binary_img, 'binary_{}.jpg'.format(img_fn)), binary_output * 255) # Gray img diff --git a/src/main.py b/src/main.py new file mode 100644 index 0000000..cec5f42 --- /dev/null +++ b/src/main.py @@ -0,0 +1,139 @@ +import time +import os +import numpy as np + +import cv2 +# Import everything needed to edit/save/watch video clips +from moviepy.editor import VideoFileClip + +from undistort_img import undistort, calibrate +from gradient import get_binary_img +from perspective_transform import get_transform_matrix, warped_birdview +from detect_lanelines import find_lane_sliding_window, find_lane_based_on_previous_frame +from utils 
import transform_to_the_road, Line + + +def calculate_distance_from_lane_center(lane_left, lane_right, w, h, xm_per_pix): + distance_meter = -100 + if lane_left.detected and lane_right.detected: + center_bottom_left_x = np.mean(lane_left.allx[lane_left.ally > 0.9 * h]) + center_bottom_right_x = np.mean(lane_right.allx[lane_right.ally > 0.9 * h]) + lane_width = center_bottom_right_x - center_bottom_left_x + distance_pixel = abs((center_bottom_left_x + lane_width / 2) - w / 2) + distance_meter = distance_pixel * xm_per_pix + return distance_meter + + +def compose_final_output(onroad_img, out_binary_birdview, w, h, mean_curvature_in_meter, distance_meter): + off_x, off_y = 100, 30 + thumb_ratio = 0.2 + thumb_h, thumb_w = int(thumb_ratio * h), int(thumb_ratio * w) + + # add a gray rectangle to highlight the upper area + topmask = np.copy(onroad_img) + topmask = cv2.rectangle(topmask, pt1=(0, 0), pt2=(w, thumb_h + off_y * 2), color=(0, 0, 0), thickness=cv2.FILLED) + blend_on_road = cv2.addWeighted(src1=onroad_img, alpha=1., src2=topmask, beta=0.2, gamma=0) + + # add thumbnail of binary image + thumb_binary = cv2.resize(out_binary_birdview, dsize=(thumb_w, thumb_h)) + # thumb_binary = np.dstack([thumb_binary, thumb_binary, thumb_binary]) * 255 + blend_on_road[off_y:thumb_h + off_y, off_x:off_x + thumb_w, :] = thumb_binary + + font = cv2.FONT_HERSHEY_SIMPLEX + cv2.putText(blend_on_road, 'Birdview', (int(off_x + thumb_w / 4), int(thumb_h + 2 * off_y)), font, + 0.9, (255, 255, 255), 2, cv2.LINE_AA) + cv2.putText(blend_on_road, 'Lane curvature: {:.2f} m'.format(mean_curvature_in_meter), (700, 60), font, + 0.9, (255, 255, 255), 2, cv2.LINE_AA) + cv2.putText(blend_on_road, 'Distance from lane center: {:.2f} m'.format(distance_meter), (700, 130), font, 0.9, + (255, 255, 255), 2, cv2.LINE_AA) + + return blend_on_road + + +def process_image(image): + global lane_left, lane_right, frame_idx + frame_idx += 1 + h, w = image.shape[:2] + src = np.float32([ + [(w / 2) - 55, h / 2 + 100], + [((w / 6) - 10), h], + [(w * 5 / 6) + 60, h], + [(w / 2 + 55), h / 2 + 100] + ]) + + dst = np.float32([ + [(w / 4), 0], + [(w / 4), h], + [(w * 3 / 4), h], + [(w * 3 / 4), 0] + ]) + + undistorted_img = undistort(image, mtx, dist) + binary_output = get_binary_img(undistorted_img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir, + thresh_s_channel) + + M, Minv = get_transform_matrix(src, dst) + binary_birdview = warped_birdview(binary_output, M) + + # diff_curvature_in_meter = abs(lane_right.curvature_in_meter - lane_left.curvature_in_meter) + + # if (frame_idx > 0) and lane_left.detected and lane_right.detected and (diff_curvature_in_meter < 5000.): + if (frame_idx > 0) and lane_left.detected and lane_right.detected: + out_binary_birdview, lane_left, lane_right, left_fit_x, right_fit_x, ploty = find_lane_based_on_previous_frame( + binary_birdview, margin, lane_left, lane_right, ym_per_pix, xm_per_pix) + else: + out_binary_birdview, lane_left, lane_right, left_fit_x, right_fit_x, ploty = find_lane_sliding_window( + binary_birdview, + nwindows, margin, + minpix, lane_left, + lane_right, ym_per_pix, xm_per_pix) + + lane_left.cal_curvature(h, xm_per_pix) + lane_right.cal_curvature(h, xm_per_pix) + + distance_meter = calculate_distance_from_lane_center(lane_left, lane_right, w, h, xm_per_pix) + mean_curvature_in_meter = (lane_left.curvature_in_meter + lane_right.curvature_in_meter)/2 + + onroad_img = transform_to_the_road(undistorted_img, Minv, left_fit_x, right_fit_x, ploty) + # onroad_img = cv2.polylines(onroad_img, 
+
+    blend_on_road = compose_final_output(onroad_img, out_binary_birdview, w, h, mean_curvature_in_meter, distance_meter)
+    return blend_on_road
+
+
+def main():
+    video_output_dir = '../test_videos_output'
+    if not os.path.isdir(video_output_dir):
+        os.makedirs(video_output_dir)
+
+    ## To speed up testing you may want to try the pipeline on a shorter subclip of the video.
+    ## To do so, add .subclip(start_second, end_second) to the VideoFileClip(...) line below,
+    ## where start_second and end_second are integer values in seconds.
+    video_fn = 'project_video.mp4'
+    # video_fn = 'challenge_video.mp4'
+    # video_fn = 'harder_challenge_video.mp4'
+    video_output_path = os.path.join(video_output_dir, video_fn)
+    # clip1 = VideoFileClip(os.path.join('../', video_fn)).subclip(0, 2)
+    clip1 = VideoFileClip(os.path.join('../', video_fn))
+    white_clip = clip1.fl_image(process_image)  # NOTE: this function expects color images!
+    white_clip.write_videofile(video_output_path, audio=False)
+
+
+if __name__ == '__main__':
+    lane_left, lane_right = Line(buffer_len=20), Line(buffer_len=20)
+    frame_idx = -1
+    nwindows = 9
+    margin = 100
+    minpix = 50
+    thresh_gradx = (20, 100)
+    thresh_grady = (20, 100)
+    thresh_mag = (30, 100)
+    thresh_dir = (0.7, 1.3)
+    thresh_s_channel = (170, 255)
+    # Define conversions in x and y from pixel space to meters
+    ym_per_pix = 30 / 720  # meters per pixel in y dimension
+    xm_per_pix = 3.7 / 700  # meters per pixel in x dimension
+    ret, mtx, dist, rvecs, tvecs = calibrate(is_save=False)
+    main()
diff --git a/src/perspective_transform.py b/src/perspective_transform.py
new file mode 100644
index 0000000..2152a89
--- /dev/null
+++ b/src/perspective_transform.py
@@ -0,0 +1,83 @@
+import os
+from glob import glob
+import sys
+
+import cv2
+import numpy as np
+import matplotlib.pyplot as plt
+
+
+def get_transform_matrix(src, dst):
+    M = cv2.getPerspectiveTransform(src, dst)
+    Minv = cv2.getPerspectiveTransform(dst, src)
+
+    return M, Minv
+
+
+def warped_birdview(img, M):
+    h, w = img.shape[:2]
+    warped = cv2.warpPerspective(img, M, (w, h))
+
+    return warped
+
+
+if __name__ == '__main__':
+    from undistort_img import undistort, calibrate
+    from gradient import get_binary_img
+
+    output_images_dir = '../output_images'
+    output_warped_img = os.path.join(output_images_dir, 'warped_test_images')
+    if not os.path.isdir(output_warped_img):
+        os.makedirs(output_warped_img)
+
+    ret, mtx, dist, rvecs, tvecs = calibrate(is_save=True)
+
+    img_paths = glob('../test_images/*.jpg')
+    thresh_gradx = (20, 100)
+    thresh_grady = (20, 100)
+    thresh_mag = (30, 100)
+    thresh_dir = (0.7, 1.3)
+    thresh_s_channel = (170, 255)
+    for idx, img_path_ in enumerate(img_paths):
+        img_fn = os.path.basename(img_path_)[:-4]
+        img = cv2.cvtColor(cv2.imread(img_path_), cv2.COLOR_BGR2RGB)  # BGR --> RGB
+        undistorted_img = undistort(img, mtx, dist)
+        # grad_binary_output = get_binary_gradient_img(img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir)
+        # s_channel_binary_output = get_binary_s_channel_img(img, thresh_s_channel)
+        binary_output = get_binary_img(undistorted_img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir,
+                                       thresh_s_channel)
+
+        h, w = binary_output.shape[:2]
+        src = np.float32([
+            [(w / 2) - 55, h / 2 + 100],
+            [((w / 6) - 10), h],
+            [(w * 5 / 6) + 60, h],
+            [(w / 2 + 55), h / 2 + 100]
+        ])
+        dst = np.float32([
+            [(w / 4), 0],
+            [(w / 4), h],
+            [(w * 3 / 4), h],
+            [(w * 3 / 4), 0]
+        ])
+        M, Minv = get_transform_matrix(src, dst)
+
+        warped = warped_birdview(undistorted_img, M)
+        binary_warped = warped_birdview(binary_output, M)
+
+        draw_img = np.copy(undistorted_img)
+        draw_img = cv2.polylines(draw_img, [src.astype(np.int32)], True, (255, 0, 0), thickness=10)
+        warped = cv2.polylines(warped, [dst.astype(np.int32)], True, (255, 0, 0), thickness=10)
+
+        fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(20, 10))
+        ax1.imshow(draw_img)
+        ax1.set_title('Undistorted Image', fontsize=20)
+        ax2.imshow(warped)
+        ax2.set_title('Perspective Transform Image', fontsize=20)
+        plt.savefig(os.path.join(output_warped_img, 'color_{}.jpg'.format(img_fn)))
+
+        # Reuse the same axes for the binary version; the gray colormap belongs to imshow(), not savefig()
+        ax1.imshow(binary_output, cmap='gray')
+        ax1.set_title('Undistorted Image', fontsize=20)
+        ax2.imshow(binary_warped, cmap='gray')
+        ax2.set_title('Perspective Transform Image', fontsize=20)
+        plt.savefig(os.path.join(output_warped_img, 'binary_{}.jpg'.format(img_fn)))
+        plt.close(fig)  # avoid accumulating open figures across the loop
diff --git a/src/undistort_img.py b/src/undistort_img.py
new file mode 100644
index 0000000..5ded9f8
--- /dev/null
+++ b/src/undistort_img.py
@@ -0,0 +1,91 @@
+from glob import glob
+import os
+
+import cv2
+import numpy as np
+import matplotlib.pyplot as plt
+import matplotlib.image as mpimg
+
+
+def calibrate(is_save=False):
+    # prepare object points, like (0,0,0), (1,0,0), (2,0,0) ...., (8,5,0)
+    objp = np.zeros((6 * 9, 3), np.float32)
+    objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)
+    # Arrays to store object points and image points from all the images.
+    objpoints = []  # 3d points in real world space
+    imgpoints = []  # 2d points in image plane
+
+    # Make a list of calibration images
+    img_paths = glob('../camera_cal/calibration*.jpg')
+    for img_path_ in img_paths:
+        img_fn = os.path.basename(img_path_)[:-4]
+        img = cv2.imread(img_path_)
+        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
+
+        # Find the chessboard corners
+        ret, corners = cv2.findChessboardCorners(gray, (9, 6), None)
+
+        # If found, add object points, image points
+        if ret:
+            objpoints.append(objp)
+            imgpoints.append(corners)
+            if is_save:
+                output_images_dir = '../output_images'
+                output_chessboard_coners = os.path.join(output_images_dir, 'chessboard_conners')
+                if not os.path.isdir(output_chessboard_coners):
+                    os.makedirs(output_chessboard_coners)
+                # Draw and save the detected corners
+                img = cv2.drawChessboardCorners(img, (9, 6), corners, ret)
+                cv2.imwrite(os.path.join(output_chessboard_coners, '{}.jpg'.format(img_fn)), img)
+
+    w, h = 1280, 720
+    ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, (w, h), None, None)
+
+    return ret, mtx, dist, rvecs, tvecs
+
+
+def undistort(img, mtx, dist):
+    undistorted_img = cv2.undistort(img, mtx, dist, None, mtx)
+
+    return undistorted_img
+
+
+if __name__ == '__main__':
+    output_images_dir = '../output_images'
+    output_undistorted_img = os.path.join(output_images_dir, 'undistorted')
+    if not os.path.isdir(output_undistorted_img):
+        os.makedirs(output_undistorted_img)
+
+    ret, mtx, dist, rvecs, tvecs = calibrate(is_save=True)
+
+    img_paths = glob('../camera_cal/calibration*.jpg')
+    for img_path_ in img_paths:
+        img_fn = os.path.basename(img_path_)[:-4]
+        img = mpimg.imread(img_path_)
+        undistorted_img = undistort(img, mtx, dist)
+
+        fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(20, 10))
+        ax1.imshow(img)
+        ax1.set_title('Original Image', fontsize=20)
+        ax2.imshow(undistorted_img)
+        ax2.set_title('Undistorted Image', fontsize=20)
+        plt.savefig(os.path.join(output_undistorted_img, '{}.jpg'.format(img_fn)))
+        plt.close(fig)  # avoid accumulating open figures across the loop
+
+    output_undistorted_img = os.path.join(output_images_dir, 'undistorted_test_images')
+    if not os.path.isdir(output_undistorted_img):
+        os.makedirs(output_undistorted_img)
+
+    img_paths = glob('../test_images/*.jpg')
+    for img_path_ in img_paths:
+        img_fn = os.path.basename(img_path_)[:-4]
+        img = mpimg.imread(img_path_)
+        undistorted_img = undistort(img, mtx, dist)
+
+        fig, (ax1, ax2) = plt.subplots(nrows=1, ncols=2, figsize=(20, 10))
+        ax1.imshow(img)
+        ax1.set_title('Original Image', fontsize=20)
+        ax2.imshow(undistorted_img)
+        ax2.set_title('Undistorted Image', fontsize=20)
+        plt.imsave(os.path.join(output_undistorted_img, 'undistorted_{}.jpg'.format(img_fn)), undistorted_img)
+        plt.savefig(os.path.join(output_undistorted_img, 'both_{}.jpg'.format(img_fn)))
+        plt.close(fig)  # avoid accumulating open figures across the loop
diff --git a/src/utils.py b/src/utils.py
new file mode 100644
index 0000000..9c0c53d
--- /dev/null
+++ b/src/utils.py
@@ -0,0 +1,156 @@
+import os
+from glob import glob
+import sys
+import collections
+
+import cv2
+import numpy as np
+import matplotlib.pyplot as plt
+
+
+# Define a class to receive the characteristics of each line detection
+class Line():
+    def __init__(self, buffer_len=20):
+        # was the line detected in the last iteration?
+        self.detected = False
+        # x values of the last n fits of the line
+        self.recent_xfitted = []
+        # average x values of the fitted line over the last n iterations
+        self.bestx = None
+        # polynomial coefficients averaged over the last n iterations
+        self.best_fit = None
+        # polynomial coefficients for the most recent fit (pixel and meter units)
+        self.last_fit_pixel = None
+        self.last_fit_meter = None
+        self.recent_fits = collections.deque(maxlen=buffer_len)
+        # radius of curvature of the line in meters
+        self.curvature_in_meter = 0
+        # distance in meters of vehicle center from the line
+        self.line_base_pos = None
+        # difference in fit coefficients between last and new fits
+        self.diffs = np.array([0, 0, 0], dtype='float')
+        # x values for detected line pixels
+        self.allx = None
+        # y values for detected line pixels
+        self.ally = None
+
+    def reset(self):
+        self.recent_fits.clear()
+
+    def update_lane(self, new_curve_fit_pixel, new_curve_fit_meter, detected, new_lane_x, new_lane_y):
+        self.detected = detected
+        self.last_fit_pixel = new_curve_fit_pixel
+        self.last_fit_meter = new_curve_fit_meter
+        self.recent_fits.append(new_curve_fit_pixel)
+        self.allx = new_lane_x
+        self.ally = new_lane_y
+
+    def cal_curvature(self, h, ym_per_pix):
+        # Radius of curvature R = (1 + (2*A*y + B)**2)**1.5 / |2*A|,
+        # evaluated at the bottom of the image with the meter-space fit coefficients
+        y_eval = (h - 1) * ym_per_pix
+        self.curvature_in_meter = np.sqrt(
+            (1 + (2 * self.last_fit_meter[0] * y_eval + self.last_fit_meter[1]) ** 2) ** 3) / np.absolute(
+            2 * self.last_fit_meter[0])
+
+    def average_fit(self):
+        return np.mean(self.recent_fits, axis=0)
+
+
+# def transform_to_the_road(undistorted_img, Minv, left_lane, right_lane):
+def transform_to_the_road(undistorted_img, Minv, left_fit_x, right_fit_x, ploty):
+    h, w = undistorted_img.shape[:2]
+
+    road_warped = np.zeros_like(undistorted_img, dtype=np.uint8)
+
+    # Recast the x and y points into usable format for cv2.fillPoly()
+    pts_left = np.array([np.transpose(np.vstack([left_fit_x, ploty]))])
+    pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fit_x, ploty])))])
+    pts = np.hstack((pts_left, pts_right))
+
+    # Draw the lane onto the warped blank image
+    cv2.fillPoly(road_warped, np.int_([pts]), (0, 255, 0))
+
+    # Draw the two lane-line curves
+    left_fit_x = left_fit_x.astype(np.int32)
+    right_fit_x = right_fit_x.astype(np.int32)
+    ploty = ploty.astype(np.int32)
+
+    for idx in range(len(ploty) - 1):  # draw every segment, including the last one
+        cv2.line(road_warped, (left_fit_x[idx], ploty[idx]), (left_fit_x[idx + 1], ploty[idx + 1]), (255, 0, 0), 20)
+        cv2.line(road_warped, (right_fit_x[idx], ploty[idx]), (right_fit_x[idx + 1], ploty[idx + 1]), (255, 0, 0), 20)
+
+    # Warp the blank back to original image space using the inverse perspective matrix (Minv)
+    road_unwarped = cv2.warpPerspective(road_warped, Minv, (w, h))
+
+    blend_img = cv2.addWeighted(undistorted_img, 1., road_unwarped, 0.8, 0)
+
+    return blend_img
+
+
+if __name__ == '__main__':
+    from undistort_img import undistort, calibrate
+    from gradient import get_binary_img
+    from perspective_transform import get_transform_matrix, warped_birdview
+    from detect_lanelines import find_lane_sliding_window
+
+    nwindows = 9
+    margin = 100
+    minpix = 50
+    thresh_gradx = (20, 100)
+    thresh_grady = (20, 100)
+    thresh_mag = (30, 100)
+    thresh_dir = (0.7, 1.3)
+    thresh_s_channel = (170, 255)
+
+    ym_per_pix = 30 / 720  # meters per pixel in y dimension
+    xm_per_pix = 3.7 / 700  # meters per pixel in x dimension
+
+    output_images_dir = '../output_images'
+    output_detectedline_img = os.path.join(output_images_dir, 'onroad_test_images')
+    if not os.path.isdir(output_detectedline_img):
+        os.makedirs(output_detectedline_img)
+
+    img_paths = glob('../test_images/*.jpg')
+
+    ret, mtx, dist, rvecs, tvecs = calibrate(is_save=True)
+
+    for idx, img_path_ in enumerate(img_paths):
+        img_fn = os.path.basename(img_path_)[:-4]
+        img = cv2.cvtColor(cv2.imread(img_path_), cv2.COLOR_BGR2RGB)  # BGR --> RGB
+        undistorted_img = undistort(img, mtx, dist)
+        binary_output = get_binary_img(undistorted_img, thresh_gradx, thresh_grady, thresh_mag, thresh_dir,
+                                       thresh_s_channel)
+
+        h, w = binary_output.shape[:2]
+        src = np.float32([
+            [(w / 2) - 55, h / 2 + 100],
+            [((w / 6) - 10), h],
+            [(w * 5 / 6) + 60, h],
+            [(w / 2 + 55), h / 2 + 100]
+        ])
+        dst = np.float32([
+            [(w / 4), 0],
+            [(w / 4), h],
+            [(w * 3 / 4), h],
+            [(w * 3 / 4), 0]
+        ])
+        M, Minv = get_transform_matrix(src, dst)
+
+        warped = warped_birdview(img, M)
+        binary_birdview = warped_birdview(binary_output, M)
+
+        lane_left, lane_right = Line(buffer_len=20), Line(buffer_len=20)
+
+        # find_lane_sliding_window() returns the annotated birdview and the updated Line objects
+        out_img, lane_left, lane_right, left_fit_x, right_fit_x, ploty = find_lane_sliding_window(
+            binary_birdview, nwindows, margin, minpix, lane_left, lane_right, ym_per_pix, xm_per_pix)
+
+        blend_img = transform_to_the_road(undistorted_img, Minv, left_fit_x, right_fit_x, ploty)
+
+        plt.cla()
+        # plt.plot(left_fit_x, ploty, color='yellow')
+        # plt.plot(right_fit_x, ploty, color='yellow')
+        plt.imshow(blend_img)
+        plt.savefig(os.path.join(output_detectedline_img, '{}.jpg'.format(img_fn)))
+        # plt.imshow(binary_warped, cmap='gray')
diff --git a/test_images/straight_lines1.jpg b/test_images/straight_lines1.jpg
new file mode 100644
index 0000000..a5c5036
Binary files /dev/null and b/test_images/straight_lines1.jpg differ
diff --git a/test_images/straight_lines2.jpg b/test_images/straight_lines2.jpg
new file mode 100644
index 0000000..f822d16
Binary files /dev/null and b/test_images/straight_lines2.jpg differ
diff --git a/test_images/test1.jpg b/test_images/test1.jpg
new file mode 100644
index 0000000..672c9e4
Binary files /dev/null and b/test_images/test1.jpg differ
diff --git a/test_images/test2.jpg b/test_images/test2.jpg
new file mode 100644
index 0000000..a862b90
Binary files /dev/null and b/test_images/test2.jpg differ
diff --git a/test_images/test3.jpg b/test_images/test3.jpg
new file mode 100644
index 0000000..1527851
Binary files /dev/null and b/test_images/test3.jpg differ
diff --git a/test_images/test4.jpg b/test_images/test4.jpg
new file mode 100644
index 0000000..8774255
Binary files /dev/null and b/test_images/test4.jpg differ
diff --git a/test_images/test5.jpg b/test_images/test5.jpg
new file mode 100644
index 0000000..b2f65e9
Binary files /dev/null and b/test_images/test5.jpg differ
diff --git a/test_images/test6.jpg b/test_images/test6.jpg
new file mode 100644
index 0000000..dab0c17
Binary files /dev/null and b/test_images/test6.jpg differ
diff --git a/test_videos_output/project_video.mp4 b/test_videos_output/project_video.mp4
new file mode 100644
index 0000000..df9e9e8
Binary files /dev/null and b/test_videos_output/project_video.mp4 differ
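
For quick reference, the per-frame pipeline that `src/main.py` wires together can be exercised on a single test image roughly as follows. This is a minimal sketch, not part of the submitted code: it assumes it is run from inside `src/` so the relative paths `../camera_cal` and `../test_images` used by the scripts above resolve, and it uses the same hardcoded thresholds and warp points.

```python
import cv2
import numpy as np

from undistort_img import calibrate, undistort
from gradient import get_binary_img
from perspective_transform import get_transform_matrix, warped_birdview

# 1. Calibrate once from the chessboard images, then undistort a frame
ret, mtx, dist, rvecs, tvecs = calibrate(is_save=False)
img = cv2.cvtColor(cv2.imread('../test_images/test3.jpg'), cv2.COLOR_BGR2RGB)
undistorted = undistort(img, mtx, dist)

# 2. Combine the gradient and S-channel thresholds into a binary image
binary = get_binary_img(undistorted, (20, 100), (20, 100), (30, 100), (0.7, 1.3), (170, 255))

# 3. Warp the binary image to the bird's-eye view
h, w = binary.shape[:2]
src = np.float32([[(w / 2) - 55, h / 2 + 100], [(w / 6) - 10, h],
                  [(w * 5 / 6) + 60, h], [(w / 2) + 55, h / 2 + 100]])
dst = np.float32([[w / 4, 0], [w / 4, h], [w * 3 / 4, h], [w * 3 / 4, 0]])
M, Minv = get_transform_matrix(src, dst)
binary_birdview = warped_birdview(binary, M)
```

From here, `find_lane_sliding_window()` in `detect_lanelines.py` and `transform_to_the_road()` in `utils.py` produce the annotated frame, exactly as in `process_image()`.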