This repository contains my solutions to the projects of the Udacity Self-Driving Car Engineer Nanodegree
Demo playlist with all projects (except P2) https://www.youtube.com/playlist?list=PL2jvBN6kdzT6ex5pwg4tLS7098rBFS4xl
Detection of lane lines on a video
Using the Canny edge detector and Hough transform implementations from OpenCV, I extracted candidate lane lines from individual frames based on confidence level and assumptions about their spatial characteristics (position, angle, and length). The final lane lines in an individual frame were weighted averages of the frame's candidate lines (longer candidates had larger weights). The resulting lane lines in the video were averages of the final lane lines from the current and several previous frames. All parameters were tuned manually
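The length-weighted averaging of Hough candidates can be sketched roughly as below (a minimal NumPy sketch; the function name, slope threshold, and segment format are illustrative, not the repository's actual code):

```python
import numpy as np

def average_lane_line(candidates, min_abs_slope=0.5):
    """Length-weighted average of candidate line segments.

    candidates: list of (x1, y1, x2, y2) segments, e.g. from cv2.HoughLinesP.
    Near-horizontal segments are rejected as noise.
    Returns (slope, intercept) of the averaged line, or None.
    """
    slopes, intercepts, weights = [], [], []
    for x1, y1, x2, y2 in candidates:
        if x2 == x1:
            continue  # vertical segment: skip to avoid division by zero
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < min_abs_slope:
            continue  # too horizontal to be a lane line
        length = np.hypot(x2 - x1, y2 - y1)
        slopes.append(slope)
        intercepts.append(y1 - slope * x1)
        weights.append(length)  # longer candidates get larger weights
    if not weights:
        return None
    w = np.array(weights)
    return (np.average(slopes, weights=w), np.average(intercepts, weights=w))
```

Temporal smoothing over several frames would then average these (slope, intercept) pairs.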
Simple Case - https://youtu.be/En4_FAs5c-s
More Advanced Case - https://youtu.be/5emFX8R4zpA
Challenge - https://youtu.be/U8C0otDC1F8
Python
, Computer Vision
, OpenCV
Classification of traffic signs
After some initial exploration of the dataset, I augmented it by rotating images. I did not convert the images to grayscale, in order to keep the color information. Rotation not only increased the size of the training data but also reflects reality: traffic signs are often observed at an angle, depending on the relative position of the car and the sign. The LeNet architecture worked quite well on the augmented data; I only had to modify the input and output dimensions to fit the dataset. The model was then tested on traffic signs found on the internet
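The rotation-based augmentation can be sketched as follows (assuming SciPy for the rotation; the function name and angle choices are illustrative, not the repository's actual code):

```python
import numpy as np
from scipy.ndimage import rotate

def augment_by_rotation(images, labels, angles=(-10, 10)):
    """Expand a dataset with rotated copies of each image.

    images: array of shape (N, H, W, C); labels: array of shape (N,).
    Color channels are kept (no grayscale conversion) so the model
    can use the color information. Returns augmented images and labels.
    """
    aug_images = [images]
    aug_labels = [labels]
    for angle in angles:
        # axes=(1, 2) rotates in the image plane; reshape=False
        # keeps the original image dimensions
        rotated = rotate(images, angle, axes=(1, 2), reshape=False, order=1)
        aug_images.append(rotated)
        aug_labels.append(labels)
    return np.concatenate(aug_images), np.concatenate(aug_labels)
```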
http://benchmark.ini.rub.de/?section=gtsrb&subsection=dataset
TensorFlow
, Deep Learning
, LeNet
Cloning the behavior of a car manually driven around the track in the Udacity simulator
The simulator provides two modes: training and autonomous. Training mode allows driving manually in order to record the road situations that will be used for model training. Autonomous mode simulates driving by the trained agent.
Using training data provided by Udacity, I trained a model based on the architecture from NVIDIA's paper "End to End Learning for Self-Driving Cars".
Having decent training data was the key here. Some ideas on designing training data:
- It should include not only "proper" driving along the center of the track, but also recovery from driving off the road, so that the trained agent can correct its behavior in similar situations
- Data augmentation comes virtually for free by including images from the side cameras with modified steering angles (if such cameras are available). For example, an image from the right camera gets a steering angle corrected slightly to the left, as if the car were recovering from drifting off the right side of the road
- Mirroring images and their corresponding steering angles doubles the training dataset for free and decreases overfitting (because the track is a ring, turns are biased toward one direction)
- Cropping irrelevant areas of the input image to focus only on the road
- One of the most important steps was to reduce the number of samples with a ~0° steering angle: during normal driving this is the most common situation, and such samples overwhelmed those with non-zero steering angles, so the agent learned to just drive straight
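The data-design ideas above can be sketched like this (illustrative names; the correction offset and keep probability are assumed placeholder values, not the tuned ones):

```python
import numpy as np

STEERING_CORRECTION = 0.2  # assumed offset; in practice tuned empirically

def augment_sample(center_img, left_img, right_img, angle):
    """Yield (image, steering angle) pairs from one simulator sample.

    Side-camera images get a corrected angle, as if the car were
    recovering back toward the center; mirroring doubles the data.
    """
    samples = [
        (center_img, angle),
        (left_img, angle + STEERING_CORRECTION),   # steer back to the right
        (right_img, angle - STEERING_CORRECTION),  # steer back to the left
    ]
    for img, a in samples:
        yield img, a
        yield np.fliplr(img), -a  # mirrored image, negated angle

def drop_straight_samples(angles, keep_prob=0.1, threshold=0.05, rng=None):
    """Boolean mask keeping only a fraction of ~0° steering samples."""
    rng = np.random.default_rng(rng)
    angles = np.asarray(angles)
    straight = np.abs(angles) < threshold
    return ~straight | (rng.random(angles.shape) < keep_prob)
```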
https://www.youtube.com/watch?v=6txXwArfLRY
Keras
, Data Augmentation
Identification of lane boundaries in a video from a front-facing camera on a car
I calculated the camera matrix and distortion coefficients to undistort the camera images, so that the calculated radius of curvature was more accurate. Instead of the Canny edge detection approach from Project 1, I used a combination of Sobel operators to extract the gradient information relevant to lane boundaries and output it as a binary (black-and-white) image. I used the HLS color space because it provided the most consistent results under different lighting conditions. The resulting binary image was perspective-transformed to a bird's-eye view to detect the lane boundaries and calculate the radius of curvature as well as the car's position relative to the center of the road
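The gradient-to-binary step can be sketched like this (using SciPy's Sobel filter for self-containment; the repository uses OpenCV, and the threshold values here are assumed, not the tuned ones):

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_binary(channel, thresh=(30, 255)):
    """Binary image from the magnitude of Sobel gradients.

    channel: 2-D array, e.g. the S channel of an HLS image, which
    tends to stay consistent under varying lighting.
    """
    gx = sobel(channel.astype(float), axis=1)  # horizontal gradient
    gy = sobel(channel.astype(float), axis=0)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    # scale to 0..255 so thresholds are independent of image content
    scaled = np.uint8(255 * magnitude / max(magnitude.max(), 1e-9))
    binary = np.zeros_like(scaled)
    binary[(scaled >= thresh[0]) & (scaled <= thresh[1])] = 1
    return binary
```

The perspective transform to bird's-eye view would then be applied to this binary image.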
OpenCV
, Sobel Operator
, Camera Calibration
Identification and tracking of vehicles moving in the same direction in a video from a front-facing camera on a car
Using a sliding-window approach and HOG features, I generated a heatmap of possible vehicle detections for each frame. To detect a car in a window, I trained a Support Vector Classifier on data from the GTI Vehicle Image Database and the KITTI Vision Benchmark Suite. I then filtered out detections that received fewer votes (dim areas on the heatmap) as probable false positives
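The heatmap voting and thresholding can be sketched as follows (illustrative function names and vote threshold; the window format is an assumption):

```python
import numpy as np

def heatmap_from_windows(shape, hot_windows):
    """Accumulate votes from positive sliding-window detections.

    hot_windows: list of ((x1, y1), (x2, y2)) boxes where the
    classifier predicted "car". Overlapping boxes brighten the map.
    """
    heat = np.zeros(shape, dtype=float)
    for (x1, y1), (x2, y2) in hot_windows:
        heat[y1:y2, x1:x2] += 1  # each detection adds one vote
    return heat

def threshold_heatmap(heat, threshold=2):
    """Zero out dim areas: detections with few votes are likely false."""
    heat = heat.copy()
    heat[heat < threshold] = 0
    return heat
```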
http://www.gti.ssr.upm.es/data/Vehicle_database.html
http://www.cvlibs.net/datasets/kitti/
p5_vehicle_detection_and_tracking
Computer Vision
, Support Vector Classifier
, Histogram of Oriented Gradients
, Sliding Windows
Tracking a moving object using simulated radar and lidar data
In this project, the goals were to get familiar with C++ and use it to implement an Extended Kalman Filter. Based on simulated measurements of an object moving around the vehicle, I used the Extended Kalman Filter to track the object's position.
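The predict/update cycle can be sketched in Python (the project itself is in C++). This shows the Constant Velocity prediction and the linear lidar update; the radar update, which needs the Jacobian of the nonlinear measurement function, is the "Extended" part and is omitted here. The process-noise values are assumed placeholders:

```python
import numpy as np

def predict(x, P, dt, noise_ax=9.0, noise_ay=9.0):
    """Constant Velocity prediction; state x = [px, py, vx, vy]."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    dt2, dt3, dt4 = dt**2, dt**3, dt**4
    Q = np.array([[dt4/4*noise_ax, 0, dt3/2*noise_ax, 0],
                  [0, dt4/4*noise_ay, 0, dt3/2*noise_ay],
                  [dt3/2*noise_ax, 0, dt2*noise_ax, 0],
                  [0, dt3/2*noise_ay, 0, dt2*noise_ay]])
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def update_lidar(x, P, z, R):
    """Linear update: lidar observes [px, py] directly.

    For radar (range, bearing, range rate) H would be replaced by the
    Jacobian of the nonlinear measurement function h(x).
    """
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)
    y = z - H @ x                       # innovation
    S = H @ P @ H.T + R                 # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P
```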
C++
, Extended Kalman Filter
, Constant Velocity model (CV)
Improved tracking of a nonlinearly moving object using an Unscented Kalman Filter
The solution in the previous project assumed that the tracked object kept going straight (Constant Velocity model). Because of that, when the object turned, its estimated position tended to fall outside the circle it was actually driving. In this project, I implemented an Unscented Kalman Filter with the Constant Turn Rate and Velocity (CTRV) model to track a nonlinearly moving object
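The CTRV process model at the core of this filter can be sketched as follows (Python sketch of the C++ project; only the deterministic state transition is shown, without process noise or the sigma-point machinery):

```python
import numpy as np

def ctrv_predict(state, dt):
    """CTRV process model; state = [px, py, v, yaw, yaw_rate].

    Unlike the CV model, the object can move along a circular arc,
    so turns no longer push the estimate off the driven circle.
    """
    px, py, v, yaw, yawd = state
    if abs(yawd) > 1e-6:
        px += v / yawd * (np.sin(yaw + yawd * dt) - np.sin(yaw))
        py += v / yawd * (-np.cos(yaw + yawd * dt) + np.cos(yaw))
    else:  # near-zero yaw rate: avoid division by zero, go straight
        px += v * np.cos(yaw) * dt
        py += v * np.sin(yaw) * dt
    yaw += yawd * dt
    return np.array([px, py, v, yaw, yawd])
```

In the UKF this function is applied to each sigma point, and the predicted mean and covariance are recovered from the transformed points.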
Unscented Kalman Filter
, Constant Turn Rate and Velocity model (CTRV)
, C++
Localization of a vehicle based on a noisy estimate of its initial position and noisy sensor and control data
To localize the vehicle, I implemented a Particle Filter
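Two core pieces of a particle filter, weighting and resampling, can be sketched like this (a simplified Python sketch of the C++ project; it ignores the particle's heading and the map-to-vehicle coordinate transform, and the function names are illustrative):

```python
import numpy as np

def measurement_weight(particle_xy, observed_xy, landmark_xy, std=0.3):
    """Gaussian likelihood of an observation given a particle position.

    Simplification: the expected measurement is just the landmark
    position relative to the particle (heading ignored).
    """
    predicted = landmark_xy - particle_xy
    diff = observed_xy - predicted
    return np.exp(-0.5 * np.dot(diff, diff) / std**2)

def resample(particles, weights, rng=None):
    """Draw a new particle set in proportion to the weights, so
    particles consistent with the measurements survive."""
    rng = np.random.default_rng(rng)
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```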
Particle Filter
, C++
Controlling a vehicle based on the Cross Track Error (CTE) provided by the simulator
To control the vehicle based on the CTE, I implemented a PID controller. The parameters were tuned manually by visually inspecting the resulting behavior in the simulator
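The controller can be sketched in a few lines (Python sketch of the C++ project; gains and the [-1, 1] steering clamp are assumptions about the simulator interface):

```python
class PID:
    """Simple PID controller on the cross track error (CTE)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_cte = 0.0
        self.int_cte = 0.0

    def control(self, cte, dt=1.0):
        d_cte = (cte - self.prev_cte) / dt  # derivative term
        self.prev_cte = cte
        self.int_cte += cte * dt            # integral term
        steer = -(self.kp * cte + self.kd * d_cte + self.ki * self.int_cte)
        return max(-1.0, min(1.0, steer))   # clamp to steering range
```

The proportional term reacts to the current error, the derivative term damps oscillations, and the integral term removes steady-state bias.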
PID Controller
, Motion Planning
, C++
Controlling a vehicle based on its position, orientation, steering angle, throttle, and speed, plus an array of waypoints
To control the vehicle so that it travels along the provided waypoints, I implemented Model Predictive Control (MPC). Visualizations of the reference path (yellow line) and the MPC trajectory (green line) were added for debugging and demonstration purposes. Details of the solution are in the project's folder
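The heart of MPC is predicting the vehicle's trajectory under candidate controls with a kinematic model and scoring it against the reference path; a real solver then minimizes that cost over the control sequence. A simplified sketch (Python sketch of the C++ project; the cost here is distance-to-waypoint only, whereas a full MPC cost also penalizes heading error, control effort, and jerky actuation; Lf is an assumed vehicle constant):

```python
import numpy as np

def kinematic_step(state, steer, throttle, dt, Lf=2.67):
    """One step of the kinematic bicycle model; state = [x, y, psi, v].
    Lf is the distance from the front axle to the center of gravity."""
    x, y, psi, v = state
    return np.array([
        x + v * np.cos(psi) * dt,
        y + v * np.sin(psi) * dt,
        psi + v / Lf * steer * dt,
        v + throttle * dt,
    ])

def trajectory_cost(state, controls, waypoints, dt):
    """Sum of squared distances from predicted positions to waypoints
    over the horizon; the quantity an MPC solver would minimize."""
    cost = 0.0
    for (steer, throttle), (wx, wy) in zip(controls, waypoints):
        state = kinematic_step(state, steer, throttle, dt)
        cost += (state[0] - wx) ** 2 + (state[1] - wy) ** 2
    return cost
```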
Model Predictive Control
, Motion Planning
, C++