This repository contains starting files for the Behavioral Cloning Project.
In this project, I demonstrate how deep neural networks and convolutional neural networks can be used to clone driving behavior. I trained, validated, and tested a model using Keras. The model outputs a steering angle to an autonomous vehicle.
A simulator has been provided by Udacity where you can steer a car around a track for data collection. You'll use image data and steering angles to train a neural network and then use this model to drive the car autonomously around the track.
Overview of code:
- model.py (script used to create and train the model)
- drive.py (script to drive the car - feel free to modify this file)
- model.h5 (a trained Keras model)
- DataAnalysis.ipynb (notebook for analyzing the data collected during training)
- config.py (script for initializing parameters)
- load_data.py (preprocessing steps and batch generators; see the sketch after this list)
- video.mp4 (a video recording of your vehicle driving autonomously around the track for at least one full lap)
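As a rough sketch of what load_data.py provides, the following shows a typical Keras-compatible batch generator for this kind of project. The function and variable names (batch_generator, samples) are illustrative assumptions rather than the exact code in this repository.

```python
# Illustrative sketch only -- the actual implementation lives in load_data.py.
# Assumes each sample is a (image_path, steering_angle) pair; names are hypothetical.
import numpy as np
import cv2
from sklearn.utils import shuffle

def batch_generator(samples, batch_size=32):
    """Yield (images, steering_angles) batches indefinitely for Keras training."""
    num_samples = len(samples)
    while True:
        samples = shuffle(samples)
        for offset in range(0, num_samples, batch_size):
            batch = samples[offset:offset + batch_size]
            images, angles = [], []
            for image_path, angle in batch:
                # Simulator images are read as BGR by OpenCV; convert to RGB.
                image = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
                images.append(image)
                angles.append(angle)
            yield np.array(images), np.array(angles)
```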
The goals / steps of this project are the following:
- Use the simulator to collect data of good driving behavior
- Design, train and validate a model that predicts a steering angle from image data
- Use the model to drive the vehicle autonomously around the first track in the simulator. The vehicle should remain on the road for an entire loop around the track.
Further details about the model architecture, the image preprocessing, and the implementation are given in report.md.
This project requires the following:
The lab environment can be created with the CarND Term1 Starter Kit; see the starter kit repository for details.
The simulator can be downloaded from the classroom. In the classroom, we have also provided sample data that you can optionally use to help train your model.
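The sample data is typically distributed as a driving_log.csv alongside an IMG directory. Assuming the usual simulator column layout (center, left, right, steering, throttle, brake, speed), a quick look at it could be done like this; the path and column names are assumptions, so adjust them to match your download.

```python
# Sketch of inspecting the sample data; assumes the usual simulator CSV layout
# (center, left, right, steering, throttle, brake, speed) with a header row.
import pandas as pd

log = pd.read_csv('data/driving_log.csv')
print(log[['center', 'steering']].head())
print('Samples:', len(log), 'Mean steering angle:', log['steering'].mean())
```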
Usage of drive.py requires you have saved the trained model as an h5 file, i.e. model.h5. See the Keras documentation for how to create this file using the following command:
model.save(filepath)
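For instance, at the end of training one might build, fit, and save a model as below. This is a minimal sketch: the placeholder architecture is not the one used in model.py, and X_train/y_train are assumed to be prepared elsewhere (e.g. by load_data.py).

```python
# Minimal sketch: build, train, and save a Keras model as an .h5 file.
# The architecture here is a placeholder, not the one described in report.md.
from keras.models import Sequential
from keras.layers import Flatten, Dense, Lambda

model = Sequential()
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))  # normalize pixels
model.add(Flatten())
model.add(Dense(1))  # single output: the steering angle
model.compile(loss='mse', optimizer='adam')

# X_train / y_train are assumed to be prepared elsewhere (e.g. by load_data.py).
model.fit(X_train, y_train, validation_split=0.2)
model.save('model.h5')  # the saved file is what drive.py loads
```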
Once the model has been saved, it can be used with drive.py using this command:
python drive.py model.h5
The above command will load the trained model and use the model to make predictions on individual images in real-time and send the predicted angle back to the server via a websocket connection.
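Conceptually, the per-frame prediction step boils down to something like the sketch below; drive.py itself additionally handles the simulator's websocket protocol, image decoding, and throttle control, so this is not its actual code.

```python
# Simplified sketch of the per-frame prediction step (not the actual drive.py code).
import numpy as np
from keras.models import load_model

model = load_model('model.h5')

def predict_steering(image):
    """Return the predicted steering angle for one RGB camera frame (numpy array)."""
    # The model expects a batch dimension, so wrap the single image in a batch of one.
    return float(model.predict(image[None, ...])[0, 0])
```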
python drive.py model.h5 run1
The fourth argument, run1, is the directory in which the images seen by the agent will be saved. If the directory already exists, it will be overwritten.
ls run1
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_424.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_451.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_477.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_528.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_573.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_618.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_697.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_723.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_749.jpg
[2017-01-09 16:10:23 EST] 12KiB 2017_01_09_21_10_23_817.jpg
...
The image file name is a timestamp of when the image was seen. This information is used by video.py
to create a chronological video of the agent driving.
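Because of this naming scheme, a plain lexicographic sort of the filenames already puts the frames in chronological order. As an illustrative sketch (assuming the final field of the name is milliseconds), the timestamp can be recovered like this:

```python
# Illustrative sketch: recover a datetime from a recorded frame's filename.
from datetime import datetime

name = '2017_01_09_21_10_23_424.jpg'
stamp = datetime.strptime(name[:-4], '%Y_%m_%d_%H_%M_%S_%f')
print(stamp)  # 2017-01-09 21:10:23.424000
```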
python video.py run1
Creates a video based on the images found in the run1 directory. The name of the video will be the name of the directory followed by '.mp4', so in this case the video will be run1.mp4.
Optionally one can specify the FPS (frames per second) of the video:
python video.py run1 --fps 48
The video will run at 48 FPS. The default FPS is 60.
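For reference, assembling such a video amounts to something like the sketch below, assuming moviepy is available (as in the CarND Term1 Starter Kit); this is not necessarily video.py's exact code.

```python
# Sketch of turning a directory of frames into an mp4 (not necessarily video.py's exact code).
# Assumes moviepy is installed, as in the CarND Term1 Starter Kit.
import os
from moviepy.editor import ImageSequenceClip

image_dir = 'run1'
images = sorted(os.path.join(image_dir, f) for f in os.listdir(image_dir)
                if f.endswith('.jpg'))
clip = ImageSequenceClip(images, fps=60)
clip.write_videofile(image_dir + '.mp4')
```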