This repo contains all my work and source code for Udacity's Self-Driving Car Nanodegree.
Focuses on applying Deep Learning and Computer Vision to automotive tasks:
- Implementation of a simple lane detector using OpenCV (see the lane detection sketch after this list).
- Train a classifier for the German Traffic Sign Dataset using CNNs.
- Use CNNs with Keras to clone the driving behavior of a vehicle in the simulator. The project covers data collection strategies in the simulator, data preprocessing, and implementation of an end-to-end CNN that maps pixels from a single camera image directly to steering commands (see the steering CNN sketch after this list).
- Detect lane boundaries and compute numerical estimates of lane curvature and vehicle position, then display them in a video output. The project covers camera calibration, color and gradient thresholds, perspective transforms, and a sliding-window search to identify lane lines (see the sliding-window sketch after this list).
- Apply different image processing techniques and implement a sliding-window technique to search for vehicles in images, then detect and estimate the bounding boxes of vehicles in a video input.
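
A minimal sketch of the kind of pipeline used in the simple lane detector, assuming the classic grayscale → Canny → region mask → Hough transform chain; file paths and threshold values are illustrative, not the ones used in the project:

```python
import cv2
import numpy as np

def detect_lane_lines(image):
    """Return the input image with detected lane line segments drawn on it."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region of interest in front of the car.
    h, w = edges.shape
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 60),
                     (w // 2 + 50, h // 2 + 60), (w, h)]], dtype=np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform to find line segments.
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=20,
                            minLineLength=40, maxLineGap=100)
    output = image.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines.reshape(-1, 4):
            cv2.line(output, (x1, y1), (x2, y2), (0, 0, 255), 3)
    return output

# Example usage (the image path is hypothetical):
# result = detect_lane_lines(cv2.imread("test_images/solidWhiteRight.jpg"))
# cv2.imwrite("out.jpg", result)
```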
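
The traffic sign classifier and the behavioral cloning project both build CNNs with Keras. Below is a minimal sketch of an end-to-end steering model in the spirit of NVIDIA's PilotNet; layer sizes, crop values, and the training call are illustrative rather than the exact ones used in the project:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense

def build_steering_model(input_shape=(160, 320, 3)):
    """CNN that maps a single camera image to a steering angle (regression)."""
    # Layer sizes are illustrative, in the spirit of NVIDIA's PilotNet.
    model = Sequential([
        # Normalize pixels to roughly [-0.5, 0.5] and crop sky/hood rows.
        Lambda(lambda x: x / 255.0 - 0.5, input_shape=input_shape),
        Cropping2D(cropping=((60, 25), (0, 0))),
        Conv2D(24, 5, strides=2, activation="relu"),
        Conv2D(36, 5, strides=2, activation="relu"),
        Conv2D(48, 5, strides=2, activation="relu"),
        Conv2D(64, 3, activation="relu"),
        Flatten(),
        Dense(100, activation="relu"),
        Dense(50, activation="relu"),
        Dense(1),  # steering angle, no activation
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

# model = build_steering_model()
# model.fit(images, steering_angles, validation_split=0.2, epochs=5)
```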
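
Both of the last two projects rely on a windowed search. The sketch below shows the lane-line variant: starting from a histogram peak on a bird's-eye binary image (produced by calibration, thresholding, and a perspective transform), slide windows up the image, collect lane pixels, and fit a polynomial. Window counts and margins are illustrative:

```python
import numpy as np

def sliding_window_lane_fit(binary_warped, nwindows=9, margin=100, minpix=50):
    """Fit a 2nd-order polynomial to the left lane line in a warped binary image.

    Window count, margin and minpix are illustrative defaults.
    """
    # Histogram of the lower half gives a starting x position for the lane base.
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    x_base = np.argmax(histogram[:histogram.shape[0] // 2])  # left half only

    nonzeroy, nonzerox = binary_warped.nonzero()
    window_height = binary_warped.shape[0] // nwindows
    x_current, lane_inds = x_base, []

    for window in range(nwindows):
        y_low = binary_warped.shape[0] - (window + 1) * window_height
        y_high = binary_warped.shape[0] - window * window_height
        x_low, x_high = x_current - margin, x_current + margin

        good = ((nonzeroy >= y_low) & (nonzeroy < y_high) &
                (nonzerox >= x_low) & (nonzerox < x_high)).nonzero()[0]
        lane_inds.append(good)
        if len(good) > minpix:
            # Re-center the next window on the mean x of the pixels found.
            x_current = int(np.mean(nonzerox[good]))

    lane_inds = np.concatenate(lane_inds)
    # x = A*y^2 + B*y + C, fitted in pixel space.
    return np.polyfit(nonzeroy[lane_inds], nonzerox[lane_inds], 2)
```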
Focuses on building the core robotic functions of an autonomous vehicle system: sensor fusion, localization, and control. This module was built in partnership with Mercedes Benz and Uber ATG.
- Kalman filters are the key mathematical tool for fusing sensor data. Implement an Extended Kalman Filter to combine measurements from multiple sensors (lidar and radar) through a non-linear measurement model and estimate the state of a moving object (see the Kalman filter sketch after this list).
- The Unscented Kalman Filter is a mathematically sophisticated approach for combining sensor data that performs better than the EKF in many situations. Implement an Unscented Kalman Filter to estimate the state of a moving object of interest from noisy lidar and radar measurements.
- Use a probabilistic sampling technique known as a particle filter, implemented in C++, that takes real-world data to localize a lost vehicle (see the particle filter sketch after this list).
- Implement the classic closed-loop controller, a proportional-integral-derivative (PID) control system, in C++ to drive a car around a track in Unity's simulator (see the PID sketch after this list).
- Implementation of a Model Predictive Controller in C++.
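
The Kalman filter projects are implemented in C++; the NumPy sketch below shows only the core predict/update math for a constant-velocity state [px, py, vx, vy] with a linear lidar update. In the EKF project the radar update swaps H for the Jacobian of the polar measurement function; all matrices and noise values here are illustrative:

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Project the state and covariance forward one time step."""
    x = F @ x
    P = F @ P @ F.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Standard (linear) Kalman update, e.g. for a lidar position measurement."""
    y = z - H @ x                      # innovation
    S = H @ P @ H.T + R                # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Example with a constant-velocity model; dt and noise values are illustrative.
dt = 0.1
x = np.zeros(4)                         # [px, py, vx, vy]
P = np.eye(4)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
Q = np.eye(4) * 0.01
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)  # lidar measures px, py
R = np.eye(2) * 0.02

x, P = kf_predict(x, P, F, Q)
x, P = kf_update(x, P, np.array([0.5, 0.4]), H, R)
```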
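
The particle filter project is written in C++ against real map and sensor data; this toy 1-D NumPy sketch shows only the essential predict → weight → resample cycle. All numbers are illustrative:

```python
import numpy as np

def particle_filter_step(particles, control, measurement,
                         motion_std=0.5, meas_std=1.0):
    """One predict/weight/resample cycle for 1-D localization (toy example)."""
    n = len(particles)

    # Predict: apply the control (displacement) plus motion noise.
    particles = particles + control + np.random.normal(0.0, motion_std, n)

    # Weight: Gaussian likelihood of the measurement given each particle.
    weights = np.exp(-0.5 * ((measurement - particles) / meas_std) ** 2)
    weights /= np.sum(weights)

    # Resample: draw particles in proportion to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx]

# Toy usage: position unknown, measurements are noisy position readings.
particles = np.random.uniform(0.0, 100.0, 1000)
for control, z in [(1.0, 43.2), (1.0, 44.1), (1.0, 45.3)]:
    particles = particle_filter_step(particles, control, z)
print("estimated position:", particles.mean())
```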
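
The PID project runs in C++ inside the simulator loop; the sketch below is a minimal Python version of the same update law, driven by the cross-track error (CTE), with illustrative gains:

```python
class PID:
    """Proportional-integral-derivative controller on the cross-track error."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_error = 0.0
        self.integral = 0.0

    def update(self, cte, dt):
        """Return the steering command for the current cross-track error."""
        self.integral += cte * dt
        derivative = (cte - self.prev_error) / dt
        self.prev_error = cte
        return -(self.kp * cte + self.ki * self.integral + self.kd * derivative)

# pid = PID(kp=0.1, ki=0.0001, kd=1.5)   # gains are illustrative
# steering = pid.update(cte=0.3, dt=0.02)
```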
Path planning is the brains of a self-driving car. It’s how a vehicle decides how to get where it’s going, both at the macro and micro levels. It has 3 core components:
- environmental prediction: predict what surrounding vehicles will do next based on their past behavior.
- behavioral planning: at each time step, the path planner must choose a maneuver to perform. This requires building a finite-state machine (FSM) to represent the different possible maneuvers the vehicle could choose, plus a cost function that assigns a cost to each maneuver (see the sketch below).
- trajectory generation: build candidate trajectories for the vehicle to follow, using C++ and the Eigen linear algebra library.
The project consists of building an end-to-end path planner to safely navigate around a virtual highway with other cars.
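
A minimal sketch of the behavioral planning idea described above, assuming a toy highway FSM and a toy cost function: enumerate the maneuvers allowed from the current state, score each candidate, and pick the cheapest. State names, cost terms, and the `predictions` format are illustrative; the actual planner is written in C++:

```python
# Allowed transitions of a toy highway-driving FSM (names are illustrative).
TRANSITIONS = {
    "keep_lane":                 ["keep_lane", "prepare_lane_change_left", "prepare_lane_change_right"],
    "prepare_lane_change_left":  ["keep_lane", "prepare_lane_change_left", "lane_change_left"],
    "prepare_lane_change_right": ["keep_lane", "prepare_lane_change_right", "lane_change_right"],
    "lane_change_left":          ["keep_lane"],
    "lane_change_right":         ["keep_lane"],
}

def maneuver_cost(maneuver, ego, predictions):
    """Toy cost: prefer faster lanes, penalize lane changes slightly."""
    target_lane = ego["lane"]
    if "left" in maneuver:
        target_lane -= 1
    elif "right" in maneuver:
        target_lane += 1
    if target_lane < 0 or target_lane > 2:
        return float("inf")                      # off the road
    lane_speed = predictions.get(target_lane, ego["speed_limit"])
    cost = (ego["speed_limit"] - lane_speed) / ego["speed_limit"]
    if maneuver.startswith("lane_change"):
        cost += 0.05                             # small penalty for changing lanes
    return cost

def choose_maneuver(state, ego, predictions):
    """Pick the lowest-cost maneuver among those the FSM allows."""
    return min(TRANSITIONS[state], key=lambda m: maneuver_cost(m, ego, predictions))

ego = {"lane": 1, "speed_limit": 50.0}
predictions = {0: 45.0, 1: 38.0, 2: 49.0}        # predicted lane speeds (toy values)
print(choose_maneuver("keep_lane", ego, predictions))
```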
- Semantic segmentation identifies free space on the road at pixel-level granularity, which improves decision-making ability. This project consists of building a Fully Convolutional Network (FCN) to perform semantic segmentation of road image data.
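
A minimal Keras sketch of the general shape of such a network, assuming a pre-trained VGG16 encoder, 1x1 convolutions, transposed-convolution upsampling, and one skip connection; the layer choices are illustrative, not the exact architecture used in the project:

```python
from tensorflow.keras import Model
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import Add, Conv2D, Conv2DTranspose

def build_fcn(input_shape=(160, 576, 3), num_classes=2):
    """Small FCN-style road segmentation network on a VGG16 encoder.

    Layer choices are illustrative, not the project's exact architecture.
    """
    encoder = VGG16(include_top=False, weights="imagenet", input_shape=input_shape)
    pool4 = encoder.get_layer("block4_pool").output   # 1/16 resolution skip
    pool5 = encoder.get_layer("block5_pool").output   # 1/32 resolution

    # 1x1 convolutions reduce both feature maps to num_classes channels.
    score5 = Conv2D(num_classes, 1, padding="same")(pool5)
    score4 = Conv2D(num_classes, 1, padding="same")(pool4)

    # Upsample the deepest features 2x and fuse with the pool4 skip connection.
    up5 = Conv2DTranspose(num_classes, 4, strides=2, padding="same")(score5)
    fused = Add()([up5, score4])

    # Upsample 16x back to the input resolution; per-pixel class scores.
    out = Conv2DTranspose(num_classes, 32, strides=16, padding="same",
                          activation="softmax")(fused)
    return Model(encoder.input, out)

# model = build_fcn()
# model.compile(optimizer="adam", loss="categorical_crossentropy")
```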
Design and implementation of the perception, planning, and control subsystems to enable a physical car ("Carla", Udacity's self-driving car) to drive around a test track using waypoint navigation, while avoiding obstacles and stopping at traffic lights. It requires integrating ROS nodes and Autoware modules with Carla's software development environment.
Tags: Perception, Control, Planning, ROS
- Implementation of the traffic light detector and classifier that is integrated into the self-driving car. It includes the TensorFlow model trained on three different datasets using the Object Detection API.
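
A minimal sketch of how such a detector typically hangs together as a ROS node: subscribe to the camera topic, run the classifier on each frame, and publish the detected light state. The topic names and the `classify` interface are placeholders, not the actual interfaces from the project:

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge

class TrafficLightDetectorNode(object):
    """Subscribes to camera images, classifies the light, publishes the state."""

    def __init__(self, classifier):
        self.bridge = CvBridge()
        self.classifier = classifier   # wraps the trained TensorFlow model (placeholder)
        self.state_pub = rospy.Publisher("/traffic_light_state", String, queue_size=1)
        rospy.Subscriber("/image_color", Image, self.image_cb, queue_size=1)

    def image_cb(self, msg):
        # Convert the ROS image to an OpenCV BGR array and classify it.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        state = self.classifier.classify(frame)   # e.g. "RED", "YELLOW", "GREEN"
        self.state_pub.publish(String(data=state))

if __name__ == "__main__":
    rospy.init_node("tl_detector")

    # Stand-in for the TensorFlow Object Detection model (placeholder).
    class DummyClassifier(object):
        def classify(self, frame):
            return "UNKNOWN"

    TrafficLightDetectorNode(DummyClassifier())
    rospy.spin()
```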