Self-Driving Car Engineering

Overview

This repo contains all of my work and source code for Udacity's Self-Driving Car Nanodegree.

Projects


    P1: Basic Lane Finding


    P2: Traffic Signs Classifier


    P3: Behavioral Cloning


    P4: Adv. Lane Finding


    P5: Vehicle Detection


    P6: Extended Kalman Filter


    P7: Unscented Kalman Filter


    P8: Kidnapped Vehicle


    P9: PID Controller


    P10: MPC Controller


    P11: Path Planning


    P12: Semantic Segmentation

Capstone Project


    Systems Integration


    Traffic Light Detection


Term 1

Focuses on applying Deep Learning and Computer Vision to automotive tasks:

  • Implementation of a simple lane detector using OpenCV (a minimal sketch of the pipeline follows this list).
  • Train a classifier for the German Traffic Sign Dataset using CNNs.
  • Use CNNs with Keras to clone the driving behavior of a vehicle in the simulator. The project covers data collection strategies in the simulator, data preprocessing, and implementation of an end-to-end CNN that maps pixels from a single camera image directly to steering commands.
  • Detect lane boundaries, compute numerical estimates of lane curvature and vehicle position, and display them in a video output. The project covers camera calibration, color and gradient thresholds, perspective transforms, and sliding windows to identify lane lines.
  • Apply different image processing techniques and implement a sliding-window search for vehicles in images, then detect and estimate the bounding boxes of vehicles in a video input.
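
As a reference for the kind of pipeline P1 uses, here is a minimal Python/OpenCV sketch of the classic grayscale → blur → Canny → region-of-interest → Hough-transform flow. The thresholds and the trapezoidal region are illustrative assumptions, not the tuned values from the project.

```python
import cv2
import numpy as np

def find_lane_lines(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)

    # Keep only a trapezoidal region in front of the car (assumed geometry).
    h, w = edges.shape
    roi = np.array([[(0, h), (w // 2 - 50, h // 2 + 50),
                     (w // 2 + 50, h // 2 + 50), (w, h)]], dtype=np.int32)
    mask = np.zeros_like(edges)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    # Probabilistic Hough transform returns line segments as (x1, y1, x2, y2).
    lines = cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=20,
                            minLineLength=40, maxLineGap=100)
    annotated = bgr_image.copy()
    if lines is not None:
        for x1, y1, x2, y2 in lines[:, 0]:
            cv2.line(annotated, (x1, y1), (x2, y2), (0, 0, 255), 5)
    return annotated
```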
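P2 and P3 both revolve around small convolutional networks built in Keras. The sketch below shows a generic CNN classifier of the kind P2 trains; the layer sizes, the 32x32x3 input, and the 43-class output are assumptions for illustration, not the submitted architecture. For P3 the same idea applies, except the final layer becomes a single linear unit regressing the steering angle, trained with a mean-squared-error loss.

```python
from tensorflow.keras import layers, models

def build_sign_classifier(input_shape=(32, 32, 3), n_classes=43):
    # Two conv/pool stages followed by a small fully connected head.
    model = models.Sequential([
        layers.Conv2D(32, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```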
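P4 and P5 share two building blocks: a bird's-eye perspective transform and a sliding-window style search over a binary (thresholded) image. The helpers below sketch those ideas in Python/OpenCV; the source/destination points and the histogram-based search are simplified assumptions rather than the project code.

```python
import cv2
import numpy as np

def warp_to_birds_eye(binary_image, src_points, dst_points):
    """Warp the thresholded road image to a top-down view (4 points per set)."""
    M = cv2.getPerspectiveTransform(np.float32(src_points), np.float32(dst_points))
    h, w = binary_image.shape[:2]
    return cv2.warpPerspective(binary_image, M, (w, h))

def find_lane_bases(binary_warped):
    """Locate left/right lane-line bases from a column histogram of the lower half.
    Stacked search windows then move upward from these bases, collecting lane
    pixels to which a second-order polynomial is fit; curvature follows from
    the polynomial coefficients."""
    histogram = np.sum(binary_warped[binary_warped.shape[0] // 2:, :], axis=0)
    midpoint = histogram.shape[0] // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = int(np.argmax(histogram[midpoint:])) + midpoint
    return left_base, right_base
```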

Term 2

Focuses on building the core robotic functions of an autonomous vehicle system: sensor fusion, localization, and control. This module was built in partnership with Mercedes-Benz and Uber ATG.

  • Kalman filters are the key mathematical tool for fusing sensor data. Implement an Extended Kalman Filter to combine measurements from multiple sensors (LiDAR and radar) through a non-linear measurement model and estimate the state of a moving object (the predict/update recursion is sketched after this list).
  • The Unscented Kalman Filter is a more mathematically sophisticated approach to combining sensor data, and performs better than the EKF in many situations. Implement a UKF to estimate the state of a moving object of interest from noisy lidar and radar measurements.
  • Use a probabilistic sampling technique known as a particle filter, implemented in C++ on real-world data, to localize a lost vehicle.
  • Implement the classic closed-loop controller, a proportional-integral-derivative (PID) control system, in C++ to drive a car around a track in Unity's simulator.
  • Implementation of a Model Predictive Controller (MPC) in C++.
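
Both Kalman-filter projects are written in C++ with Eigen; as a language-agnostic reminder of the recursion they build on, here is a plain linear Kalman predict/update step in NumPy. The matrices F, H, Q, R are placeholders, and in the EKF the non-linear radar measurement model is handled by linearising it with its Jacobian.

```python
import numpy as np

def predict(x, P, F, Q):
    x = F @ x                        # propagate the state mean
    P = F @ P @ F.T + Q              # propagate the state covariance
    return x, P

def update(x, P, z, H, R):
    y = z - H @ x                    # innovation (measurement residual)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y                    # corrected state
    P = (np.eye(len(x)) - K @ H) @ P # corrected covariance
    return x, P
```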
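The Kidnapped Vehicle project implements a particle filter in C++; the sketch below shows the predict → reweight → resample loop in Python, with `motion_model` and `measurement_likelihood` as hypothetical callables standing in for the project's motion and sensor models.

```python
import numpy as np

def particle_filter_step(particles, weights, control, measurement,
                         motion_model, measurement_likelihood):
    # 1. Prediction: move each particle with the (noisy) motion model.
    particles = [motion_model(p, control) for p in particles]
    # 2. Update: reweight each particle by how well it explains the measurement.
    weights = np.array([w * measurement_likelihood(measurement, p)
                        for p, w in zip(particles, weights)])
    weights /= weights.sum()
    # 3. Resampling: draw a new particle set in proportion to the weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    particles = [particles[i] for i in idx]
    weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```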
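The PID project boils down to one update equation. A minimal Python sketch (the project itself is C++), where `error` is the cross-track error reported by the simulator and the gains are tuning constants:

```python
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.prev_error = 0.0
        self.integral = 0.0

    def control(self, error, dt):
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        # Steering command: proportional + integral + derivative terms.
        return -(self.kp * error + self.ki * self.integral + self.kd * derivative)
```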

Term 3

  • Path planning is the brains of a self-driving car: it is how a vehicle decides how to get where it’s going, at both the macro and micro levels. It has three core components:

    • environmental prediction: predict what the surrounding vehicles will do next based on their past behavior.
    • behavioral planning: at each time step, the path planner must choose a maneuver to perform. This requires building a finite-state machine (FSM) to represent the different maneuvers the vehicle could choose, and a cost function that scores each maneuver (a minimal maneuver-selection sketch appears at the end of this section).
    • trajectory generation: build candidate trajectories for the vehicle to follow, using C++ and the Eigen linear-algebra library.
  • The project builds an end-to-end path planner to safely navigate a virtual highway shared with other cars.

  • Semantic segmentation identifies free space on the road at pixel-level granularity, which improves decision-making ability. This project builds a Fully Convolutional Network (FCN) to perform semantic segmentation of road image data (a toy FCN sketch also appears at the end of this section).
  • Design and implementation of the perception, planning, and control subsystems that enable a physical car ("Carla", Udacity's self-driving car) to drive around a test track using waypoint navigation, while avoiding obstacles and stopping at traffic lights. It requires integrating ROS nodes and Autoware modules with Carla’s software development environment.

  • Tags: Perception, Control, Planning, ROS

  • Implementation of the traffic light detector and classifier integrated into the self-driving car. It includes a TensorFlow model trained on three different datasets using the Object Detection API.
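
For the behavioral-planning component described above, here is a minimal sketch of cost-based maneuver selection over a finite-state machine. The state names, the shape of the cost functions, and the weights are illustrative assumptions, not the project's actual C++ implementation.

```python
# Allowed transitions of a toy highway-driving FSM (assumed state names).
FSM = {
    "keep_lane": ["keep_lane", "prepare_lane_change_left", "prepare_lane_change_right"],
    "prepare_lane_change_left": ["keep_lane", "prepare_lane_change_left", "lane_change_left"],
    "prepare_lane_change_right": ["keep_lane", "prepare_lane_change_right", "lane_change_right"],
    "lane_change_left": ["keep_lane"],
    "lane_change_right": ["keep_lane"],
}

def choose_maneuver(current_state, trajectories, cost_functions, weights):
    """trajectories maps each successor state to a candidate trajectory;
    cost_functions is a list of callables scoring a trajectory (hypothetical
    helpers, e.g. collision, efficiency, and comfort costs)."""
    best_state, best_cost = None, float("inf")
    for next_state in FSM[current_state]:
        trajectory = trajectories[next_state]
        cost = sum(w * f(trajectory) for w, f in zip(weights, cost_functions))
        if cost < best_cost:
            best_state, best_cost = next_state, cost
    return best_state
```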
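For the semantic-segmentation project, here is a toy Keras sketch of the FCN decoder idea: 1x1 convolutions over encoder feature maps, transposed convolutions for upsampling, and element-wise skip connections. FCNs for road segmentation are usually built on a pre-trained encoder (VGG-16 in the original FCN paper); the toy convolutional encoder, shapes, and layer sizes below are assumptions that merely stand in for those feature maps.

```python
from tensorflow.keras import layers, Model, Input

def build_toy_fcn(input_shape=(160, 576, 3), n_classes=2):
    inputs = Input(shape=input_shape)
    # Toy encoder producing feature maps at 1/8, 1/16, and 1/32 resolution.
    pool3 = layers.Conv2D(64, 3, strides=8, padding="same", activation="relu")(inputs)
    pool4 = layers.Conv2D(128, 3, strides=2, padding="same", activation="relu")(pool3)
    pool5 = layers.Conv2D(256, 3, strides=2, padding="same", activation="relu")(pool4)

    # 1x1 convolutions reduce each feature map to n_classes channels.
    score5 = layers.Conv2D(n_classes, 1)(pool5)
    score4 = layers.Conv2D(n_classes, 1)(pool4)
    score3 = layers.Conv2D(n_classes, 1)(pool3)

    # Upsample with transposed convolutions, adding skip connections, then
    # upsample back to full image resolution for per-pixel class logits.
    up4 = layers.Add()([layers.Conv2DTranspose(n_classes, 4, strides=2, padding="same")(score5), score4])
    up3 = layers.Add()([layers.Conv2DTranspose(n_classes, 4, strides=2, padding="same")(up4), score3])
    logits = layers.Conv2DTranspose(n_classes, 16, strides=8, padding="same")(up3)
    return Model(inputs, logits)
```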
