
This is an implementation of autonomous navigation and vehicle avoidance for the Duckietown Project.

Duckietown LFV using Pure Pursuit and Object Detection


  • The quick start assumes that you have followed all of the steps in the Duckiebook operational manual up to "Unit E-12: Lane following".
  • Clone the repository:
      $ cd <LOCAL-DUCKIETOWN-DIRECTORY>/catkin_ws/src
      $ git clone https://github.com/saryazdi/pp-navigation.git
    

    Start the Docker container:

      $ cd <LOCAL-DUCKIETOWN-DIRECTORY>
      $ docker-compose up
    

    Build the package within the container and source the workspace:

      [CONTAINER]$ catkin build --workspace catkin_ws
      [CONTAINER]$ source catkin_ws/devel/setup.bash
    

    Run the code within the container:

      [CONTAINER]$ roslaunch catkin_ws/src/pp-navigation/packages/pure_pursuit_lfv/launch/lfv_start.launch
    
  • With your computer and the duckiebot connected to the same network, run the following command on your computer to pull the image onto the duckiebot:

      $ docker -H <DUCKIEBOT_NAME>.local pull saryazdi/pp-navigation:v1-arm32v7
    

    Run the following command on your computer to start the lane-following-with-vehicles (LFV) code on your robot:

      $ dts duckiebot demo --demo_name HW_lfv_start --package_name pure_pursuit_lfv --duckiebot_name <DUCKIEBOT_NAME> --image saryazdi/pp-navigation:v1-arm32v7
    

    That's it! Your duckiebot should start moving within a minute or two.

  • Tuning the Parameters

  • The parameters might need to be re-tuned on different versions of the simulator (e.g. if the camera calibration, camera blur, or FPS changes) and on different duckiebots as well (due to different wheel/camera calibrations). In our experience, this becomes more important if you want to use high speeds. Luckily, there is a pipeline in place for changing the parameters while the code is running and seeing the effects right away.

    If running in simulation, first run the command below to get a bash shell in the container running your code (hint: if you do not know the container name, run docker ps to list the names of all currently running containers):

      $ docker exec -it <CONTAINER_NAME> /bin/bash
    

    If running on hardware, first run the commands below to get a bash shell in the container running your code and to source the workspace:

      $ docker -H <DUCKIEBOT_NAME>.local exec -it demo_HW_lfv_start /bin/bash
      
      [CONTAINER]$ source /code/catkin_ws/devel/setup.bash
    

    From here, you can view the names of all of the parameters related to pure pursuit and duckiebot detection by running:

      [CONTAINER]$ rosparam list | grep -E 'pure_pursuit|duckiebot_detection'
    

    And from that list, you can change the value of any parameter by running:

      [CONTAINER]$ rosparam set <PARAMETER_NAME> <PARAMETER_VALUE>
    

We use a modified version of the pure pursuit controller for lane following, which can be found in this repository. To learn more about the pure pursuit controller, check out "Implementation of the Pure Pursuit Path Tracking Algorithm" by R. Craig Coulter. We make the following modifications to pure pursuit:
  • We avoid computing the path by directly estimating our target point.
  • We offset the points on the ground-projected yellow lane to the right, and then take their average as an estimate of our target point.
  • If we are not seeing the yellow lane, we offset the points on the ground-projected white lane to the left and then take their average as an estimate of our target point.
  • The average direction of the line segments is also taken into account when computing the offset: for example, if the ground-projected yellow line segments are perpendicular to us (as when facing a turn), the target point is not just to the right of the average of the yellow points, but also shifted downwards (towards the robot).
  • In the visualization, the ground-projected and shifted line segments are shown: the cyan point is our robot's position, and the green point is the pure pursuit target (follow) point.
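
A minimal sketch of this target-point estimation, assuming the ground-projected line segments are already available as 2D points in the robot frame; the offsets and function names are illustrative, not the actual node's API:

```python
import numpy as np

# Illustrative offsets (in meters); the real values are tunable ROS parameters.
YELLOW_OFFSET = 0.15   # shift yellow-lane points to the right
WHITE_OFFSET = 0.15    # shift white-lane points to the left

def estimate_target_point(yellow_pts, white_pts):
    """Estimate the pure pursuit follow point from ground-projected lane points.

    yellow_pts, white_pts: (N, 2) arrays of [x, y] points in the robot frame
    (x forward, y to the left). Returns a 2D target point or None.
    """
    if len(yellow_pts) > 0:
        pts, offset_sign = np.asarray(yellow_pts), -1.0   # offset to the right
    elif len(white_pts) > 0:
        pts, offset_sign = np.asarray(white_pts), +1.0    # offset to the left
    else:
        return None  # no lane markings seen; keep the previous target

    # Average direction of the detected segments; the offset is applied along
    # its normal, so the shift adapts when the segments are not parallel to
    # the robot's heading (e.g. when facing a turn).
    diffs = np.diff(pts, axis=0) if len(pts) > 1 else np.array([[1.0, 0.0]])
    direction = diffs.mean(axis=0)
    direction /= (np.linalg.norm(direction) + 1e-8)
    normal = offset_sign * np.array([-direction[1], direction[0]])

    offset = YELLOW_OFFSET if offset_sign < 0 else WHITE_OFFSET
    return pts.mean(axis=0) + offset * normal
```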

  • Our robot detects whether it is approaching a left turn, approaching a right turn, or driving on a straight path. Turns are detected using statistics of the detected lines.
  • The duckiebot gradually speeds up on straight paths, while reducing the omega gain so that the robot corrects less when moving fast (to avoid jerky movement).
  • The duckiebot gradually slows down at turns, while increasing the omega gain (to make nice sharp turns).
  • A second-order polynomial is used for changing the velocity/omega gain: after a turn, the robot speeds up slowly, giving it enough time to correct its position before reaching high speed, while at turns it slows down more quickly to ensure it navigates the turn safely.
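
A minimal sketch of this kind of speed/gain scheduling, assuming a scalar "straightness" estimate in [0, 1] derived from the line statistics; the constants and names are illustrative, not the package's actual parameters:

```python
# Illustrative bounds; the real values are tunable ROS parameters.
V_MIN, V_MAX = 0.2, 0.6                     # linear velocity (m/s) at turns vs. straights
OMEGA_GAIN_MIN, OMEGA_GAIN_MAX = 1.0, 3.0   # steering gain used at straights vs. at turns

def schedule_speed_and_gain(straightness):
    """Map a straightness estimate in [0, 1] (0 = turn, 1 = straight)
    to a velocity and an omega gain.

    A quadratic (second-order) ramp is used so that the velocity grows slowly
    right after a turn and drops quickly when a turn is detected.
    """
    s = min(max(straightness, 0.0), 1.0)
    v = V_MIN + (V_MAX - V_MIN) * s ** 2                          # high speed on straights
    omega_gain = OMEGA_GAIN_MAX - (OMEGA_GAIN_MAX - OMEGA_GAIN_MIN) * s ** 2  # high gain at turns
    return v, omega_gain
```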

  • We modified the "lane_filter" package so that at each update step, it computes how much time has passed since the last update, and based on that we scale the variance of the gaussian that is used for smoothing the belief. This is especially useful if there is too much variance in the FPS: Not scaling the covariance when the FPS has a high variance would cause us to either smoothen the belief too much or too little.

We annotated our own real-world Duckietown object detection dataset and trained a deep learning model on it. However, since we also needed an object detector in simulation, we built a second object detector using image processing operators.

[Demos: Vehicle Avoidance (Behind) and Vehicle Avoidance (Head-on)]

Disclaimer: We have not been able to get the GPU to work with Docker yet, so we are currently using the image-processing vehicle detection code on hardware as well. This is temporary, to show that the pipeline works correctly; we can integrate our trained deep learning model on hardware once we figure out how to get the GPU working with Docker.
  1. Deep Learning

    • We annotated our own real-world dataset from Duckietown for detecting duckiebots, duckies and traffic cones. Information regarding our dataset can be found here.
    • For object detection with deep learning, we use the Faster RCNN architecture with a Feature Pyramid Network (FPN). Faster RCNN is a popular two-stage object detection pipeline: the first stage extracts a feature map from a backbone network and uses a region proposal network to find potential object regions in the image, and the second stage performs bounding box regression and object classification on those regions. The FPN lets us detect objects at various scales and sizes: we extract features at multiple resolutions and fuse them into a rich set of features before feeding them to the region proposal network, which makes the detector more effective at detecting small objects. We train the network on the object detection dataset described above.

      In this work, we use detectron2, a state-of-the-art object detection framework from Facebook AI Research. We train the model for 15000 iterations over the dataset with a learning rate of 0.015, using a ResNet-50 backbone. Qualitative results of the object detector and a video of it in action can be found in the demos section. A minimal training-configuration sketch is shown after this list.

  2. Image Processing

    • For object detection using image processing, we use HSV filtering followed by erosion and dilation; we then find bounding boxes around the resulting contours. Bounding boxes with a small area are filtered out. A minimal OpenCV sketch of this pipeline is shown after this list.
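
A minimal sketch of a detectron2 training setup consistent with the description above (Faster R-CNN with a ResNet-50 FPN backbone, 15000 iterations, learning rate 0.015); the dataset name, file paths, and class count are illustrative assumptions, not the exact values from our training code:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.data.datasets import register_coco_instances
from detectron2.engine import DefaultTrainer

# Register the annotated Duckietown dataset (name and paths are placeholders).
register_coco_instances(
    "duckietown_object_detection_train", {},
    "annotations/train.json", "images/train",
)

cfg = get_cfg()
# Start from the standard Faster R-CNN + ResNet-50 FPN config and weights.
cfg.merge_from_file(model_zoo.get_config_file("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.WEIGHTS = model_zoo.get_checkpoint_url("COCO-Detection/faster_rcnn_R_50_FPN_3x.yaml")
cfg.DATASETS.TRAIN = ("duckietown_object_detection_train",)
cfg.DATASETS.TEST = ()
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 3   # duckiebot, duckie, traffic cone (assumed)
cfg.SOLVER.BASE_LR = 0.015            # learning rate from the text above
cfg.SOLVER.MAX_ITER = 15000           # iterations from the text above

trainer = DefaultTrainer(cfg)
trainer.resume_or_load(resume=False)
trainer.train()
```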
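
A minimal OpenCV sketch of the image-processing detector; the HSV range, kernel size, and area threshold are illustrative placeholders, not the tuned values used in the package:

```python
import cv2
import numpy as np

# Illustrative HSV range and area threshold; the real values are tuned per color/object.
HSV_LOW = np.array([100, 100, 80])
HSV_HIGH = np.array([130, 255, 255])
MIN_AREA = 300  # filter out small, noisy detections (in pixels)

def detect_boxes(bgr_image):
    """Return bounding boxes (x, y, w, h) of blobs matching the HSV range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, HSV_LOW, HSV_HIGH)

    # Erosion removes small speckles; dilation restores the remaining blobs.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.erode(mask, kernel, iterations=1)
    mask = cv2.dilate(mask, kernel, iterations=2)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > MIN_AREA]
```
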
  • We modified the "ground_projection" package to subscribe to the topic with the obstacle bounding box coordinates, and then we ground project those coordinates and re-publish them.
  • If a vehicle directly in front of us gets closer than some distance threshold, we stop, and we stay still until the obstacle is no longer in front of us within that distance threshold. In the visualization, the gray box is the "safety zone": we stop if an obstacle is within that box.
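
A minimal sketch of this safety-zone check, assuming the ground-projected detections are available as (x, y) points in the robot frame; the zone dimensions are illustrative, not the tuned values:

```python
# Illustrative safety-zone dimensions (in meters), in the robot frame
# (x forward, y to the left).
ZONE_X_MIN, ZONE_X_MAX = 0.0, 0.4   # how far ahead the zone extends
ZONE_Y_HALF_WIDTH = 0.12            # half of the zone's lateral width

def obstacle_in_safety_zone(ground_points):
    """Return True if any ground-projected obstacle point lies inside the zone."""
    return any(
        ZONE_X_MIN <= x <= ZONE_X_MAX and abs(y) <= ZONE_Y_HALF_WIDTH
        for x, y in ground_points
    )

def compute_command(ground_points, v, omega):
    """Command zero velocity while an obstacle is inside the safety zone."""
    if obstacle_in_safety_zone(ground_points):
        return 0.0, 0.0
    return v, omega
```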
