1. Provide a Writeup / README that includes all the rubric points and how you addressed each one. You can submit your writeup as markdown or pdf.
You're reading it! Apologies in advance for my imperfect English.
1. Run the functions provided in the notebook on test images (first with the test data provided, next on data you have recorded). Add/modify functions to allow for color selection of obstacles and rock samples.
- To successfully identify rock samples, the `color_thresh()` function had to be modified to accept an interval (low and high thresholds) rather than only low threshold values. Please see the next section for a detailed description; a minimal sketch of the interval version is shown below.
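A minimal sketch of the interval-based threshold, assuming the same input/output convention as the original `color_thresh()` (the parameter names here are illustrative):

```python
import numpy as np

def color_thresh(img, rgb_thresh_low=(160, 160, 160), rgb_thresh_high=(255, 255, 255)):
    # Binary output: 1 where every channel lies inside (low, high], 0 elsewhere
    color_select = np.zeros_like(img[:, :, 0])
    within = (img[:, :, 0] > rgb_thresh_low[0]) & (img[:, :, 0] <= rgb_thresh_high[0]) \
           & (img[:, :, 1] > rgb_thresh_low[1]) & (img[:, :, 1] <= rgb_thresh_high[1]) \
           & (img[:, :, 2] > rgb_thresh_low[2]) & (img[:, :, 2] <= rgb_thresh_high[2])
    color_select[within] = 1
    return color_select
```

With the default high threshold of (255, 255, 255) this behaves like the original single-threshold version, so existing calls keep working.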
2. Populate the `process_image()` function with the appropriate analysis steps to map pixels identifying navigable terrain, obstacles and rock samples into a worldmap. Run `process_image()` on your test data using the `moviepy` functions provided to create video output of your result.
- The following steps were taken to process the rover vision image and update the Rover object attributes (a combined sketch of these steps is shown below, after the video list):
- To compensate for rover roll, the image is rotated around its center in the direction opposite to the roll angle (please see the `rotate_image` helper function). To get the roll and pitch angles I had to add corresponding attributes to the `Databucket` class;
- To get the rover map view, a perspective transform is applied;
- To get the navigable area, thresholding with (160, 160, 160) is applied to the warped image;
- To get the obstacle area, the previously obtained navigable area is inverted (using the numpy `logical_not` function);
- To get the sample location, interval thresholding is applied to the warped image with the interval low = (100, 100, 20), high = (255, 255, 30).
- There are 2 videos in the output folder:
- test_mapping.mp4 - based on the provided test data
- test_mapping2.mp4 - based on my recording
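A condensed sketch of these steps, assuming OpenCV is used for the rotation and the perspective transform and reusing the interval `color_thresh()` sketched earlier; the function names, the sign convention of the roll compensation, and the `source`/`destination` calibration points are illustrative:

```python
import cv2
import numpy as np

def rotate_image(img, angle_deg):
    # Rotate the image around its center by angle_deg degrees
    rows, cols = img.shape[:2]
    M = cv2.getRotationMatrix2D((cols / 2, rows / 2), angle_deg, 1.0)
    return cv2.warpAffine(img, M, (cols, rows))

def analyze_frame(img, roll, source, destination):
    # 1. Compensate roll: rotate in the direction opposite to the roll angle
    img = rotate_image(img, -roll)
    # 2. Perspective transform to a top-down (map) view
    M = cv2.getPerspectiveTransform(source, destination)
    warped = cv2.warpPerspective(img, M, (img.shape[1], img.shape[0]))
    # 3. Navigable terrain: pixels brighter than (160, 160, 160)
    navigable = color_thresh(warped, rgb_thresh_low=(160, 160, 160))
    # 4. Obstacles: inverse of the navigable area
    obstacles = np.logical_not(navigable).astype(np.uint8)
    # 5. Rock samples: interval threshold low = (100, 100, 20), high = (255, 255, 30)
    rocks = color_thresh(warped, rgb_thresh_low=(100, 100, 20),
                         rgb_thresh_high=(255, 255, 30))
    return navigable, obstacles, rocks
```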
1. Fill in the `perception_step()` (at the bottom of the `perception.py` script) and `decision_step()` (in `decision.py`) functions in the autonomous mapping scripts and an explanation is provided in the writeup of how and why these functions were modified as they were.
2. Launching in autonomous mode your rover can navigate and map autonomously. Explain your results and how you might improve them in your writeup.
Simulator settings used:
- Resolution: 1024x768
- Graphic quality: Good
- FPS: ~18
The rover is modeled as a state machine with the following states:
- Main states:
- Forward
- Stop
- Sample spotted
- Additional states:
- Stuck
- Circling
- If the rover is not stuck and not circling, check whether there is enough navigable terrain ahead and keep driving;
- If the rover is stopped but not stuck, spin in place until there is enough navigable terrain, then go;
- If distances to a sample are available (obtained in the perception step), set the navigation angle toward the sample and decrease velocity while approaching;
- "Stuck" here means that the ground velocity of the rover does not change within a specified time interval (defined as a number of measurement cycles). After the rover is marked as "stuck", a recovery operation is performed (described below); the stuck cycles and stuck set attributes are defined as part of the RoverState object;
- Circling is a form of being stuck in which the rover is unable to change direction and drives in circles indefinitely (this usually happens in wide open spaces). The rover is marked as "circling" when the steering angle does not move off -15 or +15 degrees within a specified time interval (defined as a number of measurement cycles); circling recovery is performed in this case (described below); the circling cycles and circling set attributes are defined as part of the RoverState object. A sketch of both checks is shown after this list;
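A rough sketch of these two checks; the attribute names (`stuck_cycles`, `stuck_set`, `circling_cycles`, `circling_set`, `prev_vel`), the guard conditions, and the cycle limits are illustrative rather than the exact ones used in the code:

```python
STUCK_CYCLES_LIMIT = 100     # measurement cycles with (almost) no velocity change
CIRCLING_CYCLES_LIMIT = 300  # measurement cycles at full steering lock

def update_stuck_and_circling(Rover):
    # Stuck: the rover is trying to move forward but its velocity is not changing
    if Rover.mode == 'forward' and abs(Rover.vel - Rover.prev_vel) < 0.01:
        Rover.stuck_cycles += 1
    else:
        Rover.stuck_cycles = 0
    Rover.stuck_set = Rover.stuck_cycles > STUCK_CYCLES_LIMIT

    # Circling: steering stays pinned at -15 or +15 degrees for too long
    if abs(Rover.steer) >= 15:
        Rover.circling_cycles += 1
    else:
        Rover.circling_cycles = 0
    Rover.circling_set = Rover.circling_cycles > CIRCLING_CYCLES_LIMIT

    Rover.prev_vel = Rover.vel
    return Rover
```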
- The following steps were taken in `perception_step()` to process the rover vision image and update the Rover object attributes:
- To compensate for rover roll, the image is rotated around its center in the direction opposite to the roll angle;
- To get the rover map view, a perspective transform is applied;
- To get the navigable area, thresholding with (160, 160, 160) is applied to the warped image;
- To get the obstacle area, the previously obtained navigable area is inverted;
- To get the sample location, interval thresholding is applied to the warped image with the interval low = (100, 100, 20), high = (255, 255, 30) (these thresholds definitely need tuning);
- An additional Rover attribute, `visited_map`, was created: a 20x20 array of ints. The visited map is essentially the world map at x10 scale; each cell stores the number of perception cycles the rover spent in that map sector. The plan was to use this map to calculate priority when choosing the steering direction. Currently only the cells adjacent to the rover's position are checked to calculate priority, which is obviously not enough.
- To "unstuck" rover, its throttle is set to negative number (driving backwards) for a specified time interval (defined as a number of measurment cycles);
- To "uncircle" rover, its steering angle is reset and mode is set to "stop" allowing it to pick new direction;
Possible improvements:
- Tune the recovery strategies;
- Direction prioritization is rather primitive: it only checks the cells adjacent to the current rover position. Some geofencing might be used here (see the sketch at the end of this writeup);
- The turn direction in "stop" mode is hardcoded; better direction prioritization logic is needed;
- Use camera image preprocessing to compensate for roll and pitch angles, instead of discarding images where these angles are beyond a threshold. I actually try to compensate for roll using the OpenCV `warpAffine` method, but I am not sure it is the optimal solution;
- Sample spotting and collection routines need improvement.
- It is possible to miss samples when the rover throttle is set to higher values;
- The rover can "forget" where a sample was if it goes out of sight for some reason; the last spotted sample coordinates need to be stored until the sample is collected.
- There are some screenshots located in the screenshot folder in this repository.
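As a sketch of how the direction prioritization mentioned above could be extended: instead of checking only the cells adjacent to the rover, each candidate steering angle could be weighted by how often the sector it points into has already been visited. The `visited_map` and `nav_angles` attributes follow the descriptions above; the look-ahead distance and the weighting scheme are illustrative:

```python
import numpy as np

def prioritized_steer_angle(Rover, look_ahead=30):
    # Prefer steering angles that lead toward less-visited map sectors
    angles = Rover.nav_angles          # navigable-pixel angles in radians
    if angles is None or len(angles) == 0:
        return 0.0
    # World coordinates of a point look_ahead meters away for each candidate angle
    yaw = np.deg2rad(Rover.yaw)
    tx = Rover.pos[0] + look_ahead * np.cos(yaw + angles)
    ty = Rover.pos[1] + look_ahead * np.sin(yaw + angles)
    # visited_map is 20x20, each cell covering a 10x10 patch of the world map
    cx = np.clip((tx / 10).astype(int), 0, 19)
    cy = np.clip((ty / 10).astype(int), 0, 19)
    visits = Rover.visited_map[cy, cx]
    # Weight each angle inversely to its visit count and take the weighted mean
    weights = 1.0 / (1.0 + visits)
    return float(np.clip(np.degrees(np.average(angles, weights=weights)), -15, 15))
```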