F1-RL

Final project for CSE 579: Intelligent Control Through Learning And Optimization, by Abhay Deshpande and Arnav Thareja.

This project uses deep reinforcement learning to race 1/10th-scale F1 cars in simulation. The full project report is attached here. The trained agents successfully learned to race competitively.

Interestingly, the agent was trained on COTA (left) and was able to transfer to new tracks, such as Interlagos (right).

[GIF: agent racing on COTA] [GIF: agent racing on Interlagos]

Installation Instructions

Clone the repo, initializing submodules:

git clone --recurse-submodules https://github.com/abhaybd/F1-RL.git
cd F1-RL

Create the conda environment:

conda env create -f env.yaml
conda activate f1-rl

Now install the f1tenth gym, which is included as a submodule:

cd f1tenth_gym
pip install -e .

Usage Instructions

Train

Training jobs are parameterized by a config file, an example of which is provided. You can train a model using:

python -m f1rl.train <config_path>

Training jobs are logged with wandb, so either set that up or run jobs offline, in which case TensorBoard can be used to view the logs.
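
For example, wandb's standard WANDB_MODE=offline environment variable keeps a run local. The config path below is a placeholder; point it at the example config provided in the repo:

# train offline; the config path is a placeholder for the provided example config
WANDB_MODE=offline python -m f1rl.train configs/example.yaml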

Eval

After training, you can evaluate a model using:

python -m f1rl.eval <run_path> <checkpoint_name>

To evaluate a model that was saved with wandb, run_path should be the wandb run path, in the form entity/project/run. To evaluate a local model, run_path should be the path to the run files, and the --local flag should be specified. Use the --help flag to see the full list of options.
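
For example (the entity, project, run ID, and checkpoint name below are placeholders):

# evaluate a checkpoint downloaded from a wandb run
python -m f1rl.eval my-entity/f1-rl/abc123 checkpoint_final
# evaluate a checkpoint from a local run directory
python -m f1rl.eval path/to/run checkpoint_final --local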

After saving an eval trajectory with the -s flag, you can render the trajectory as an image or video. To render it as an image, run:

python -m f1rl.render_recording <recording_path>

To render a video, run:

python -m f1rl.render_recording_vid <recording_path> <vid_save_path>
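
Putting these together, and assuming the -s flag takes the path to save the trajectory to, a full save-and-render workflow might look like this (run path, checkpoint, and file names are placeholders):

python -m f1rl.eval my-entity/f1-rl/abc123 checkpoint_final -s recording.pkl
python -m f1rl.render_recording recording.pkl
python -m f1rl.render_recording_vid recording.pkl rollout.mp4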
