
PPO

PyTorch implementation of Proximal Policy Optimization

[animation: live agents]

Usage

Example command line usage:

python main.py BreakoutNoFrameskip-v0 --num-workers 8 --render

This runs PPO with 8 parallel training environments and renders them to the screen. Run with -h for full usage information.
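
For context, the core of each PPO update is the clipped surrogate objective from the paper: L_CLIP = E_t[ min(r_t * A_t, clip(r_t, 1 - eps, 1 + eps) * A_t) ], where r_t is the probability ratio between the new and old policies. Below is a minimal PyTorch sketch of that loss; it is illustrative rather than this repository's actual code, and the function name and argument layout are assumptions:

    import torch

    def clipped_surrogate_loss(new_logp, old_logp, advantages, clip_eps=0.2):
        # probability ratio r_t = pi_new(a|s) / pi_old(a|s), computed from log-probs
        ratio = torch.exp(new_logp - old_logp)
        # unclipped and clipped surrogate terms
        surr1 = ratio * advantages
        surr2 = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
        # PPO maximizes the elementwise minimum; negate it to get a loss to minimize
        return -torch.min(surr1, surr2).mean()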

Performance

Results are comparable to those of the original PPO paper. Note that the horizontal axis here is measured in environment steps, whereas the graphs in the paper use frames, at 4 frames per step; 40M frames in the paper therefore corresponds to 10M environment steps here.

Training episode reward versus environment steps for BreakoutNoFrameskip-v3:

[plot: Breakout training curve]

References

Schulman et al., Proximal Policy Optimization Algorithms (arXiv:1707.06347)

OpenAI Baselines (github.com/openai/baselines)

This code uses some environment utilities such as SubprocVecEnv and VecFrameStack from OpenAI's Baselines.
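
A rough sketch of how those utilities fit together, assuming the standard Baselines package layout (the make_env helper here is illustrative, not this repository's code):

    import gym
    from baselines.common.vec_env.subproc_vec_env import SubprocVecEnv
    from baselines.common.vec_env.vec_frame_stack import VecFrameStack

    def make_env(env_id, seed):
        def _thunk():
            env = gym.make(env_id)
            env.seed(seed)
            return env
        return _thunk

    # 8 environments stepped in parallel worker processes, with the
    # last 4 observations stacked along the channel axis
    venv = SubprocVecEnv([make_env("BreakoutNoFrameskip-v0", i) for i in range(8)])
    venv = VecFrameStack(venv, nstack=4)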
