Deep-Q-Network-Agents

Final project for Deep Learning independent study at CUNY Lehman. A few DQN Agents that beat different types of games.

Discrete Mountain Car Agent

  • Beats the classic control game Mountain Car, which has a discrete action space; the general training setup, shared with the Acrobot and CartPole agents, is sketched below.
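
A minimal sketch of how these classic-control agents can be wired up, assuming the TF-Agents library (suggested by the QNetwork and ActionDiscretizeWrapper references elsewhere in this README); the environment ID, layer sizes, and hyperparameters are illustrative, and the same pattern carries over to the Acrobot and CartPole agents.

```python
# Minimal DQN setup for a discrete classic-control environment.
# Assumes TF-Agents and OpenAI Gym are installed; hyperparameters are illustrative.
import tensorflow as tf
from tf_agents.agents.dqn import dqn_agent
from tf_agents.environments import suite_gym, tf_py_environment
from tf_agents.networks import q_network
from tf_agents.utils import common

# One copy of the environment for training, one for evaluation.
train_env = tf_py_environment.TFPyEnvironment(suite_gym.load('MountainCar-v0'))
eval_env = tf_py_environment.TFPyEnvironment(suite_gym.load('MountainCar-v0'))

# The Q-network maps an observation to one Q-value per discrete action.
q_net = q_network.QNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    fc_layer_params=(100, 50))

agent = dqn_agent.DqnAgent(
    train_env.time_step_spec(),
    train_env.action_spec(),
    q_network=q_net,
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
    td_errors_loss_fn=common.element_wise_squared_loss,
    train_step_counter=tf.Variable(0))
agent.initialize()
```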

Acrobot Agent

  • Beats the classic control game Acrobot, which has a discrete action space.

CartPole Agent

  • Beats the classic control game CartPole, which has a discrete action space.

Pendulum Agent

  • Uses the ActionDiscretizeWrapper() wrapper to turn Pendulum's continuous action space into a discrete one so that the DQN agent can be applied (see the sketch below).
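
A sketch of that discretization step, assuming TF-Agents' ActionDiscretizeWrapper; the Gym environment ID and the number of action buckets are illustrative.

```python
# Discretize Pendulum's continuous torque action into a fixed number of buckets
# so that a DQN agent (which requires a discrete action space) can be used.
from tf_agents.environments import suite_gym, tf_py_environment, wrappers

env = suite_gym.load('Pendulum-v1')  # 'Pendulum-v0' on older Gym releases
env = wrappers.ActionDiscretizeWrapper(env, num_actions=5)  # 5 torque levels
train_env = tf_py_environment.TFPyEnvironment(env)

print(train_env.action_spec())  # now a bounded integer spec with 5 actions
```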

Space Invaders Agent

  • Built from a Space Invaders ROM downloaded from atarimania.com; the QNetwork is defined from the environment's observation space and action space, and the setup accounts for the frame skipping inherent to old Atari games (see the sketch below).
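
A sketch of the Atari setup, assuming TF-Agents' suite_atari loader with its standard preprocessing wrappers (which implement the frame skipping and frame stacking); the environment ID, preprocessing layer, and network shapes are illustrative, and the ROM must already be installed locally.

```python
# Space Invaders: load the Atari environment with the standard preprocessing
# wrappers (frame skipping and stacking), then size a convolutional Q-network
# from the environment's observation and action specs.
import tensorflow as tf
from tf_agents.environments import suite_atari, tf_py_environment
from tf_agents.networks import q_network

env = suite_atari.load(
    'SpaceInvadersNoFrameskip-v4',  # frame skipping is handled by the wrapper
    gym_env_wrappers=suite_atari.DEFAULT_ATARI_GYM_WRAPPERS_WITH_STACKING)
train_env = tf_py_environment.TFPyEnvironment(env)

q_net = q_network.QNetwork(
    train_env.observation_spec(),
    train_env.action_spec(),
    # Cast the stacked uint8 frames to floats in [0, 1] before the conv layers.
    preprocessing_layers=tf.keras.layers.Lambda(
        lambda frames: tf.cast(frames, tf.float32) / 255.0),
    conv_layer_params=((32, (8, 8), 4), (64, (4, 4), 2), (64, (3, 3), 1)),
    fc_layer_params=(512,))
```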

Info

  • DQN agents can only train on environments with a discrete action space, not a continuous one, so continuous action spaces must be converted first (as done for the Pendulum agent above).
  • The action space is the range of moves the player can make.
  • The observation space is everything about the environment that the agent takes into account when choosing its next action.
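
A quick way to check both spaces for a given environment, assuming TF-Agents' Gym suite; the environment ID is just an example.

```python
# Inspect an environment's action and observation specs before choosing an agent.
from tf_agents.environments import suite_gym

env = suite_gym.load('CartPole-v0')
print(env.action_spec())       # bounded integer spec -> discrete, DQN-compatible
print(env.observation_spec())  # float vector describing the cart and pole state
```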
