This script trains an agent with Twin Delayed Deep Deterministic Policy Gradient (TD3) to solve the BipedalWalker challenge, originally from OpenAI Gym.
To run this script, NumPy, the Gymnasium toolkit (the maintained successor to OpenAI Gym), and PyTorch need to be installed.
Each step through the Bipedal Walker environment takes the general form:
state, reward, terminated, truncated, info = env.step(action)
and the goal is for the agent to take actions that maximize the cumulative reward collected over the episode. In this specific environment, both the state space and the action space are continuous: the state space is 24-dimensional and the action space is 4-dimensional. The state consists of kinematic measurements of the walker's hull and joints (angles, velocities, and ground-contact flags) together with lidar rangefinder readings, while the action consists of motor torques applied to the four controllable joints.
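As a quick illustration of this interface, here is a minimal sketch that runs one episode with random actions. It assumes Gymnasium's standard `BipedalWalker-v3` environment (which requires the `gymnasium[box2d]` extra) and is not part of the training script itself.

```python
# Minimal interaction-loop sketch for BipedalWalker-v3 (illustration only).
import gymnasium as gym

env = gym.make("BipedalWalker-v3")
print(env.observation_space.shape)  # (24,) continuous state vector
print(env.action_space.shape)       # (4,) continuous motor torques in [-1, 1]

state, info = env.reset(seed=0)
episode_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()  # random policy, for illustration only
    state, reward, terminated, truncated, info = env.step(action)
    episode_reward += reward
    done = terminated or truncated

env.close()
print(f"Episode reward with random actions: {episode_reward:.1f}")
```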
Since the action space is continuous, a naive application of the vanilla policy gradient would likely perform poorly in practice. For continuous action spaces it is often preferable to use DDPG or TD3, among other deep reinforcement learning (DRL) algorithms, since both were designed specifically for continuous control.
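The sketch below outlines the core TD3 ingredients in PyTorch: twin critics with clipped double-Q targets, target-policy smoothing, and delayed actor/target updates. It is a simplified illustration, not the exact networks, hyperparameters, or function names used by this script.

```python
# Condensed TD3 update sketch (illustrative names, not the script's API).
import torch
import torch.nn as nn
import torch.nn.functional as F

STATE_DIM, ACTION_DIM, MAX_ACTION = 24, 4, 1.0

class Actor(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),
        )
    def forward(self, state):
        return MAX_ACTION * self.net(state)

class Critic(nn.Module):
    """Q(s, a): the state and action are concatenated at the input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )
    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=1))

def td3_update(batch, actor, actor_target, critic1, critic2,
               critic1_target, critic2_target, actor_opt, critic_opt,
               step, gamma=0.99, tau=0.005, policy_noise=0.2,
               noise_clip=0.5, policy_delay=2):
    state, action, reward, next_state, done = batch

    with torch.no_grad():
        # Target-policy smoothing: add clipped noise to the target action.
        noise = (torch.randn_like(action) * policy_noise).clamp(-noise_clip, noise_clip)
        next_action = (actor_target(next_state) + noise).clamp(-MAX_ACTION, MAX_ACTION)
        # Clipped double-Q: take the minimum of the two target critics.
        target_q = torch.min(critic1_target(next_state, next_action),
                             critic2_target(next_state, next_action))
        target_q = reward + gamma * (1.0 - done) * target_q

    # Both critics regress toward the same smoothed target.
    critic_loss = (F.mse_loss(critic1(state, action), target_q)
                   + F.mse_loss(critic2(state, action), target_q))
    critic_opt.zero_grad()
    critic_loss.backward()
    critic_opt.step()

    # Delayed policy update: refresh the actor and targets every few critic steps.
    if step % policy_delay == 0:
        actor_loss = -critic1(state, actor(state)).mean()
        actor_opt.zero_grad()
        actor_loss.backward()
        actor_opt.step()
        # Polyak averaging of target networks.
        for target, online in ((actor_target, actor), (critic1_target, critic1),
                               (critic2_target, critic2)):
            for tp, p in zip(target.parameters(), online.parameters()):
                tp.data.mul_(1 - tau).add_(tau * p.data)
```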
To learn more about how the agent receives rewards, see here.
A detailed discussion of the TD3 algorithm with proper equation typesetting is provided in the supplemental material here.
Solving the BipedalWalker challenge requires training the agent to walk all the way to the end of the terrain without falling over, while using as little motor torque as possible. The agent's ability to do this was quite abysmal in the beginning.
After training overnight on a GPU, the agent could complete the challenge gracefully!
Below, the average performance over 64 trial runs is shown. The shaded region represents one standard deviation of the evaluation reward across trials.
Additionally, the relative frequency of individual episode reward values across all trials is shown below as it evolves over the course of training.
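For reference, a plot like this could be produced with a sketch along these lines. It assumes a NumPy array of evaluation rewards with shape `(num_trials, num_evals)` saved as a hypothetical `eval_rewards.npy`, and uses matplotlib, which is an extra dependency not required by the training script.

```python
# Sketch of the mean-and-band plot (hypothetical file name and array layout).
import numpy as np
import matplotlib.pyplot as plt

eval_rewards = np.load("eval_rewards.npy")   # shape: (num_trials, num_evals)
mean = eval_rewards.mean(axis=0)
std = eval_rewards.std(axis=0)
x = np.arange(mean.shape[0])

plt.plot(x, mean, label="mean over 64 trials")
plt.fill_between(x, mean - std, mean + std, alpha=0.3, label="+/- 1 std")
plt.xlabel("evaluation episode")
plt.ylabel("episode reward")
plt.legend()
plt.show()
```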
Bipedal-Walker-Histograms.mp4
- Continuous Control with Deep Reinforcement Learning (DDPG) - Lillicrap et al.
- Addressing Function Approximation Error in Actor-Critic Methods (TD3) - Fujimoto et al.
All files in the repository are under the MIT license.