
The performance of the SAC algorithm in this project is significantly worse than the performance of SAC in Stable Baselines3 #26

Open
ynulihao opened this issue Apr 19, 2022 · 4 comments


@ynulihao

The performance of the SAC algorithm in this project is significantly worse than the performance of SAC in Stable Baselines3. Training the slide cabinet subtask in the kitchen environment with this project's SAC implementation fails to converge, and the loss values grow roughly exponentially. I have carefully compared the project's code against the SAC implementation in Stable Baselines3 and found no explanation for this anomaly.
https://github.com/clvrai/spirl/blob/master/spirl/rl/agents/ac_agent.py
https://github.com/DLR-RM/stable-baselines3/blob/master/stable_baselines3/sac/sac.py

@kpertsch
Collaborator

Hi! Thanks for raising this issue!
I have not run this comparison before, so I can't tell you exactly why you are observing such different outcomes. Generally, small implementation details can have outsized effects for RL algorithms. At a glance it seems, e.g., that Stable Baselines uses observation normalization by default while we do not. Similarly, other small differences might cause the performance gap, e.g. in the architecture (network size, choice of normalization, ...) or the learning algorithm (target entropy value, multiple experience collection workers vs a single worker, ...).
When I implemented SAC for this repo I verified that we can roughly match the performance of other SAC repos on a few standard OpenAI gym envs, but it is possible that other implementation choices work better on the sparser kitchen tasks you mentioned.

@ynulihao
Author

Thanks for your reply. I experimented on the KitchenAllTasksV0 environment using the SAC algorithm from your project; the training logs are in WandB. One strange phenomenon is that q_target, policy_loss, and critic_loss all increase exponentially, which I have not seen in other SAC implementations. What could be the reason for this?

@kpertsch
Collaborator

kpertsch commented May 2, 2022

I am not sure why this is happening. Two things you could check:

(1) Did you use Q-target clipping during training? This clipping can stabilize training by avoiding very large Q-errors (you can use the existing flag clip_q_target = True).
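In case it helps, the clipping amounts to clamping the bootstrapped TD target before the critic regression; a minimal PyTorch sketch (the clip range and names are illustrative, not the exact ones used in this repo):

```python
import torch
import torch.nn.functional as F

def critic_loss_with_clipped_target(q_value, reward, not_done, next_q, alpha,
                                    next_log_prob, gamma=0.99, clip=100.0):
    """SAC critic loss with the TD target clamped to a fixed range.

    Bounding the regression target keeps the Q-function (and hence the
    policy loss, which depends on Q) from blowing up when estimates diverge.
    """
    with torch.no_grad():
        # Standard soft Q target: r + gamma * (Q'(s', a') - alpha * log pi(a'|s'))
        target = reward + not_done * gamma * (next_q - alpha * next_log_prob)
        # Clamp the target to a fixed range (the stabilizing step).
        target = torch.clamp(target, -clip, clip)
    return F.mse_loss(q_value, target)
```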

(2) From the WandB plots it seems that the alpha value is increasing a lot (which can explain why the Q-values grow too). You could try running with a fixed alpha value instead; you would need to sweep a couple of different values to find one that balances the reward and entropy objectives (you can use the existing flag fixed_alpha). I have sometimes found fixed alpha values to work better in the kitchen environment (maybe because it is sparse?).
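For context, the difference between a learned and a fixed temperature in a generic SAC update looks roughly like this (a sketch with common defaults, not the exact code behind the fixed_alpha flag):

```python
import torch

# Sketch of temperature (alpha) handling in a generic SAC agent.
action_dim = 9                       # e.g. the kitchen env's 9-DoF action space
target_entropy = -float(action_dim)  # common heuristic: -|A|
log_alpha = torch.zeros(1, requires_grad=True)
alpha_opt = torch.optim.Adam([log_alpha], lr=3e-4)

def update_alpha(log_prob, fixed_alpha=None):
    """Return the current alpha; only optimize it when it is not fixed."""
    if fixed_alpha is not None:
        # Fixed temperature: no optimization, alpha stays constant.
        return torch.tensor(fixed_alpha)
    # Learned temperature: push the policy entropy toward the target entropy.
    alpha_loss = -(log_alpha * (log_prob + target_entropy).detach()).mean()
    alpha_opt.zero_grad()
    alpha_loss.backward()
    alpha_opt.step()
    return log_alpha.exp().detach()
```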

@gunnxx

gunnxx commented Aug 16, 2022

Hi @ynulihao, may I know the code you are running to get the Stable Baselines3 baseline?
