- While going through older issues I found a comment (by Antoine, I think) suggesting that one would train the prior for 1M steps; I assume that was after training the RAVE model itself for 2M steps?
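  To make the two-stage schedule being asked about concrete, here is a minimal, purely illustrative PyTorch sketch (not the actual RAVE training code): stage 1 trains the autoencoder alone, stage 2 freezes it and trains a prior over its latents. All module sizes, losses, step counts, and the toy data are assumptions standing in for the real pipeline.

  ```python
  # Illustrative two-stage schedule: autoencoder first, prior second.
  # Not RAVE's real code; sizes, losses, and step counts are placeholders.
  import torch
  import torch.nn as nn

  LATENT_DIM, FRAME_LEN = 16, 1024

  # Toy stand-ins for the real encoder/decoder/prior networks.
  encoder = nn.Sequential(nn.Linear(FRAME_LEN, 256), nn.ReLU(), nn.Linear(256, LATENT_DIM))
  decoder = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(), nn.Linear(256, FRAME_LEN))
  prior = nn.GRU(LATENT_DIM, 64, batch_first=True)
  prior_head = nn.Linear(64, LATENT_DIM)

  def fake_batch(batch=8, frames=32):
      # Placeholder for real audio frames: (batch, frames, samples_per_frame).
      return torch.randn(batch, frames, FRAME_LEN)

  # ---- Stage 1: train the autoencoder alone (the "2M steps" phase). ----
  ae_opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-4)
  for step in range(1000):                      # stand-in for ~2,000,000 steps
      x = fake_batch()
      recon = decoder(encoder(x))
      loss = nn.functional.mse_loss(recon, x)   # RAVE itself uses spectral/adversarial losses
      ae_opt.zero_grad(); loss.backward(); ae_opt.step()

  # ---- Stage 2: freeze the autoencoder, train the prior on its latents. ----
  for p in list(encoder.parameters()) + list(decoder.parameters()):
      p.requires_grad_(False)

  prior_opt = torch.optim.Adam(list(prior.parameters()) + list(prior_head.parameters()), lr=1e-4)
  for step in range(1000):                      # stand-in for ~1,000,000 steps
      with torch.no_grad():
          z = encoder(fake_batch())             # latents from the frozen encoder
      pred = prior_head(prior(z[:, :-1])[0])    # predict the next latent from the past
      loss = nn.functional.mse_loss(pred, z[:, 1:])
      prior_opt.zero_grad(); loss.backward(); prior_opt.step()
  ```

  The point of the sketch is only the ordering: the prior is trained on latents produced by an already-trained (and frozen) autoencoder, which is why its step count is quoted separately from the autoencoder's.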
- When training both models in order to combine them, should they be trained for roughly the same number of steps/epochs, or should one of them be trained longer?