[PyTorch] Debug amax reductions in eval mode and async amax reductions #728
This PR fixes two bugs related to amax reductions:

Conflating `no_grad` and evaluation mode seems to be a common mistake in PyTorch. This PR fixes it by checking whether a module is in training mode in its backward pass, similar to how we already do in the forward pass.
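As a rough illustration of the distinction (a hypothetical toy, not TE's actual code; names like `_ScaleWithAmax`, `ToyFP8Module`, and `amax_history` are made up), the sketch below gates amax bookkeeping on `module.training` in both the forward and backward passes, so an eval-mode pass leaves the recorded statistics untouched even though gradients are still flowing:

```python
import torch


class _ScaleWithAmax(torch.autograd.Function):
    """Toy stand-in for a TE op that records amax statistics."""

    @staticmethod
    def forward(ctx, inp, module):
        ctx.module = module
        if module.training:  # gate the stats on training mode, not grad mode
            module.amax_history.append(inp.abs().max().item())
        return inp * 2.0

    @staticmethod
    def backward(ctx, grad_out):
        module = ctx.module
        if module.training:  # same training-mode check in the backward pass
            module.amax_history.append(grad_out.abs().max().item())
        return grad_out * 2.0, None  # None for the non-tensor `module` arg


class ToyFP8Module(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.amax_history = []

    def forward(self, inp):
        return _ScaleWithAmax.apply(inp, self)


m = ToyFP8Module()
x = torch.randn(4, requires_grad=True)

m.train()
m(x).sum().backward()
print(len(m.amax_history))  # 2: forward and backward both record an amax

m.eval()  # eval mode, but autograd is still enabled
m(x).sum().backward()
print(len(m.amax_history))  # still 2: eval mode skips the amax updates
```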
I've attempted to keep this PR small since #575 touches a lot of the amax reduction logic. In the future, I think it would be worthwhile to rework the async amax reductions, since they currently don't do much overlapping (the reduction is launched when entering `fp8_autocast` and synchronized before the first TE module's `forward`).
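For reference, a rough sketch of that launch/sync pattern (hypothetical code, not TE's implementation; the single-process `gloo` setup exists only so the snippet runs standalone):

```python
import os
import torch
import torch.distributed as dist

# Minimal single-process setup so the sketch runs standalone (not how TE
# initializes its process groups).
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group("gloo", rank=0, world_size=1)

amaxes = torch.rand(8)  # stand-in for a global amax buffer

# Launched when entering fp8_autocast:
handle = dist.all_reduce(amaxes, op=dist.ReduceOp.MAX, async_op=True)

# ...ideally compute would overlap with the reduction here...

# Synchronized before the first TE module's forward:
handle.wait()

dist.destroy_process_group()
```

Since the wait happens almost immediately after the launch, very little work actually overlaps with the reduction, which is what a rework would aim to improve.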