Optionally use flash-attn's CE loss for metrics #3394

Merged · 16 commits merged into dev from saaketh/fa_ce_loss on Jun 17, 2024

Conversation

@snarayan21 (Contributor) commented on Jun 11, 2024

What does this PR do?

Resubmission of #3214 -- using flash-attn's CE loss results in lower peak reserved memory usage and higher throughput. We are not adding flash-attn as an optional dependency to Composer, since doing so makes installs and correct builds messy and much slower.
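
Roughly, the gating pattern looks like the sketch below (illustrative only, not the exact diff in this PR; the flash_attn.losses.cross_entropy import path comes from the flash-attn repository, and the helper name is made up):

```python
# Sketch: prefer flash-attn's fused CE loss when it is installed and a GPU is
# available; otherwise fall back to torch's CrossEntropyLoss for metrics.
import torch

try:
    # flash-attn stays optional, so a missing install must not break imports.
    from flash_attn.losses.cross_entropy import CrossEntropyLoss as FlashCrossEntropyLoss
    _FLASH_CE_AVAILABLE = True
except ImportError:
    _FLASH_CE_AVAILABLE = False


def make_metric_ce_loss(ignore_index: int = -100) -> torch.nn.Module:
    """Return a CE loss module, using flash-attn's fused kernel when possible."""
    if _FLASH_CE_AVAILABLE and torch.cuda.is_available():
        return FlashCrossEntropyLoss(ignore_index=ignore_index)
    return torch.nn.CrossEntropyLoss(ignore_index=ignore_index)
```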

Also fixed a small typo where the Python 3.11 CPU tests were accidentally using the GPU image with flash-attn installed.

Also modified the DeviceGPU class so that it instantiates a gloo backend for CPU tensors, if gloo is available. This handles cases where users want to perform distributed operations on CPU tensors even when they are training on GPUs.
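
A minimal sketch of that backend setup (assumptions: a recent torch that accepts a per-device backend string, and rendezvous environment variables already set; this is not the Composer implementation):

```python
# Sketch: on GPU runs, use NCCL for CUDA tensors and, if gloo is compiled into
# this torch build, also register gloo so collectives on CPU tensors work.
import torch.distributed as dist


def init_gpu_process_group() -> None:
    if dist.is_gloo_available():
        # Per-device backend mapping: CUDA tensors -> NCCL, CPU tensors -> gloo.
        dist.init_process_group(backend='cuda:nccl,cpu:gloo')
    else:
        dist.init_process_group(backend='nccl')
```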

Manual tests:

  • Run started on dev (13b-dense-fsdp-fullshard-hsdp-adam-shardedckpt-start-5PtEdK), resumed with this branch (13b-dense-fsdp-fullshard-hsdp-adam-shardedckpt-resume-E5SieL)
  • Run started on this branch (13b-dense-fsdp-fullshard-hsdp-adam-shardedckpt-start-0g8uD4), resumed with dev branch (13b-dense-fsdp-fullshard-hsdp-adam-shardedckpt-resume-TSGoUC)

4th time's the charm :0

Run with torch CE loss (green): tiny-sp-dtms1-32h-wCFWfa
Run with FA CE loss (tan): tiny-sp-dtms1-32h-jOfIPL

[Screenshots from 2024-06-11 comparing the torch CE loss and FA CE loss runs.]

What issue(s) does this change relate to?

Before submitting

  • Have you read the contributor guidelines?
  • Is this change a documentation change or typo fix? If so, skip the rest of this checklist.
  • Was this change discussed/approved in a GitHub issue first? It is much more likely to be merged if so.
  • Did you update any related docs and document your change?
  • Did you update any related tests and add any new tests related to your change? (see testing)
  • Did you run the tests locally to make sure they pass?
  • Did you run pre-commit on your change? (see the pre-commit section of prerequisites)

@snarayan21 requested a review from a team as a code owner on June 11, 2024 at 22:56
@dakinggg (Contributor) left a comment:

Holding review until after freeze

@mvpatel2000 (Contributor) left a comment:

Can we offer a flag to gate this as well? IIRC there are occasionally numerics issues for long sequences...

@ShashankMosaicML do you remember?

@ShashankMosaicML (Contributor) commented:

> Can we offer a flag to gate this as well? IIRC there are occasionally numerics issues for long sequences...
>
> @ShashankMosaicML do you remember?

Flash attention fixed the long-sequence issue in this commit: Dao-AILab/flash-attention@c79de85

@dakinggg (Contributor) left a comment:

will review once CI passes

@snarayan21 (Contributor, Author) commented:

Seeing the error below on CPU tests:

>       assert input.is_cuda and target.is_cuda, "Only support CUDA tensors"
E       AssertionError: Only support CUDA tensors

So I'm going to add a check for torch.cuda.is_available().
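
For context, the kind of guard being considered might look like this (hypothetical helper name, not the actual diff):

```python
import torch


def _can_use_flash_ce(logits: torch.Tensor, targets: torch.Tensor) -> bool:
    # flash-attn's CE kernel asserts its inputs are CUDA tensors, so only route
    # metrics through it when CUDA is available and the tensors live on the GPU.
    return torch.cuda.is_available() and logits.is_cuda and targets.is_cuda
```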

@snarayan21 (Contributor, Author) commented:

Just kidding -- the Python 3.11 CPU tests were using the CUDA image by accident, which caused this problem. It was only the 3.11 tests, too. Fixed that in this PR as well.

@snarayan21 requested a review from a team as a code owner on June 14, 2024 at 23:05
@dakinggg (Contributor) left a comment:

lgtm

@dakinggg (Contributor) left a comment:

Please add unit tests for this before merging.

@mvpatel2000 (Contributor) left a comment:

holding till offline discussion

@mvpatel2000 (Contributor) left a comment:

LGTM. Noting that changing the GPU backend to provide both NCCL and gloo mirrors PyTorch's default behavior, which initializes both a GPU and a CPU dist backend.

@snarayan21 (Contributor, Author) commented:

Added manual test names to PR description

@snarayan21 merged commit 2cf9262 into dev on Jun 17, 2024
17 checks passed
@snarayan21 deleted the saaketh/fa_ce_loss branch on June 17, 2024 at 19:09
snarayan21 added commits that referenced this pull request on Jun 18, 2024, including this revert:
* Revert "Optionally use `flash-attn`'s CE loss for metrics (#3394)"

This reverts commit 2cf9262.

revert dat boi

* remove

* slamm
@snarayan21 restored the saaketh/fa_ce_loss branch on June 19, 2024 at 04:31
mvpatel2000 added a commit to mvpatel2000/composer that referenced this pull request Jul 21, 2024
* yo

* slam

* cuda

* cuda checks

* test

* fix_test

* gloo

* gloo

* lint

* lint

---------

Co-authored-by: Daniel King <[email protected]>
Co-authored-by: Mihir Patel <[email protected]>
mvpatel2000 pushed commits to mvpatel2000/composer that referenced this pull request on Jul 21, 2024, including the same revert commit as above.
mvpatel2000 added a commit that referenced this pull request on Jul 21, 2024, with the same commit message and co-authors as above.
mvpatel2000 pushed commits that referenced this pull request on Jul 21, 2024, again including the revert commit.
Labels: None yet
Projects: None yet
4 participants