
What's going on with T5 x torch.compile? #33221

Closed
2 of 4 tasks
shivance opened this issue Aug 30, 2024 · 6 comments
Labels
bug · Compilation (Issues related to torchdynamo and torchinductor) · WIP (Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress)

Comments

@shivance

System Info

Hi Team,
First of all huge thanks for all the great work you are doing.

Recently, I was benchmarking inference for the T5 model on AWS EC2 (a G6E machine with an L40 GPU) for batch sizes of 1, 2, and 4.

I have heard tons about torch.compile and wanted to try it out to see if it reduces inference time. Surprisingly, it did the opposite. On average, I saw an increase of ~1 second in inference time over a sample of 50 inputs, each 2,200–3,000 characters long (around 2,550 characters on average).

I had a chat about this with a friend, who told me that T5 is not a very suitable architecture for compilation yet and that there are lots of graph breaks. On his advice, I decided to open an issue here.

In my experience, T5 is still a very good model, and I would like to see it work seamlessly with torch.compile. Given the chance, I am ready to put my own time into this and contribute. Let me know what you think.

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

AWS EC2 (G6E machine with L40 GPU), batch sizes of 1, 2, and 4.
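
No script was attached, so for context here is a rough sketch of the kind of benchmark described above. It is not the author's exact code; the checkpoint, prompt, and generation length are assumptions.

```python
# Rough sketch of the benchmark described above, not the author's exact script;
# the checkpoint, prompt, and generation length are assumptions.
import time

import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

device = "cuda"
tokenizer = T5Tokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base").to(device)

# Roughly 2,500 characters of input, matching the sample lengths mentioned above.
text = "summarize: " + "lorem ipsum dolor sit amet " * 90
inputs = tokenizer(text, return_tensors="pt").to(device)

def timed_generate(label):
    torch.cuda.synchronize()
    start = time.perf_counter()
    model.generate(**inputs, max_new_tokens=128)
    torch.cuda.synchronize()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

timed_generate("eager")

# Compile the forward pass and measure again; the first compiled call
# includes compilation time, so a warm call is timed separately.
model.forward = torch.compile(model.forward)
timed_generate("compiled (cold)")
timed_generate("compiled (warm)")
```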

Expected behavior

Inference time should decrease after compilation.

@shivance shivance added the bug label Aug 30, 2024
@LysandreJik
Member

Thanks for the issue and feature request @shivance!

cc @ArthurZucker regarding supporting torch.compile for T5.

@ArthurZucker
Collaborator

Hey! T5 does not support the new "cache_position" yet, so generation will probably be slow, as it has to deal with dynamic shapes.

T5 is not a very suitable architecture for compilation

I completely disagree with that! 😉 We just did not have time to ship compile support for this model. Though #32617 should give you a lead, and #31166 as well.
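
For reference, the pattern the linked PRs move toward is a static KV cache plus a compiled forward pass, so shapes stay fixed across decoding steps and torch.compile can reuse one trace. Below is a minimal sketch of that pattern using a decoder-only model that already supported it at the time; the checkpoint name and exact kwargs are assumptions, not taken from this thread.

```python
# Minimal sketch of the static-cache + torch.compile pattern; the checkpoint
# is a stand-in for any model with static-cache support, not T5 (which did
# not yet support it at the time of this comment).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # assumed stand-in checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Compile the decoding forward pass; fullgraph=True surfaces any graph breaks.
model.forward = torch.compile(model.forward, mode="reduce-overhead", fullgraph=True)

inputs = tokenizer("The T5 architecture", return_tensors="pt").to("cuda")
# cache_implementation="static" pre-allocates the KV cache to a fixed size,
# which avoids the dynamic shapes mentioned above.
out = model.generate(**inputs, max_new_tokens=32, cache_implementation="static")
print(tokenizer.decode(out[0], skip_special_tokens=True))
```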

@ArthurZucker ArthurZucker added the Compilation Issues related to torchdynamo and torchinductor label Sep 6, 2024
@zucchini-nlp
Member

T5 and BART are planned to be compile-compatible in the next batch of models, as they are encoder-decoder models. I will work on it next month if there's no PR by then.


github-actions bot commented Oct 1, 2024

This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.

Please note that issues that do not follow the contributing guidelines are likely to be ignored.

@zucchini-nlp zucchini-nlp added the WIP Label your PR/Issue with WIP for some long outstanding Issues/PRs that are work in progress label Oct 1, 2024
@zucchini-nlp
Member

The T5 model is compile-compatible now, so closing as resolved.

@alexcoca

@shivance did you by any chance benchmark the new T5 code? Any perf improvements?
