Add Zamba #30950
Conversation
Just for future reference: we measured the latency of a single mamba layer in Zamba and compared it to that of a single layer in Mamba. The two implementations are very similar (Zamba adds some reshapings, which should be no-ops, and a concatenation). We found that the mamba layer in Zamba has the same speed in a single forward pass, but is slower during generation. More specifically, we instantiated these two (random) models:
(here n_mamba_heads=1 corresponds to the original Mamba architecture), and used this code for generation:
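The exact benchmark snippet isn't reproduced here; as a rough illustration only, a timing comparison along these lines could look like the sketch below. All config sizes, token counts, and the two-model setup are assumptions for the sketch, not the original script.

```python
# Rough sketch of the latency comparison described above; NOT the original
# benchmark script. Config sizes and token counts are illustrative only.
import time

import torch
from transformers import MambaConfig, MambaForCausalLM, ZambaConfig, ZambaForCausalLM

device = "cuda" if torch.cuda.is_available() else "cpu"

# Small randomly initialized models; n_mamba_heads=1 mirrors the original Mamba layout.
mamba = MambaForCausalLM(MambaConfig(hidden_size=512, num_hidden_layers=8)).to(device)
zamba = ZambaForCausalLM(ZambaConfig(hidden_size=512, num_hidden_layers=8, n_mamba_heads=1)).to(device)

input_ids = torch.randint(0, 1000, (1, 32), device=device)

for name, model in [("mamba", mamba), ("zamba", zamba)]:
    model.eval()
    start = time.time()
    with torch.no_grad():
        # A single call covers one prefill forward pass plus token-by-token decoding.
        model.generate(input_ids, max_new_tokens=256, do_sample=False)
    print(f"{name}: {time.time() - start:.2f}s")
```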
We found that the total time spent computing this line is 8.1s, and for this line it is 6.3s.
cc @younesbelkada!
Thanks a lot for this PR! I left some minor suggestions in the modeling code for general improvements.
Can you also make sure to rebase with main and check that `make fixup` passes locally? Let me know if you need any assistance!
I tried running a basic training script, with and without gradient accumulation, on this fork and am getting this error. The from_tf argument is not well described in the docstrings. Not sure what is not working here.
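For context, a minimal training setup along those lines could look like the sketch below. The checkpoint id, dataset, and hyperparameters are placeholders chosen for illustration, not the exact script that produced the error.

```python
# Hypothetical minimal reproducer for training with gradient accumulation.
# Checkpoint, dataset, and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "Zyphra/Zamba-7B-v1"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Tiny text dataset, tokenized and filtered to non-empty sequences.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
dataset = dataset.map(
    lambda x: tokenizer(x["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
).filter(lambda x: len(x["input_ids"]) > 1)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="zamba-sft",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,  # the setting being exercised
        max_steps=10,
    ),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```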
Thanks for spotting this. We fixed the issue in the most recent push. Please try again and let us know if you still encounter issues. We are adding more docstrings to explain various parts of the architecture. We will add the description below for
Thank you for the thorough review! We ran
It looks like We pushed the fixes done by
Tried training again and am now getting this: /trainer.py", line 3250, in training_step
Hey there! We've recently been successfully using huggingface/alignment-handbook@main...Zyphra:alignment-handbook:zamba-instruct to do SFT of Zamba. Does your setup meaningfully differ from this? I can't seem to reproduce; can you provide us a reproducer?
cc @younesbelkada, should I review this or do you want to do another pass? 🤗
I am trying to extend the max context length; I also tried at 16k. I tried running in your fork of the alignment handbook and saw the same results.
Thanks very much for your great work on this! I left a few minor improvements to address and some file changes to revert. Can you make sure to make our CI happy (by making sure the `make fixup` command passes and that the tests under `pytest tests/models/zamba/` pass)? Let me know if you need any help or have any questions.
Thank you for your help, @younesbelkada! We believe we have addressed most of the concerns you raised; we still have two pending questions:
all the lines are related to files outside of our PR, so we did not change those files, although I do see that the CircleCI tests performed in this PR still fail. Please let us know if further action is needed here and what steps we'd need to take. Thank you so much for your time and help!
Thanks a lot for iterating! I left one minor suggestion; can you also merge your branch with the upstream main branch? This should make the CI happy and the tests should be green.
Force-pushed from 0ec2417 to 18e8372.
Thank you so much for your guidance. We tried to rebase our PR and ran into an error related to model generation. It looks like the rebased For reference, these are the calls performed from
and using the fork before rebasing:
Could you please let us know how we can force Zamba's cache to be Thanks very much!
Hey! Super late in coming back to you; you should be able to force it by setting
Hello @ArthurZucker, thank you for the suggestion! We tried adding We are happy to either keep this fix or to use the one you suggested, in which case we would appreciate it if you could say a few more words on how to implement it. Meanwhile, we rebased our fork. All the local tests with
I see most of the CircleCI tests for this PR are still failing; please let me know if more needs to be done to fix those. Thank you so much!
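On the cache question above: for illustration only, one way to force the hybrid cache during generation is to construct it explicitly and pass it to `generate`. The cache class name, its constructor signature, and the checkpoint id below are assumptions based on the Zamba modeling code, not something confirmed in this thread.

```python
# Illustrative sketch: explicitly build Zamba's hybrid cache and pass it to
# generate(), so the default DynamicCache path is bypassed. The cache class
# name and its constructor arguments are assumed, not taken from this thread.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.models.zamba.modeling_zamba import ZambaHybridDynamicCache

model_id = "Zyphra/Zamba-7B-v1"  # placeholder checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
cache = ZambaHybridDynamicCache(
    model.config, batch_size=inputs.input_ids.shape[0], dtype=model.dtype, device=model.device
)
output = model.generate(**inputs, past_key_values=cache, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```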
I'll have a look!
Hey! 🤗 Thanks for your contribution to the transformers library. Before merging this pull request, the slow tests CI should be triggered. To enable this:
(For maintainers) The documentation for slow tests CI on PRs is here.
Does this include support for Zamba2?
@hg0428 thanks for asking! Support for Zamba2 will be added in a follow-up PR. Meanwhile, you can install Zyphra's fork of transformers as described in the Zamba2 model card.
Thanks for your contribution! 🤗
Unfortunately, that does not work on my device. The Zamba2 transformers fork relies on mamba_ssm, which requires an NVIDIA GPU. I have Apple Silicon. See my issue: Zyphra/transformers_zamba2#3
Just waiting for https://github.com/huggingface/transformers/actions/runs/11116137341/job/30897696145?pr=30950#step:12:64 to be fixed! (related to accelerate and auto device, good that we have this test!)
Hi @Arthur, thank you again for reviewing. The test mentioned above Additionally, we updated all the model's checkpoints on the hub, since this involved changing some of the weight keys related to the shared layers. Separately, given that we updated the checkpoints, we also swapped All tests related to zamba appear to pass. Thank you!
Does this Zamba support work on Apple Silicon?
I don't believe so. We're working on MLX support in a separate (private for now) vein of work from this PR, which just seeks to get basic GPU integration into upstream HuggingFace Transformers.
🎉 🎉 🎉 🎉 🎉 🎉 🎉 🎉 🎉 🎉
🚀
Hi, |
I think the Zyphra team is already working on it!
Hopefully we get Apple Silicon support for Zamba and Zamba2 soon. |
* Update index.md
* Rebase
* Rebase
* Updates from make fixup
* Update zamba.md
* Batched inference
* Update
* Fix tests
* Fix tests
* Fix tests
* Fix tests
* Update docs/source/en/model_doc/zamba.md Co-authored-by: Arthur <[email protected]>
* Update docs/source/en/model_doc/zamba.md Co-authored-by: Arthur <[email protected]>
* Update configuration_zamba.py
* Update src/transformers/models/zamba/modeling_zamba.py Co-authored-by: Arthur <[email protected]>
* Update src/transformers/models/zamba/modeling_zamba.py Co-authored-by: Arthur <[email protected]>
* Update src/transformers/models/zamba/modeling_zamba.py Co-authored-by: Arthur <[email protected]>
* Update src/transformers/models/zamba/modeling_zamba.py Co-authored-by: Arthur <[email protected]>
* Update modeling_zamba.py
* Update modeling_zamba.py
* Update modeling_zamba.py
* Update configuration_zamba.py
* Update modeling_zamba.py
* Update modeling_zamba.py
* Merge branch 'main' of https://github.com/Zyphra/transformers_zamba
* Update ZambaForCausalLM
* Update ZambaForCausalLM
* Describe diffs with original mamba layer
* Moved mamba init into `_init_weights`
* make fixup fixes
* quality test fixes
* Fix Zamba model path
* circleci fixes
* circleci fixes
* circleci fixes
* circleci fixes
* circleci fixes
* circleci fixes
* circleci fixes
* circleci fixes
* circleci fixes
* Update
* circleci fixes
* fix zamba test from merge
* fix ValueError for disabling mamba kernels
* add HF copyright Co-authored-by: Arthur <[email protected]>
* shared_transf --> shared_transformer
* Update src/transformers/models/zamba/modeling_zamba.py Co-authored-by: Arthur <[email protected]>
* Update src/transformers/models/zamba/modeling_zamba.py Co-authored-by: Arthur <[email protected]>
* Fixes
* Move attention head dim to config
* Fix circle/ci tests
* Update modeling_zamba.py
* apply GenerationMixin inheritance change from upstream
* apply import ordering
* update needed transformers version for zamba Co-authored-by: Arthur <[email protected]>
* add contribution author
* add @slow to avoid CI
* Update src/transformers/models/zamba/modeling_zamba.py Co-authored-by: Arthur <[email protected]>
* Define attention_hidden_size
* Added doc for attention_head_size
* trigger CI
* Fix doc of attention_hidden_size
* [run-slow] zamba
* Fixed shared layer logic, swapped up<->gate in mlp
* shared_transformer -> shared_transf
* reformat HybridLayer __init__
* fix docstrings in zamba config
* added definition of _get_input_ids_and_config
* fixed formatting of _get_input_ids_and_config

---------

Co-authored-by: root <[email protected]>
Co-authored-by: Arthur <[email protected]>
Co-authored-by: root <[email protected]>
Co-authored-by: Quentin Anthony <[email protected]>
What does this PR do?
This PR adds support for the Zamba architecture created by Zyphra Technologies.
Before submitting
- Did you read the contributor guideline, Pull Request section?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
@ArthurZucker @younesbelkada