feat: Upgrade Weights & Biases callback #30135
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@amyeroberts: I think I was finally able to get all CI checks to pass. The issues were related to type annotations and lazy imports; I have fixed these. Can you please take a look?
Thanks for all of the work adding this!
* feat: upgrade wandb callback with new features
* fix: ci issues with imports and run fixup
Hey @amyeroberts, do you know if this PR is going into the next release?
@morganmcg1 Yes, all commits merged into main will be part of the next minor release, v4.41. If there are patch releases before this, e.g. 4.40.2, then it won't be part of that release. If you want to use this immediately, you can install from source, getting all the most recent commits from main.
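The exact command isn't quoted in the thread, but installing from source as suggested typically means pointing pip at the main branch of the repository:

```shell
# Install transformers from the main branch to pick up unreleased commits
pip install git+https://github.com/huggingface/transformers.git
```

Note that this installs whatever is currently on main, so the resulting version may include other unreleased changes beyond this PR.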
This PR causes an error when a custom model with no
@jonghwanhyeon Could you open an issue, detailing the errors and linking to this PR for reference? This way we'll be able to properly track if/when it's resolved.
@amyeroberts @parambharat It seems that this always logs the initial model checkpoint to wandb, with no way of turning it off, and uploads the entire model when doing a full fine-tune. This is not intended, is it? See #30896 and #30897
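An opt-out guard of the kind those issues ask for might look like the following sketch. The `WANDB_LOG_INITIAL_MODEL` environment variable is a hypothetical name used purely for illustration, not an actual transformers or wandb setting:

```python
import os

def should_log_initial_model() -> bool:
    """Hypothetical opt-out guard for uploading the initial checkpoint.

    WANDB_LOG_INITIAL_MODEL is an assumed env-var name for illustration;
    the real fix (if any) is up to the maintainers in #30896/#30897.
    """
    return os.environ.get("WANDB_LOG_INITIAL_MODEL", "false").lower() in {"1", "true"}

# By default nothing is uploaded unless the user opts in.
print(should_log_initial_model())
```

The design point is that potentially large uploads should be opt-in rather than unconditional.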
This reverts commit 4ab7a28.
@ArthurZucker I appreciate you reverting this; my W&B storage was immediately full and now I can't access my W&B logs anymore without being immediately redirected to the billing page :/
What does this PR do?
This PR adds a few new functionalities to the Weights & Biases callback:
- Logs the PEFT and LoRA configs to wandb if present
- Adds model parameter counts to the wandb config and artifact metadata
- Adds `on_predict` methods to log prediction metrics
- Prints the model architecture to a file alongside the wandb artifact
- Logs the initial and final models to the wandb artifact for full reproducibility
- Adds step and epoch aliases to checkpoint artifacts
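The parameter-count item above can be illustrated with a minimal sketch. Here `parameter_counts` and its `(name, shape, requires_grad)` input are hypothetical stand-ins for iterating `model.named_parameters()` in the real callback, and the config key names are assumptions:

```python
from math import prod

def parameter_counts(named_params):
    """Compute total/trainable parameter counts, as the callback
    would add them to the wandb config and artifact metadata.

    `named_params` yields (name, shape, requires_grad) tuples — a
    hypothetical stand-in for a model's named_parameters() iterator.
    """
    total = trainable = 0
    for _name, shape, requires_grad in named_params:
        n = prod(shape)
        total += n
        if requires_grad:
            trainable += n
    return {"model/num_parameters": total,
            "model/num_trainable_parameters": trainable}

# Toy example: a frozen embedding plus two trainable LoRA adapter matrices.
params = [
    ("embed.weight", (1000, 64), False),
    ("lora_A.weight", (64, 8), True),
    ("lora_B.weight", (8, 64), True),
]
print(parameter_counts(params))
```

Splitting total from trainable counts is what makes the PEFT/LoRA case visible at a glance in the run config: only the adapter parameters show up as trainable.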
Here's a link to what the logged artifacts look like.
Here's a run overview page with the added config and metadata for a run with PEFT configs logged.
Before submitting
Pull Request section?
Who can review?