Benchmark GitHub Actions workflow #31163
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Looks great already!
.github/workflows/benchmark.yml
Outdated
      working-directory: /transformers
      run: |
        python3 -m pip install "optimum-benchmark>=0.2.0"
        HF_TOKEN=${{ secrets.TRANSFORMERS_HUB_BOT_HF_TOKEN }} python3 benchmark/benchmark.py --repo_id hf-internal-testing/benchmark_results --path_in_repo $(date +'%Y-%m-%d') --config-dir benchmark/config --config-name generation --commit=${{ github.sha }} backend.model=google/gemma-2b backend.cache_implementation=null,static backend.torch_compile=false,true --multirun
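For context: optimum-benchmark is Hydra-based, so the comma-separated overrides together with --multirun sweep the cartesian product of values, i.e. this single command launches four benchmark runs. A purely illustrative Python sketch of that expansion (the override names come from the command above; the loop itself is not part of the workflow):

from itertools import product

# The two swept overrides from the command above.
cache_implementations = ["null", "static"]  # backend.cache_implementation
torch_compile_values = ["false", "true"]    # backend.torch_compile

# --multirun expands to the cartesian product: 2 x 2 = 4 benchmark runs.
for cache, compile_flag in product(cache_implementations, torch_compile_values):
    print(f"backend.cache_implementation={cache} backend.torch_compile={compile_flag}")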
Can we post it somewhere / run a Python script to compare the results with the previous commit / with the average of previous commits?
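A hedged sketch of what such a comparison script could look like, assuming two summary.json files whose top-level entries are numeric metrics (the actual file layout is not specified in this PR, and the paths below are hypothetical):

import json

def compare_summaries(prev_path: str, curr_path: str) -> None:
    """Print the relative change of every metric present in both summaries."""
    with open(prev_path) as f:
        prev = json.load(f)
    with open(curr_path) as f:
        curr = json.load(f)
    for key in sorted(prev.keys() & curr.keys()):
        a, b = prev[key], curr[key]
        # Only compare numeric metrics, and avoid dividing by zero.
        if isinstance(a, (int, float)) and isinstance(b, (int, float)) and a:
            print(f"{key}: {a} -> {b} ({(b - a) / a:+.1%})")

# e.g. compare_summaries("2024-05-30/summary.json", "2024-05-31/summary.json")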
I am considering doing this with a Space app which would fetch the results from this dataset and show some graphs.
But if you think we should also do such a comparison within the same workflow run, I can add something.
(So far the dataset is mostly empty, so maybe it's better if I add that part sometime next week?)
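A minimal sketch of what such a Space could start from, assuming the results dataset holds one summary.json per run directory (the exact layout is an assumption):

import json
from huggingface_hub import HfApi, hf_hub_download

repo_id = "hf-internal-testing/benchmark_results"
api = HfApi()

# Collect every summary.json in the results dataset.
summary_files = [
    f for f in api.list_repo_files(repo_id, repo_type="dataset")
    if f.endswith("summary.json")
]
for path in sorted(summary_files):
    local_path = hf_hub_download(repo_id=repo_id, filename=path, repo_type="dataset")
    with open(local_path) as fh:
        print(path, json.load(fh))  # feed these records into a graph instead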
ok sounds good
.github/workflows/benchmark.yml
Outdated
      working-directory: /transformers
      run: |
        python3 -m pip install "optimum-benchmark>=0.2.0"
        HF_TOKEN=${{ secrets.TRANSFORMERS_HUB_BOT_HF_TOKEN }} python3 benchmark/benchmark.py --repo_id hf-internal-testing/benchmark_results --path_in_repo $(date +'%Y-%m-%d') --config-dir benchmark/config --config-name generation --commit=${{ github.sha }} backend.model=google/gemma-2b backend.cache_implementation=null,static backend.torch_compile=false,true --multirun
let's be super careful and have a fine-grained token for that!
I will generate new tokens for this and also for .github/workflows/check_tiny_models.yml, which currently uses the same token.
Updated to a new secret TRANSFORMERS_BENCHMARK_TOKEN (a fine-grained token).
Almost good to go; let's also run it when the slow CI for important models is triggered.
on:
  schedule:
    - cron: "17 2 * * *"
IMO we should run it on pushes to main for important models!
Yeah, I agree. I would push those results to a separate dataset, though (one for the daily CI, one for the push-to-main event).
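One way the triggers could be combined, as a sketch only (the PR ends up hooking into push-important-models.yml instead, and the branch filter here is an assumption):

on:
  schedule:
    - cron: "17 2 * * *"  # daily run, uploading to the daily-CI dataset
  push:
    branches:
      - main              # push-to-main run, uploading to the merge-event dataset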
Updated to add the following new block:
- runs when .github/workflows/push-important-models.yml is triggered
- uploads to hf-internal-testing/benchmark_results_merge_event
- name: Benchmark (merged to main event)
  if: github.event_name == 'push' && github.ref_name == 'main'
  working-directory: /transformers
  run: |
    python3 -m pip install "optimum-benchmark>=0.2.0"
    HF_TOKEN=${{ secrets.TRANSFORMERS_BENCHMARK_TOKEN }} python3 benchmark/benchmark.py --repo_id hf-internal-testing/benchmark_results_merge_event --path_in_repo $(date +'%Y-%m-%d') --config-dir benchmark/config --config-name generation --commit=${{ github.sha }} backend.model=google/gemma-2b backend.cache_implementation=null,static backend.torch_compile=false,true --multirun
Thanks! 🚀
What does this PR do?
Benchmark GitHub Actions workflow.
Workflow run page
Dataset on the Hub
(Not done yet)
We are mostly interested in the summary.json file. However, I am uploading the whole directory of an experiment in each run, so that the benchmark config files stay available if we ever need that information. This does seem a bit too much, though: it creates many files, most of which remain the same across dates.
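If we ever narrow this down to the summaries only, a hedged sketch using huggingface_hub.upload_folder with allow_patterns (whether benchmark.py uploads through this API is an assumption; the local folder and date below are placeholders):

from huggingface_hub import upload_folder

# Upload only the summary files instead of the whole experiment directory.
upload_folder(
    repo_id="hf-internal-testing/benchmark_results",
    repo_type="dataset",
    folder_path="runs",                  # placeholder for the local experiment dir
    path_in_repo="2024-06-01",           # e.g. $(date +'%Y-%m-%d')
    allow_patterns=["**/summary.json"],  # skip config files and other artifacts
)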