Set up repo where we can push benchmark results #473
Comments
@Dandandan @isidentical What do you think? This could apply to DataFusion as well.
I think this could be really useful. Each user would have to run at least two revisions when submitting, since there is no common baseline (everyone's machines and runtime conditions are different). For example, a script could automatically run the last released version as the baseline and the latest commit from the master branch as the target revision (and maybe we can add options to also compare against older revisions to get a holistic sense; v12 vs v13 vs HEAD). After a certain number of samples, we should at least start seeing the trend in performance (and, most importantly, detect obvious regressions), which I think would be really cool 💯
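A minimal sketch of the regression check described above, comparing per-query timings from a baseline revision against a target revision. The function name, threshold, and data shape are illustrative assumptions, not part of DataFusion or Ballista:

```python
# Hypothetical sketch: flag queries where the target revision is slower
# than the baseline by more than `threshold` (relative change).
def compare_revisions(baseline_ms, target_ms, threshold=0.10):
    """baseline_ms / target_ms: {query_name: elapsed_milliseconds}.
    Returns [(query, relative_change)] for regressions over the threshold."""
    regressions = []
    for query, base in baseline_ms.items():
        target = target_ms.get(query)
        if target is None:
            continue  # query missing from the target run
        change = (target - base) / base
        if change > threshold:
            regressions.append((query, change))
    return regressions

baseline = {"q1": 120.0, "q3": 300.0, "q5": 95.0}
head = {"q1": 118.0, "q3": 360.0, "q5": 96.0}
print(compare_revisions(baseline, head))  # q3 is ~20% slower, so it is flagged
```

Running both revisions on the same machine in the same session sidesteps the missing-common-baseline problem, since only the relative change matters.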
Sounds great!
Ballista already has the option to produce a summary JSON file:

```rust
/// Path to output directory where JSON summary file should be written to
#[structopt(parse(from_os_str), short = "o", long = "output")]
output_path: Option<PathBuf>,
```
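A submission script could load that summary file before comparing runs. A small sketch follows; the schema shown (a list of `{"query": ..., "elapsed_ms": ...}` entries) is an assumption for illustration, not Ballista's actual output format:

```python
import json

def load_summary(path):
    """Load a benchmark summary JSON file into {query: elapsed_ms}.
    Assumes (hypothetically) a list of objects with 'query' and
    'elapsed_ms' keys; adapt to Ballista's real schema as needed."""
    with open(path) as f:
        return {row["query"]: row["elapsed_ms"] for row in json.load(f)}
```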
I updated the PR description to suggest creating this repo at |
I went ahead and created https://github.com/datafusion-contrib/benchmark-automation
Is your feature request related to a problem or challenge? Please describe what you are trying to do.
We do not have a formal way of tracking benchmark performance over time. I have been running benchmarks occasionally and I have some spreadsheets that I have shared but this is not ideal.
Describe the solution you'd like
Create a new GitHub repo, `datafusion-contrib/benchmark-automation`, and define a structure so that anyone can create a PR to submit results from a benchmark run for DataFusion or Ballista. Later, we can add scripts to produce charts and look for regressions.
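One way to define such a structure is a filename convention that a CI check could validate on each submitted PR. The pattern below (`project_benchmark_sha_date.json`) is only a suggestion, not something the repo defines:

```python
import re

# Hypothetical naming convention for submitted result files, e.g.
# datafusion_tpch_ab12cd3_2021-06-01.json
RESULT_RE = re.compile(
    r"^(datafusion|ballista)_"   # project
    r"(tpch)_"                   # benchmark suite
    r"([0-9a-f]{7,40})_"         # git commit SHA (short or full)
    r"(\d{4}-\d{2}-\d{2})"       # run date
    r"\.json$"
)

def is_valid_result_name(name):
    """Return True if a submitted result file follows the convention."""
    return RESULT_RE.match(name) is not None
```

Encoding the project, benchmark, commit, and date in the filename lets later charting scripts order runs by revision without opening every file.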
Describe alternatives you've considered
Keep doing this in an ad-hoc way.
Additional context
None