Investigate the time taken, and number of benchmarks in core #225
This is the starting place for beginning to run the core benchmarks on a regular basis.

First steps: put together a benchmark run that summarises the time taken by each suite and records the number of results the run produces.

Then, at a future meeting: look at the results and decide on a subset to run regularly that fits into the time available and can be summarised appropriately.
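Not from the thread itself, but as an illustration of the second step (choosing a subset that fits the available time), here is a minimal sketch of a greedy selection over per-suite runtimes and importance scores. The suite names, scores, runtimes, and the four-hour budget are hypothetical placeholders; real scores could come from the relative-importance spreadsheet linked later in the thread.

```js
'use strict';

// Hypothetical per-suite data: `importance` could come from the
// relative-importance spreadsheet, `seconds` from a timing run like
// the one discussed in the comments below.
const suites = [
  { name: 'http', importance: 9, seconds: 1200 },
  { name: 'buffers', importance: 8, seconds: 2400 },
  { name: 'streams', importance: 7, seconds: 900 },
  { name: 'misc', importance: 2, seconds: 300 },
];

// Greedy selection: take the most important suites first until the
// time budget for a regular run is exhausted.
function selectSubset(all, budgetSeconds) {
  const picked = [];
  let used = 0;
  for (const s of [...all].sort((a, b) => b.importance - a.importance)) {
    if (used + s.seconds <= budgetSeconds) {
      picked.push(s.name);
      used += s.seconds;
    }
  }
  return { picked, used };
}

console.log(selectSubset(suites, 4 * 3600)); // e.g. a four-hour budget
```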
@mhdawson I'm writing a script to go through and collect the runtime for each core benchmark. Could you create a temporary Jenkins job that I can modify and use to run this?
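The script mentioned above is not included in the thread; the following is a minimal sketch of what collecting per-suite runtimes and result counts might look like. It assumes a local Node.js checkout whose path is passed as the first argument and whose benchmark/ directory holds one subdirectory per suite; iteration tuning and CPU configuration are ignored.

```js
'use strict';

// Rough sketch only: walk the benchmark/ directory of a Node.js checkout,
// run every benchmark file in each suite, and record wall-clock time plus
// the number of result lines printed.
const { spawnSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const benchDir = path.join(process.argv[2] || '.', 'benchmark');
const summary = [];

for (const entry of fs.readdirSync(benchDir, { withFileTypes: true })) {
  if (!entry.isDirectory()) continue;
  const suiteDir = path.join(benchDir, entry.name);
  const files = fs.readdirSync(suiteDir).filter((f) => f.endsWith('.js'));

  const start = Date.now();
  let results = 0;
  for (const file of files) {
    const run = spawnSync(process.execPath, [path.join(suiteDir, file)],
                          { encoding: 'utf8' });
    // Treat each non-empty stdout line from a benchmark as one result.
    results += (run.stdout || '').split('\n').filter(Boolean).length;
  }

  summary.push({
    suite: entry.name,
    seconds: Math.round((Date.now() - start) / 1000),
    results,
  });
}

console.table(summary);
```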
Closing as we have not made progress. @gabrielschulhof is going to write a proposal for an alternate approach.

Spreadsheet tracking relative importance of benchmarks: https://docs.google.com/spreadsheets/d/17ey-6r_sTVYpy6Zv0n55kqkVgqRf67aaf6Ub7eebGgo/edit#gid=0
Here's a spreadsheet with the processed responses, where the benchmarks are sorted in order of popularity: https://docs.google.com/spreadsheets/d/1_7VrAFO8K9KdQW8qEmnKnVBb514SMRqYiitoDu_cMiM/edit#gid=979605