
Investigate the time taken, and number of benchmarks in core #225

Open

gareth-ellis opened this issue Jun 5, 2018 · 6 comments

@gareth-ellis
Member

This is the starting point for running the core benchmarks on a regular basis.

First steps: put together a benchmark run that summarises the time taken by each suite and records the number of results each run produces.

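As a rough illustration of that first step, here is a minimal sketch (not the actual script). It assumes a nodejs/node checkout in which each subdirectory of benchmark/ is a suite runnable via benchmark/run.js, and that each result line ends with a colon followed by a number; both are assumptions about the core benchmark layout rather than confirmed details.

```js
'use strict';
// Sketch only: time each core benchmark suite and count the result
// lines it prints. Assumes it lives in the root of a nodejs/node
// checkout, that every subdirectory of benchmark/ is a suite that
// benchmark/run.js can run, and that each result line ends with
// ": <number>". Requires Node >= 10.7 for process.hrtime.bigint().
const { execFileSync } = require('child_process');
const fs = require('fs');
const path = require('path');

const benchDir = path.join(__dirname, 'benchmark');
const suites = fs.readdirSync(benchDir)
  .filter((name) => fs.statSync(path.join(benchDir, name)).isDirectory());

for (const suite of suites) {
  const start = process.hrtime.bigint();
  let output;
  try {
    output = execFileSync(process.execPath,
      [path.join(benchDir, 'run.js'), suite], { encoding: 'utf8' });
  } catch (err) {
    console.error(`${suite} failed: ${err.message}`);
    continue;
  }
  const seconds = Number(process.hrtime.bigint() - start) / 1e9;
  const results = output.split('\n').filter((line) => /: [\d.]+\s*$/.test(line));
  console.log(`${suite}\t${seconds.toFixed(1)}s\t${results.length} results`);
}
```

Per-suite totals from a run like this would give the meeting concrete numbers to work from when picking the subset.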
Then, at a future meeting: look at the results and decide on a subset to run regularly, one that fits into the time available and can be summarised appropriately.

@gareth-ellis
Member Author

@mhdawson I'm writing a script to go through and collect the runtime for each core benchmark. Could you create a temporary job in Jenkins that I can modify and use to run this?

@mhdawson
Member

Closing as we have not made progress. @gabrielschulhof is going to write a proposal for an alternate approach.

@gabrielschulhof

Spreadsheet tracking relative importance of benchmarks:

https://docs.google.com/spreadsheets/d/17ey-6r_sTVYpy6Zv0n55kqkVgqRf67aaf6Ub7eebGgo/edit#gid=0

@gabrielschulhof

Here's a spreadsheet with the processed responses, in which the benchmarks are sorted by popularity:

https://docs.google.com/spreadsheets/d/1_7VrAFO8K9KdQW8qEmnKnVBb514SMRqYiitoDu_cMiM/edit#gid=979605
