Currently we have a few integration benchmarks that only execute a single repetition, but some of these benchmarks share input files. As a consequence, the first of these benchmarks would run on a cold file cache and the others on a warm cache. We have to normalize this, e.g. by having each integration test run two iterations and discarding the runtime of the first.
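The proposed normalization could look like the following minimal Python sketch, assuming benchmarks are plain callables; the helper name `run_benchmark` and its signature are hypothetical, not part of the existing harness:

```python
import time


def run_benchmark(bench, iterations=2):
    """Time `bench` several times and discard the first (cold-cache) run.

    The first iteration pulls the input files into the OS page cache,
    so every reported timing reflects a warm cache, regardless of
    which benchmark happened to run first.
    """
    timings = []
    for _ in range(iterations):
        start = time.perf_counter()
        bench()
        timings.append(time.perf_counter() - start)
    # Drop the warm-up iteration; keep only warm-cache timings.
    return timings[1:]
```

With `iterations=2` this yields exactly one warm-cache measurement per benchmark, which matches the suggestion of running two iterations and discarding the first.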
Isn't that the job of the benchmark itself rather than the infrastructure?
The benchmark author should ensure the run conditions in the benchmark's initialization phase. Indeed, if the intent is to benchmark on a cold cache, there is little we can do anyway, as previous benchmarks may already have forced the kernel to load the data files.
I would be in favor of that approach rather than controlling this indirectly through the infrastructure, at the cost of making it more complicated.
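Under this approach, a benchmark that wants warm-cache conditions would establish them itself during setup. A minimal sketch, assuming file-based inputs; the helper name `warm_file_cache` is hypothetical:

```python
from pathlib import Path


def warm_file_cache(path):
    """Read the whole file once so subsequent timed reads hit the
    OS page cache. Call this in the benchmark's initialization phase,
    before any timing starts. Returns the number of bytes read."""
    data = Path(path).read_bytes()
    return len(data)
```

A cold-cache benchmark would instead need to evict the file from the page cache in its setup, which is platform-specific and harder to do reliably.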