metrics: amalgamate the two metrics invoke scripts #912
My concern is that if we're running only a subset of metrics tests for the metrics CI, we might forget to review the disabled tests periodically (bitrot might set in, or we might determine in the future that we could re-enable certain tests). Hence, putting all the code together may minimise those risks (and reduce code duplication). However, I'm not as close as @grahamwhaley is to all this code to comment further. @chavafg - do you have any thoughts on this?
Right, valid point. Some of the disabled tests (but not all - some are outdated, special cases, duplicates etc.) can be used longer term to detect regressions. My thoughts on this so far have been:
One more thought somebody injected: as well as the 'quick' CI on the PRs, we could have a slower back-burner CI doing more extensive tests on the PRs as well. It would not report back to the blocking ack/nack status on GitHub, but if it found a regression it could post a message to the comment thread. That means it would not hold up PR merges, and it could also post results to a merged PR if it later found there actually was a regression. Just a thought.
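Purely as a hypothetical sketch of that back-burner idea (the script it runs, the repo slug and the token variable are all placeholders, not existing CI code), the job could run the long suite and, on failure, post a non-blocking comment via the GitHub issues API:

```bash
#!/bin/bash
# Hypothetical back-burner metrics job: run the full (slow) metrics suite
# and, if it fails, post a non-blocking comment on the PR rather than
# reporting a blocking ack/nack status.
# The script name, repo slug and GITHUB_TOKEN handling are placeholders.

set -u

pr_number="$1"
repo="my-org/tests"   # placeholder - substitute the real org/repo

if ! ./run_full_metrics_suite.sh; then
    # Post a comment to the PR thread via the GitHub issues API.
    curl -s -X POST \
        -H "Authorization: token ${GITHUB_TOKEN}" \
        -H "Accept: application/vnd.github+json" \
        "https://api.github.com/repos/${repo}/issues/${pr_number}/comments" \
        -d '{"body": "Back-burner metrics run found a possible regression - please take a look."}'
fi

# Always exit 0 so this job never blocks the PR merge.
exit 0
```

Because it never reports a failing status, it cannot hold up merges, but the comment still lands on the PR (or on the already-merged PR) for someone to investigate.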
That also sounds interesting. I think that realistically we're going to have to have this bipartite PnP system to keep the project momentum. But another dimension is release PRs: we don't want to make a release and create the corresponding OBS binaries if, at some point after the release, we then discover PnP issues. Hence, we should tie some PnP process into the release flow. We could run the full battery of PnP tests and, crucially, wait for the full results, either:

- On a release PR
  - Pros
  - Cons
- On all PRs that will go into a release
  - Pros
  - Cons
In summary, would we rather take the pain (time hit):
/cc @jcvenegas? Aside: I'm amazed there is no decent pair of emojis for tick and cross. Call me "thumbist", but I don't like the look of 👍 and 👎 😄
We currently have two metrics scripts:
They perform different roles (one literally runs all the metrics tests, the other runs just the CI metrics subset under controlled circumstances), but there is some crossover between them.
Evaluate if they can be combined into a single script. Some points:
Maybe we need a third script (an indirection layer), which has parameters to control whether we are running for performance or stability, and which other features (KSM, comparing, reporting) we want enabled.
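As a purely hypothetical sketch of that idea (the flag names and the two scripts it dispatches to are invented for illustration, not existing code), such a wrapper could look something like:

```bash
#!/bin/bash
# Hypothetical "third script" wrapper - selects which underlying metrics
# invocation to run and which extra features to enable.
# All option names and the called scripts are placeholders.

mode="performance"   # or "stability"
enable_ksm="no"
enable_compare="no"
enable_report="no"

while [ $# -gt 0 ]; do
    case "$1" in
        --mode)    mode="$2"; shift ;;
        --ksm)     enable_ksm="yes" ;;
        --compare) enable_compare="yes" ;;
        --report)  enable_report="yes" ;;
        *) echo "unknown option: $1" >&2; exit 1 ;;
    esac
    shift
done

if [ "$enable_ksm" = "yes" ]; then
    echo "placeholder: enable and settle KSM here"
fi

if [ "$mode" = "stability" ]; then
    ./run_metrics_ci_subset.sh      # placeholder: controlled CI subset
else
    ./run_metrics_all.sh            # placeholder: full metrics battery
fi

if [ "$enable_compare" = "yes" ]; then
    echo "placeholder: compare results against the baseline here"
fi

if [ "$enable_report" = "yes" ]; then
    echo "placeholder: generate/publish the report here"
fi

exit 0
```

The CI might then call the wrapper with something like `--mode stability --compare`, while a full performance run would use `--mode performance --ksm --report`.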
tbh, after writing this, I suspect we are fine as we are, without adding in a bunch of script complexity that currently would not gain us any extra functionality.
@jodh-intel - wdyt?