[CI:BUILD] Packit: run cockpit-podman tests in PRs #19500
Should be fixed once containers/crun#1256 merges. EDIT: fixed crun PR link.
/packit retest-failed
PR LGTM in its current form. @martinpitt would you mind re-running these tests over a period of a few days on this very PR, to make the podman team kinda sorta comfortable 😄? I did something like that for the initial packit enablement: #17898. Alternatively, consider adding these to a lower-traffic repo where the podman team won't mind that much.
.packit.yaml (outdated diff)

```diff
@@ -18,6 +18,20 @@ jobs:
   srpm_build_deps:
     - make
 
+- job: tests
+  identifier: revdeps
```
My bad, forgot this one. Maybe `cockpit-podman-revdeps` / `cockpit-revdeps` would make it more explicit.
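For illustration, here is a minimal sketch of what such a renamed test job in `.packit.yaml` might look like. The field names (`job`, `trigger`, `identifier`, `targets`, `tmt_plan`) are standard Packit options, but the concrete values below are assumptions, not taken from this PR:

```yaml
# A minimal sketch, assuming standard Packit options; the targets and
# plan filter below are illustrative placeholders, not the values from
# this PR.
- job: tests
  trigger: pull_request
  identifier: cockpit-revdeps   # renamed from "revdeps" to be more explicit
  targets:
    - fedora-latest-stable
  tmt_plan: /test/reverse-dependencies
```

A separate `identifier` makes the revdeps run show up as its own commit status, so it is easy to tell apart from podman's own test jobs.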
Sure, I don't have a strong opinion about that one. I suppose it depends on whether you prefer more and smaller statuses (each reverse dependency tested in a separate run) or fewer, bigger ones. I myself also prefer the former.
I'll rename it in the next push, say next Tuesday or so. Good opportunity to run them a few times 😁
@martinpitt how about including the minimum set of tests that you're hoping to see in podman gating, and doing multiple runs on them? Meaning, each test run includes all of the minimum tests. As the team gets comfortable (assuming they do), we can include more.
Btw, I pinged the packit folks to check for "invisible tasks", meaning something that won't show up in the GitHub UI for PRs but would show up on packit's dashboard. They don't have anything like that yet and have asked me to file a feature request. Assuming that's doable, and combined with the "ping specific people on failure" feature request, is that something that would work for you if the team as a whole is not comfy with including these?
> minimum set of tests that you're hoping to see in podman gating
Sorry, this is impossible for me to answer, at least in the way you hope. Let me turn the question around: which of podman's features do you consider okay to break for users, and which not? E.g. a bare-minimum test run of "I can start a container with `sleep 1`" would not have caught the regression that broke checkpoints (which would have landed in the next release).
> doing multiple runs on them
Like, re-run them 50 times to see how many times they fail? That I can do. We have a whole statistics database of test runs on our own CI, but not for the TF (Testing Farm) runs. This would be an interesting experiment.
> invisible tasks
That seems to make tracing back regressions unnecessarily hard for everyone, and it defeats the whole point of this exercise. The goal is to make your team's and my team's lives more efficient and less frustrating by flagging and preventing regressions before they land, not to sweep them under the rug (that is what we have today already, and it makes them unnecessarily hard to trace back).
"ping specific people on failure" feature
That absolutely makes sense, yes (see description). I believe it's not a hard blocker, but it would certainly make it much easier for both of us to handle problems when they happen. I'm happy to help the packit folks with implementing this. If you want to wait until that is a thing, that is certainly acceptable to me.
FTR: Of course your team has the right to say "we don't want this" or "we want to stop this experiment after one or six weeks" -- I'm just asking to give it a try. (I've gone through this discussion at least three times in my life, and after a few weeks nobody ever wanted to go back 😁 )
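As a purely speculative aside on what such a "ping specific people on failure" hook could eventually look like: the sketch below follows the shape of Packit's `notifications.failure_comment.message` configuration, but this is not what the thread agreed on, and the message text and handle are made-up placeholders.

```yaml
# Speculative sketch only: have Packit leave a PR comment pinging
# specific people when these tests fail. The notifications /
# failure_comment / message layout follows Packit's config schema;
# the wording and handle are placeholders.
notifications:
  failure_comment:
    message: "revdeps tests failed -- @martinpitt, please take a look."
```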
> doing multiple runs on them
>
> Like, re-run them 50 times to see how many times they fail
Procedural warning: There is no "restart" button for passed tests, and even for failed tests I don't have the privilege to press that button anyway (as I'm not a project member). So I'll have to do that through force-pushing 50 times. That's going to create a lot of notification noise, but at least we'll have a record of all these statuses in GitHub -- if that's okay with you?
Ah, nevermind -- I can do the "50 re-runs" on my fork, like in https://github.com/martinpitt/podman/pull/1. That avoids noise here, and also unnecessary runs of the other CI jobs (which would just be a waste of electrons).
🏳️ I have a gut feeling that this is maybe too big a step. So I have another proposal:
That way, your team can become familiar with this, and land changes in these three projects with more confidence. Then I'll come back to you after a quarter or so, and sort out packit notifications in the meantime?
I'm cool with that. Though I can't say for sure if a quarter will suffice, given I'll be getting pushback too 😄. I can keep you posted.
See https://cockpit-project.org/blog/tmt-cross-project-testing.html

[NO NEW TESTS NEEDED] - quiesce bot, that whole commit *is* a new test

Signed-off-by: Martin Pitt <[email protected]>
No-change rebase to current main, as the previous branch was already two weeks old.
We have received approval from the team to run these tests on PRs.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: lsm5, martinpitt

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
/cherry-pick v4.6
@lsm5: once the present PR merges, I will cherry-pick it on top of v4.6 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@lsm5: new pull request created: #19738 In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Woohoo! You claim the prize of being the first project to land this (I have some others in the works, but they are still in review) 🏆. Thanks for your trust in doing this experiment! Please don't hesitate to ping me with questions or issues.
thanks for this contribution!!
Sure thing. And I'm sure there will be some grumpy pings from our resident Gandalf @edsantiago in the near future :). So, let's hope the packit feature to ping specific people gets done soon :D
Thanks for bearing with me! Yes, I'm sure we'll hit some bumps here and there; it's a learning experience for both of us. That TF auto-notification feature will surely be very important for scaling this up.
See https://cockpit-project.org/blog/tmt-cross-project-testing.html
That's what we just talked about in the cabal meeting. This is mostly a foundation for your planned Jira card/team discussion, and a good place to collect notes, links, try notifications, etc.
I tested this on my fork, where it is usually green. I also tested an extra commit that breaks the checkpoint API there, and it failed the checkpoint tests, confirming that this really uses the podman package from the PR.
Ironically, the very first run here found a user-visible regression that isn't released yet: containers/crun#1255.
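For readers who don't follow the blog link: the mechanism is a tmt plan in podman's tree that discovers and runs cockpit-podman's tests against the packages built from the PR. Below is a rough sketch of such a plan, assuming standard tmt `discover`/`execute` steps; the summary, URL, and ref are illustrative, not copied from this PR:

```yaml
# Rough sketch of a cross-project tmt plan, assuming standard tmt
# discover/execute steps; fetch cockpit-podman's test metadata and run
# its tests against the podman build under test. url/ref are assumed.
summary: Run cockpit-podman tests against this PR's podman packages
discover:
  how: fmf
  url: https://github.com/cockpit-project/cockpit-podman
  ref: main
execute:
  how: tmt
```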
Questions:
(please feel free to add more)