Vendor c/image after https://github.com/containers/image/pull/1816 #17221
Conversation
Cc: @edsantiago
@edsantiago the test logs (very reasonably) don’t include output from Podman commands run as part of the tests if the tests succeed. Do you know if there is some pre-existing, reasonably convenient trick I could use to log things from deep inside Podman code in a way that would then be possible to view on test success? Maybe syslog?
Force-pushed from 7d41d9e to 6ad3e48
I mean, are we supposed to detect and work around consistently incorrect DNS in redirect targets? It’s probably possible (after we successfully ping …)
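Purely to illustrate what “detect” could mean here, a hypothetical pre-check that a redirect target’s host resolves at all; nothing like this is part of the PR, and it says nothing about whether the answer would be the *correct* address, which is the harder part:

```go
package main

import (
	"fmt"
	"net"
	"net/url"
)

// redirectTargetResolves is a hypothetical check: before following a redirect,
// verify that its host resolves to something. It cannot tell whether the
// resolved address is actually the right one.
func redirectTargetResolves(redirectURL string) bool {
	u, err := url.Parse(redirectURL)
	if err != nil {
		return false
	}
	addrs, err := net.LookupHost(u.Hostname())
	return err == nil && len(addrs) > 0
}

func main() {
	// Made-up URL for illustration; .invalid never resolves, so this prints false.
	fmt.Println(redirectTargetResolves("https://cdn.example.invalid/blob"))
}
```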
From a BATS script, you can echo to fd3, prepending a hash: echo "# debug: $output" >&3. From podman itself, you can't, because I take pains to close fd3 before running podman. But journal logs are available in CI, via a series of clicks on the Cirrus page (near the bottom of the accordions).
I reported the flake to Quay on Thursday. Haven't made it past the "works for me" phase yet.
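If the syslog idea from the question above were pursued, a minimal sketch using logrus’s syslog hook might look like this (the tag and the log message are made up for illustration; this is not something the PR adds):

```go
package main

import (
	"log/syslog"

	"github.com/sirupsen/logrus"
	lsyslog "github.com/sirupsen/logrus/hooks/syslog"
)

func main() {
	// Send logrus output to the local syslog/journal in addition to stderr,
	// so debug messages survive even when a test passes and its own output
	// is discarded. "podman-debug" is an arbitrary tag for this sketch.
	hook, err := lsyslog.NewSyslogHook("", "", syslog.LOG_INFO, "podman-debug")
	if err != nil {
		logrus.Warnf("could not connect to syslog: %v", err)
		return
	}
	logrus.AddHook(hook)

	logrus.Infof("BODYREADER DEBUG: this line should also end up in the journal")
}
```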
Force-pushed from e6a3ec5 to a4987ad
One caught failure:
Good, so the heuristic did match in at least this one case.
Force-pushed from a4987ad to ec90b82
The current heuristic requires 1 MB of progress, and did not trigger here.
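For reference, a minimal sketch of a progress-threshold check of this kind; the constant, the function name, and the exact conditions are illustrative assumptions, not the actual c/image heuristic:

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// retryThresholdBytes mirrors the 1 MB figure mentioned above; the real
// c/image heuristic and its exact conditions are not reproduced here.
const retryThresholdBytes = 1024 * 1024

// shouldReconnect is a hypothetical helper: only attempt a reconnect when the
// body was cut short and the connection had already made enough progress that
// retrying seems worthwhile.
func shouldReconnect(bytesRead int64, err error) bool {
	return errors.Is(err, io.ErrUnexpectedEOF) && bytesRead >= retryThresholdBytes
}

func main() {
	fmt.Println(shouldReconnect(512*1024, io.ErrUnexpectedEOF))    // false: not enough progress
	fmt.Println(shouldReconnect(4*1024*1024, io.ErrUnexpectedEOF)) // true
}
```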
Force-pushed from c3af877 to c9d4ccc
Analyzing test runs from Jan 30 (and making a record here so that we have some sort of statistics): Quite a few tests are broken by the extra logging; I need to tune that down.
So “immediately” after successfully receiving data (possibly buffered locally? OTOH clearly not a stalled connection), we get an unexpected EOF. On the positive side, reconnecting seems to have worked (the first recorded instance?).
This one also seems to have worked.
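For context, a rough Go sketch of the reconnect-on-unexpected-EOF idea, resuming with an HTTP Range request; the names and error handling are simplified assumptions, not the vendored c/image bodyReader:

```go
// Package resume is an illustrative sketch, not the vendored c/image code.
package resume

import (
	"errors"
	"fmt"
	"io"
	"net/http"
)

// resumingReader wraps an HTTP response body and, when the body is cut short,
// tries one reconnect with a Range request starting at the current offset.
type resumingReader struct {
	client *http.Client
	url    string
	body   io.ReadCloser
	offset int64
}

func (r *resumingReader) Read(p []byte) (int, error) {
	n, err := r.body.Read(p)
	r.offset += int64(n)
	if !errors.Is(err, io.ErrUnexpectedEOF) {
		return n, err // EOF, success, or an error we don't try to recover from
	}
	// The body ended early; ask the server to resume at the current offset.
	req, reqErr := http.NewRequest(http.MethodGet, r.url, nil)
	if reqErr != nil {
		return n, err
	}
	req.Header.Set("Range", fmt.Sprintf("bytes=%d-", r.offset))
	resp, respErr := r.client.Do(req)
	if respErr != nil || resp.StatusCode != http.StatusPartialContent {
		if respErr == nil {
			resp.Body.Close()
		}
		return n, err // give up and report the original failure
	}
	r.body.Close()
	r.body = resp.Body
	return n, nil // the caller's next Read continues on the new connection
}
```

The real code also has to decide whether a reconnect is worth attempting at all, which is where the progress heuristic discussed above comes in.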
Force-pushed from c9d4ccc to e73a866
Force-pushed from e73a866 to f70523d
Note to self: a backport to the 5.24 branch might be in order…
Force-pushed from f70523d to 0cf89b5
Backported, let’s see… (This PR is currently primarily helpful as a source of frequent enough failures to gather more data. Actually backporting depends on following a process…)
LGTM
I am also in favor of a backport. If we backport, today is a good time to get it in before the 4.4.1 release.
Force-pushed from 0cf89b5 to 8ba67ba
Force-pushed from 974b4bc to 1d54f86
Force-pushed from f493928 to 2f81a2f
Due to #17342 being merged, just using a c/image 5.24 branch would be a revert. So, this PR now just rolls c/image forward to include the fix (along with the other dependency updates that entails).
Ready for review and possible merging.
Also includes unreleased openshift/imagebuilder#246 to work with the updated docker/docker dependency. And updates some references to newly deprecated docker/docker symbols. [NO NEW TESTS NEEDED] Signed-off-by: Miloslav Trmač <[email protected]>
Force-pushed from 2f81a2f to e308ba0
Ouch. Restarted a system-test flake; it’s the …
LGTM
@edsantiago PTAL
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: mtrmac, vrothberg. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
I'm sorry... this is not something I can realistically review, given my ignorance of the underlying modules.
@rhatdan PTAL
@containers/podman-maintainers PTAL
/lgtm
WIP, this is primarily to expose the code to flakes, and to collect production-environment error values and other data. Look for BODYREADER DEBUG messages in logs; they are intentionally very loud right now.
Does this PR introduce a user-facing change?
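For illustration only, a deliberately loud temporary message of the kind described above might look like this; the message text and variables are made up, and the actual logging lives in the vendored c/image code:

```go
package main

import (
	"io"

	"github.com/sirupsen/logrus"
)

func main() {
	// Made-up values; in the real code these would come from the body reader's state.
	bytesRead, readErr := int64(512*1024), io.ErrUnexpectedEOF

	// Logged at a high level on purpose so it shows up even in otherwise quiet CI logs.
	logrus.Errorf("BODYREADER DEBUG: read %d bytes, then: %v", bytesRead, readErr)
}
```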