
[DO NOT MERGE - WIP] Cirrus: use Ubuntu 22.04 LTS #14397

Closed
wants to merge 1 commit

Conversation

lsm5
Member

@lsm5 lsm5 commented May 27, 2022

Signed-off-by: Lokesh Mandvekar [email protected]

Does this PR introduce a user-facing change?

None

depends on containers/automation_images#134

@lsm5 lsm5 added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label May 27, 2022
@openshift-ci openshift-ci bot added release-note-none and removed do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. labels May 27, 2022
@lsm5 lsm5 requested a review from cevich May 27, 2022 13:40
@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 27, 2022
@lsm5 lsm5 force-pushed the ubuntu-2204-lts-cirrus branch 2 times, most recently from cd67454 to 4949830 Compare May 27, 2022 14:52
@cevich
Member

cevich commented May 27, 2022

And now the "real fun" begins (log). 😁

@mtrmac
Collaborator

mtrmac commented May 30, 2022

(The linked log seems to be a straightforward repo unavailability; looking at https://api.cirrus-ci.com/v1/task/5492536713150464/logs/main.log instead.)

  • Unlikely hypothesis:
    • The test is supposed to generate random lockPath names in PodmanTestCreateUtil; maybe the RNG is being set up by Ginkgo to always generate the same names. That could be verified by adding logging of the lockPath value. (If the generated names are truly random, I have absolutely no idea what’s going on. If they aren’t random, that’s not a problem per se, just a precondition for the hypothesis.)
    • The design requires each test to clean up the remote socket after itself when done, so, at least assuming there are not 1000 tests running in parallel, an available path should exist even if the names are deterministic. Typically that seems to happen via podmanTest.Cleanup() → StopRemoteService. Is that cleanup missing in some situations?
    • Even assuming the cleanup is missing or not performed, it is seriously surprising to me that the test log shows a failure on just the 16th test, when there are supposed to be 1000 attempts. Is the test somehow running on a pre-existing directory with close to 1000 lock files already? (And is the RNG seed consistent across runs?)
  • Alternative hypothesis: the lockFile creation is failing for some other, consistent, reason, maybe a permission error, so the same failure exhausts all 1000 attempts. Logging the errors returned by os.OpenFile(lockPath) would reveal that (a rough instrumentation sketch follows below). Guessing, maybe the current directory (I don’t even know what that directory is) is not writable.
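
For illustration, a minimal, hypothetical Go sketch of the kind of instrumentation suggested above: a retry loop that logs the error from each os.OpenFile attempt so a genuine name collision (EEXIST) can be told apart from a consistent failure such as EACCES or ENOENT. The directory, file-name pattern, and 1000-attempt limit are assumptions taken from this discussion, not podman's actual test code.

package main

import (
    "fmt"
    "math/rand"
    "os"
    "path/filepath"
)

// createLockFile mimics the pattern under discussion: try up to 1000
// randomly named lock files and report why each attempt failed.
func createLockFile(lockDir string) (string, error) {
    for i := 0; i < 1000; i++ {
        lockPath := filepath.Join(lockDir, fmt.Sprintf("remote-%d.lock", rand.Int()))
        f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_EXCL|os.O_RDWR, 0o644)
        if err != nil {
            // os.IsExist(err) would indicate a real name collision; anything
            // else (permission denied, missing directory) will repeat
            // identically on every one of the 1000 attempts.
            fmt.Fprintf(os.Stderr, "attempt %d: %s: %v\n", i, lockPath, err)
            continue
        }
        f.Close()
        return lockPath, nil
    }
    return "", fmt.Errorf("no usable lock path in %s after 1000 attempts", lockDir)
}

func main() {
    if path, err := createLockFile(os.TempDir()); err != nil {
        fmt.Fprintln(os.Stderr, err)
    } else {
        fmt.Println("created", path)
    }
}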

Member

@cevich cevich left a comment


Changes LGTM

@openshift-ci
Contributor

openshift-ci bot commented May 31, 2022

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cevich, lsm5

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@cevich
Member

cevich commented May 31, 2022

The int podman fedora-36 root container task is hitting the 1:30 timeout set in .cirrus.yml, so something is seriously affecting test performance. Unfortunately, by timing out before completion, Cirrus skips collection of a bunch of logs that would be helpful to see. It might be worth running this test manually through hack/get_ci_vm.sh (by literally executing hack/runner.sh); that allows inspecting the environment state and the timing data reported at the end of all tests. Something is soaking up somewhere around 30-45 minutes of unexpected time.

@cevich
Member

cevich commented May 31, 2022

The RNG/name-collision problem was seen in Fedora-land last year I believe. Miloslav and I (mostly Miloslav) spent quite a bit of time poking at it. I would point out that we do have a software RNG service (rngd) enabled in the Fedora images, but not explicitly enabled in Ubuntu (IIRC). If there isn't one by default, it's possible this needs to be installed/enabled for Ubuntu.

@mtrmac
Collaborator

mtrmac commented May 31, 2022

Note that the collision happens on the very first test that is not skipped, and all subsequent tests consistently fail as well. This is not like the random collisions of earlier, when there would be a random failure once in a month.

At this point I’m guessing there’s nothing random about that; the lock file creation is consistently failing for a reason that should be fixable. (But I also didn’t do any work to diagnose this further.)

@lsm5 lsm5 force-pushed the ubuntu-2204-lts-cirrus branch from 4949830 to aea263e Compare June 1, 2022 17:21
@cevich
Member

cevich commented Jun 1, 2022

Note that the collision happens on the very first test that is not skipped, and all subsequent tests consistently fail as well. This is not like the random collisions of earlier, when there would be a random failure once in a month.

I remember things differently (and maybe wrongly): a fmt.Print() showed every test basically starting with the same random seed and therefore generating the exact same filenames. I believe the underlying issue had something to do with using the seed from ginkgo instead of c/common (or something to that effect)? Anyway, I also remember there were a number of PRs that went out, I just can't recall what was fixed or where 😞

@cevich
Member

cevich commented Jun 1, 2022

Hmmm, so adding rngd to Ubuntu doesn't seem to have resolved the issue, at least for int remote ubuntu-2204 root host. @mtrmac would you suggest @lsm5 instrument the socket-generating code, maybe getting us more detail about what err actually is?

e.g. I wonder if we're losing XDG_RUNTIME_DIR (it's empty) and the real problem is a permission-denied or file/directory-doesn't-exist kind of error.
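
A hedged sketch of that kind of check, assuming nothing about podman's actual code: print XDG_RUNTIME_DIR and probe whether the candidate directory is writable, so an empty variable or a permission problem shows up directly in the log.

package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func main() {
    runtimeDir := os.Getenv("XDG_RUNTIME_DIR")
    fmt.Printf("XDG_RUNTIME_DIR=%q\n", runtimeDir)
    if runtimeDir == "" {
        // Hypothetical fallback, just so the probe below has a target.
        runtimeDir = os.TempDir()
    }

    // Creating and removing a throwaway file surfaces the underlying
    // error (permission denied, no such directory, ...) verbatim.
    probe := filepath.Join(runtimeDir, "podman-e2e-probe")
    f, err := os.OpenFile(probe, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
    if err != nil {
        fmt.Printf("cannot create files under %s: %v\n", runtimeDir, err)
        return
    }
    f.Close()
    os.Remove(probe)
    fmt.Printf("%s is writable\n", runtimeDir)
}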

@mtrmac
Collaborator

mtrmac commented Jun 1, 2022

Yes, that’s what the “alternative hypothesis” part of #14397 (comment) suggests. Do y’all need me to prepare a patch to that effect?

@lsm5
Member Author

lsm5 commented Jun 1, 2022

Yes, that’s what the “alternative hypothesis” part of #14397 (comment) suggests. Do y’all need me to prepare a patch to that effect?

na, looking into it now. Let me get back to you.

@cevich
Member

cevich commented Jun 1, 2022

@lsm5 as per https://paste.centos.org/view/f7b8a0f8 I'm onboard with mtrmac. Damn lock file simply isn't being created for some reason specific to Ubuntu 😕

I peeked at the other failures here and TBH I'd just ignore them until the lockfile thing is figured out. Hopefully by then there won't be too many more new problems (and maybe fewer if we're lucky) 😁

@cevich
Member

cevich commented Jun 2, 2022

@lsm5 looking better now. The next-worst problem seems to be the [Fail] Podman manifest [It] authenticated push in both Fedora and Ubuntu.

@lsm5
Member Author

lsm5 commented Jun 6, 2022

@lsm5 looking better now. The next-worst problem seems to be the [Fail] Podman manifest [It] authenticated push in both Fedora and Ubuntu.

Rerunning them now. Could it be because the registry was down or something?

@lsm5
Member Author

lsm5 commented Jun 6, 2022

[+0673s] Error: pod 11cad2f5b91a006055ed097c6527926a5e061b4829380627e6443a79b4659be2 not found in database: no such pod
[+0673s] Error: pod 02c775b342c144cd0d780e6cec8c160f466fd633f0dc7199c435432891136f96 not found in database: no such pod
[+0673s] Error: pod 267e1676159880afb3444c97398a1f6cbb3d04e5b23d56955037c3afd21446cc not found in database: no such pod

@cevich
Member

cevich commented Jun 6, 2022

Could it be because registry was down or something?

IIRC this is a locally run registry container. But yeah, it could be a flake, worth checking open-issues if it reproduces.

@cevich
Member

cevich commented Jun 6, 2022

not found in database: no such pod

Yeah, that's concerning.

@cevich cevich force-pushed the ubuntu-2204-lts-cirrus branch from 43745a2 to d957460 Compare June 7, 2022 21:13
@cevich
Member

cevich commented Jun 7, 2022

force-push: Rebased on main.

@edsantiago
Member

I think there's something broken in networking. I did a re-run with console, and while tests were running:

# hack/podman-registry start
PODMAN_REGISTRY_IMAGE="quay.io/libpod/registry:2.6"
PODMAN_REGISTRY_PORT="5055"
PODMAN_REGISTRY_USER="user0cDX"
PODMAN_REGISTRY_PASS="keyT0ZSieGrqCy3"
# telnet localhost 5055
Trying 127.0.0.1...
telnet: Unable to connect to remote host: No route to host

lsof on the process shows it listening:

conmon  178016 root    5u     IPv4             706971      0t0     TCP *:5055 (LISTEN)

There is a bad interface on the system:

# ip a
...
3: cni-podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 12:d0:4c:e4:e6:c3 brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 brd 10.88.255.255 scope global cni-podman0
       valid_lft forever preferred_lft forever
    inet6 fe80::10d0:4cff:fee4:e6c3/64 scope link 
       valid_lft forever preferred_lft forever

Deleting it (via ip link del) makes the port accessible again. So, something seems to be creating the old-style CNI interface somehow. But what?
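
As a side note, a minimal Go check (an illustrative sketch, not part of the CI scripts) that flags a leftover cni-podman0 bridge; something like this could run during test setup to catch the stale interface before it breaks routing.

package main

import (
    "fmt"
    "net"
    "os"
)

func main() {
    ifaces, err := net.Interfaces()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, iface := range ifaces {
        // cni-podman0 is the default CNI bridge name; its presence on a
        // netavark-configured host suggests something ran the wrong backend.
        if iface.Name == "cni-podman0" {
            fmt.Println("stale CNI bridge found:", iface.Name)
            os.Exit(1)
        }
    }
    fmt.Println("no stale CNI bridge")
}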

@Luap99 here's the iptables output you requested:

# iptables -nvL -t nat
Chain PREROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
   19  9639 CNI-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
    1    32 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         

Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
 1654  107K CNI-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL
  133  8891 NETAVARK-HOSTPORT-DNAT  all  --  *      *       0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type LOCAL

Chain POSTROUTING (policy ACCEPT 0 packets, 0 bytes)
 pkts bytes target     prot opt in     out     source               destination         
  488 30887 NETAVARK-HOSTPORT-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0           
 3042  192K CNI-HOSTPORT-MASQ  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* CNI portfwd requiring masquerade */
    0     0 CNI-a8255fdfc238d8dd191b91e2  all  --  *      *       10.88.0.176          0.0.0.0/0            /* name: "podman" id: "0efdabc90252e4ecb9d07ac8dfb8dd0289c6fbb8b4420e1efcf7a3688913690a" */
    0     0 CNI-e060fd9781bbc14435e13cb1  all  --  *      *       10.88.2.67           0.0.0.0/0            /* name: "podman" id: "8d20cf97764f17ec4fe5bc64ab301a043803ea175260465085d0defa9685c20e" */
    0     0 CNI-d2034e09d0207d90cb0b3c73  all  --  *      *       10.88.73.146         0.0.0.0/0            /* name: "podman" id: "0af3ee4d0991539a49ffbd4bfd1e7b512a5a3e981ed63880c486a60f0230107c" */
    0     0 CNI-e9768ba8e3d20a3ac43ebc4d  all  --  *      *       10.88.73.147         0.0.0.0/0            /* name: "podman" id: "82f1c45e16095ed236ada94be140f8dcd82ea0d6e92fadc82412bb732e08b369" */
   13   828 NETAVARK-1D8721804F16F  all  --  *      *       10.88.0.0/16         0.0.0.0/0           
    0     0 CNI-0b1cafeb9628ee0b55490e13  all  --  *      *       10.88.0.16           0.0.0.0/0            /* name: "podman" id: "edfe69e088678a6d08c47f634caf413dc6b874b33e644d2b20cf6559d43a3a58" */
    0     0 CNI-8b675d9317655824ae30fa13  all  --  *      *       10.88.35.217         0.0.0.0/0            /* name: "podman" id: "1969dd24f05c18fb26aa82daa25045dff36879132efb2d495cf2992d80c70b4f" */
    0     0 CNI-21bee5fab41f61342381d410  all  --  *      *       10.88.35.232         0.0.0.0/0            /* name: "podman" id: "14f96381d6958166bea5ef1d81c70570b07ff4505c30894f19afc0d38b82592d" */
    0     0 CNI-4e1cb35a555b72d297edfd63  all  --  *      *       10.88.35.233         0.0.0.0/0            /* name: "podman" id: "469afa180258626ce228bd42c83fb53b10d34ec5e91b72d9bce3542072e3463d" */

Chain CNI-0b1cafeb9628ee0b55490e13 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "edfe69e088678a6d08c47f634caf413dc6b874b33e644d2b20cf6559d43a3a58" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "edfe69e088678a6d08c47f634caf413dc6b874b33e644d2b20cf6559d43a3a58" */

Chain CNI-21bee5fab41f61342381d410 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "14f96381d6958166bea5ef1d81c70570b07ff4505c30894f19afc0d38b82592d" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "14f96381d6958166bea5ef1d81c70570b07ff4505c30894f19afc0d38b82592d" */

Chain CNI-4e1cb35a555b72d297edfd63 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "469afa180258626ce228bd42c83fb53b10d34ec5e91b72d9bce3542072e3463d" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "469afa180258626ce228bd42c83fb53b10d34ec5e91b72d9bce3542072e3463d" */

Chain CNI-8b675d9317655824ae30fa13 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "1969dd24f05c18fb26aa82daa25045dff36879132efb2d495cf2992d80c70b4f" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "1969dd24f05c18fb26aa82daa25045dff36879132efb2d495cf2992d80c70b4f" */

Chain CNI-HOSTPORT-DNAT (2 references)
 pkts bytes target     prot opt in     out     source               destination         

Chain CNI-HOSTPORT-MASQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  119  7140 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            mark match 0x2000/0x2000

Chain CNI-HOSTPORT-SETMARK (0 references)
 pkts bytes target     prot opt in     out     source               destination         
  296 17760 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* CNI portfwd masquerade mark */ MARK or 0x2000

Chain CNI-a8255fdfc238d8dd191b91e2 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "0efdabc90252e4ecb9d07ac8dfb8dd0289c6fbb8b4420e1efcf7a3688913690a" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "0efdabc90252e4ecb9d07ac8dfb8dd0289c6fbb8b4420e1efcf7a3688913690a" */

Chain CNI-d2034e09d0207d90cb0b3c73 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "0af3ee4d0991539a49ffbd4bfd1e7b512a5a3e981ed63880c486a60f0230107c" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "0af3ee4d0991539a49ffbd4bfd1e7b512a5a3e981ed63880c486a60f0230107c" */

Chain CNI-e060fd9781bbc14435e13cb1 (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "8d20cf97764f17ec4fe5bc64ab301a043803ea175260465085d0defa9685c20e" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "8d20cf97764f17ec4fe5bc64ab301a043803ea175260465085d0defa9685c20e" */

Chain CNI-e9768ba8e3d20a3ac43ebc4d (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16         /* name: "podman" id: "82f1c45e16095ed236ada94be140f8dcd82ea0d6e92fadc82412bb732e08b369" */
    0     0 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4          /* name: "podman" id: "82f1c45e16095ed236ada94be140f8dcd82ea0d6e92fadc82412bb732e08b369" */

Chain NETAVARK-1D8721804F16F (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    1    60 ACCEPT     all  --  *      *       0.0.0.0/0            10.88.0.0/16        
   12   768 MASQUERADE  all  --  *      *       0.0.0.0/0           !224.0.0.0/4         

Chain NETAVARK-DN-1D8721804F16F (1 references)
 pkts bytes target     prot opt in     out     source               destination         
    0     0 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       10.88.0.0/16         0.0.0.0/0            tcp dpt:5055
    3   180 NETAVARK-HOSTPORT-SETMARK  tcp  --  *      *       127.0.0.1            0.0.0.0/0            tcp dpt:5055
    3   180 DNAT       tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5055 to:10.88.0.3:5000

Chain NETAVARK-HOSTPORT-DNAT (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    3   180 NETAVARK-DN-1D8721804F16F  tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp dpt:5055 /* dnat name: podman id: 5a0f24868116ef2bd39df01e9065b1a8775584491afd2cd34cf72bcaa0134864 */

Chain NETAVARK-HOSTPORT-MASQ (1 references)
 pkts bytes target     prot opt in     out     source               destination         
  180 10800 MASQUERADE  all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* netavark portfw masq mark */ mark match 0x2000/0x2000

Chain NETAVARK-HOSTPORT-SETMARK (2 references)
 pkts bytes target     prot opt in     out     source               destination         
    3   180 MARK       all  --  *      *       0.0.0.0/0            0.0.0.0/0            MARK or 0x2000

@Luap99
Member

Luap99 commented Jul 5, 2022

That clearly shows mixed use of CNI/netavark, which is not supported!
It looks to me like something is calling the system's podman, which uses CNI, and not ./bin/podman. Or they do not use the correct --network-backend option that is set by the e2e tests.
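
One hedged way to confirm which backend a given binary is using would be to query both the system podman and ./bin/podman and compare, as in the Go sketch below. This assumes a podman 4.x that reports networkBackend under host in podman info --format json; the field name is an assumption here, not something this thread confirms.

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

// backendOf runs "<bin> info --format json" and reports the network backend
// that binary claims to use.
func backendOf(bin string) (string, error) {
    out, err := exec.Command(bin, "info", "--format", "json").Output()
    if err != nil {
        return "", err
    }
    var info struct {
        Host struct {
            NetworkBackend string `json:"networkBackend"`
        } `json:"host"`
    }
    if err := json.Unmarshal(out, &info); err != nil {
        return "", err
    }
    return info.Host.NetworkBackend, nil
}

func main() {
    for _, bin := range []string{"podman", "./bin/podman"} {
        backend, err := backendOf(bin)
        if err != nil {
            fmt.Printf("%s: error: %v\n", bin, err)
            continue
        }
        fmt.Printf("%s: network backend = %s\n", bin, backend)
    }
}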

@edsantiago
Member

edsantiago commented Jul 5, 2022

Or they do not use the correct --network-backend option that is set by the e2e tests.

Thank you, that's it.

manifest_test.go is the only test that uses hack/podman-registry-go:

registryOptions := &podmanRegistry.Options{
    Image: "docker-archive:" + imageTarPath(registry),
}
registry, err := podmanRegistry.StartWithOptions(registryOptions)

(side note: the double use of registry here is horrible: it's both a constant and a variable. Reminder to self, fix that; update: submitted #14834 to fix).

Anyhow, hack/podman-registry-go invokes hack/podman-registry, which runs bin/podman but has no way of knowing the magic options used in e2e tests.

SOLUTION: get rid of the hack/podman-registry-go stuff. Just run the registry however all the other tests do it.

@edsantiago
Member

Submitted #14845 to address the registry startup issue. I chose not to heed my "however all the other tests do it" declaration because that way lies madness.

edsantiago added a commit to edsantiago/libpod that referenced this pull request Jul 7, 2022
manifest_test:authenticated_push() is the final test left to
fix before merging containers#14397. The reason it's failing _seems_ to be
that podman is running with a mix of netavark and CNI, and
that _seems_ to be because this test invokes hack/podman-registry
which invokes plain podman without whatever options used in e2e.

Starting a registry directly from the test is insane: there is
no reusable code for doing that (see login_logout_test.go and
push_test.go. Yeesh.)

Solution: set $PODMAN, by inspecting the podmanTest object
which includes both a path and a list of options. podman-registry
will invoke that. (It will also override --root and --runroot.
This is the desired behavior).

Also: add cleanup. If auth-push test fails, stop the registry.

Also: add a sanity check to podman-registry script, have it
wait for the registry port to activate. Die if it doesn't.
That could've saved us a nice bit of debugging time.

Signed-off-by: Ed Santiago <[email protected]>
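
A rough Go sketch of the $PODMAN idea described in that commit message; it is illustrative only, not the actual #14845 patch, and the parameter names are assumptions. The idea: compose the e2e binary path and its network-backend option into one command string and export it so hack/podman-registry invokes that instead of plain podman.

package main

import (
    "fmt"
    "os"
    "strings"
)

// exportPodmanForRegistry builds a $PODMAN value from the test harness'
// binary path plus the --network-backend option the e2e suite relies on,
// so hack/podman-registry runs the same configuration as the tests.
// Per the commit message above, the registry script overrides --root and
// --runroot itself, so only the backend option is carried over here.
func exportPodmanForRegistry(binary, networkBackend string) error {
    cmd := strings.Join([]string{binary, "--network-backend", networkBackend}, " ")
    return os.Setenv("PODMAN", cmd)
}

func main() {
    // Hypothetical values; in the real tests these would come from the
    // podmanTest object mentioned in the commit message.
    if err := exportPodmanForRegistry("./bin/podman", "netavark"); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
    fmt.Println("PODMAN =", os.Getenv("PODMAN"))
}
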
@openshift-ci
Contributor

openshift-ci bot commented Jul 9, 2022

@lsm5: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci openshift-ci bot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 9, 2022
@cevich
Member

cevich commented Jul 11, 2022

because that way lies madness.

Lol, thanks for checking into this Ed and Paul. FYI: I'm pretty sure during setup we make install so the system podman should be the same as ./bin/podman. However, I cannot predict what evil lurks in /etc/containers/* or the runtime database, so it's easily possible something causes it to revert to CNI.

@edsantiago
Member

edsantiago commented Jul 11, 2022

during setup we make install

That doesn't matter. The short version of the problem is: e2e tests make excessive use of podman command line args (podman --this --that --oh --my --gosh --so --many --options), hence, anything that invokes plain podman (like the registry script) is gonna screw up the system.

But it's moot, because my fixer PR is merged, so, @lsm5, please rebase & repush, this should pass now.

@cevich
Member

cevich commented Jul 11, 2022

hence, anything that invokes plain podman (like the registry script) is gonna screw up the system.

Well, screw up any future e2e runs, yes, maybe. Though, --despite --so --many --options, all the e2e tests still share some resources: disk space, lots of networking aspects, firewall rules, etc. So we'll never fully be able to isolate them. In any case, at least for e2e, maybe it would be useful to link /usr/bin/podman to /bin/false instead of installing, to help make this class of problem more obvious if it ever creeps back in later. Just a random idea, and an easy change to implement.

@edsantiago
Member

Well screw up any future e2e runs

No, I mean, screw up the e2e test run right smack in the middle of the run. It's kind of complicated, but here's the skeleton:

  • e2e tests run, blah blah, yeah, going fine
  • one of the e2e tests invokes a script that runs plain podman, not podman --with --all --the --options
  • ker-blammo. Everything gets destroyed because plain podman switches the system from netavark to CNI and all goes to hell

Does that make sense?

@cevich
Member

cevich commented Jul 12, 2022

Does that make sense?

Oh I see, gotcha, okay thanks.

@cevich
Member

cevich commented Jul 14, 2022

I'm going to close this PR in favor of: #14719 because:

  1. Lokesh is busy with other things
  2. Having multiple image update PRs open is confusing
  3. We are in a holding-pattern for new criu and runc packages getting into new images anyway

@cevich cevich closed this Jul 14, 2022
@edsantiago
Member

Aw, phooey, @cevich, can I ask you to reconsider? All this needs is a rebase & push, and life will (should) be happy. We can get Ubuntu 22 right now, all we need is this PR to merge. The criu issue does not affect this PR.

@cevich
Member

cevich commented Jul 14, 2022

I think all we're losing is the PR comments (which may actually be valuable). My #14719 has all the changes here + the fix for bats (slightly newer images) + my attention; @lsm5 asked to be let off the hook in a "help!" comment above.

Otherwise I'm not strongly opposed to re-opening this and re-running the tests, just lazy and don't want to be overwhelmed with too many image-update PRs in-flight.

@lsm5 lsm5 deleted the ubuntu-2204-lts-cirrus branch March 16, 2023 11:25
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 6, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 6, 2023