Produce x86_64 & ARM64 fedora container images #381
Conversation
Force-pushed 30393ed to d417fd4
Force-pushed 2b88b98 to 4f9f677
Force-pushed 4f9f677 to ad726ea
Cirrus CI build successful. Found built image names and IDs:
Hrmmm, it looks like a bunch of tasks are still waiting for the container task to finish. We don't want that, since the new multi-arch fedora container takes >1hr to build.
Force-pushed ad726ea to 0b53866
Force-push: Fixed up the commit message and simplified the diff slightly. I'm not sure why many/most of the VM build tasks aren't running in parallel w/ the container-based tasks. Maybe it's some Cirrus-CI quota/restriction.
Can you give context on what it is you noticed? All I saw was green CI.
The commit message needed a minor tweak. I thought I could simplify the YAML and just make everything use the &IBI_VM alias, but I forgot the nested virt stuff is important too -sigh-. |
More detail: I'm afraid it "accidentally" passed. This test job failure makes me think we simply got lucky and were assigned a machine that supports nested-virt (which is required for the base image builds). I want to try to encode that requirement into the &IBI_VM alias so it's guaranteed.
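For illustration, encoding a hard requirement into a reusable alias could look something like the following `.cirrus.yml` fragment. This is a hypothetical sketch: the key names, image name, and machine type here are assumptions, not the repo's actual values.

```yaml
# Hypothetical sketch: bake the nested-virt requirement into the alias
# so every task that reuses it is guaranteed a capable machine.
ibi_vm: &IBI_VM
  gce_instance:
    image_name: "image-builder-image"     # assumed builder VM image name
    type: "n2-standard-4"                 # assumed machine type
    enable_nested_virtualization: true    # the requirement being encoded

base_images_task:
  <<: *IBI_VM   # inherits the nested-virt guarantee
```

With the requirement in the alias itself, a lucky machine assignment can no longer mask a missing capability.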
Force-pushed 0b53866 to 7c226ef
There, I think this should be better. The last thing I want is to add a flake into these builds 😕 |
Re:
I was looking at the task-scheduling sequence (the green bars on the right). It seemed like a bunch of tasks were waiting for the new, slow container build. It may have just been a fluke though; nothing in the dependency tree suggested the VM builds should block.
That's the "nested-virt isn't supported" problem 😞
Hrmmm, okay, taking a step back. Let me just go back to the CI-green commit, then only fix the commit message, and add in the
At the time of this commit, podman's Makefile has a target to allow validating code changes locally (`validatepr`). However, it's based on a bespoke image completely unassociated with the image used in CI. This can easily lead to a situation where validation passes in the local environment but fails in CI.

Support the podman `validatepr` target's use of `quay.io/libpod/fedora_podman:latest` images by performing a manifest-list build that includes `arm64` (a.k.a. `aarch64`). The trade-off here is image build time, since emulation is extremely slow (over an hour). Therefore, the `container_images` CI task has also been removed as a dependency from the `base_images` CI task, allowing them to run in parallel.

Note: This will not impact pulling the image, since the client always pulls only the layers necessary for the indicated architecture.

Signed-off-by: Chris Evich <[email protected]>
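The manifest-list build the message describes might be invoked roughly as below. This is a hedged sketch, not the repo's actual build script: the image name comes from the commit message, the platform list and build context are assumptions, and the podman invocations are shown as comments since they need a registry login and qemu user-mode emulation to actually run.

```shell
# Sketch of a multi-arch manifest-list build (illustrative only).
IMAGE="quay.io/libpod/fedora_podman:latest"
PLATFORMS="linux/amd64,linux/arm64"

# The real build would run something like (requires podman + qemu-user-static;
# the arm64 half runs under emulation, hence the >1hr build time):
#   podman build --platform "$PLATFORMS" --manifest "$IMAGE" .
#   podman manifest push --all "$IMAGE" "docker://$IMAGE"

echo "build $IMAGE for $PLATFORMS"
```

Because the result is a manifest list, clients pulling `$IMAGE` fetch only the layers for their own architecture.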
Force-pushed 7c226ef to 0b13b48
Confirmed, that display was misleading. VM builds are running concurrently with the container builds, as intended.
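For context, the kind of `.cirrus.yml` change that lets those task groups run in parallel is roughly the following. This is a hypothetical fragment: the two task names come from the commit message, everything else is assumed.

```yaml
# Before (hypothetical): base_images waited on the slow container build.
# base_images_task:
#   depends_on:
#     - container_images

# After: the dependency is dropped, so both task groups start immediately.
base_images_task:
  # no depends_on entry for container_images; builds proceed in parallel
  script: ./build_base_images.sh   # assumed script name
```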
I'm going to abandon this; the emulated build is just too slow. The overall CI VM image build process is complex and lengthy enough. Nobody wants more complexity and an even slower (1.5-hour) container build on top. If somebody wants to take this up in the future, I'd suggest doing a native arm64 build, then (somehow) combining the two into a manifest list after the fact. That's also complex, but at least it'll run quickly (like 15m, probably).
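The suggested alternative (two native per-arch builds, combined into one manifest list afterwards) could be sketched like this. The per-arch tag names are hypothetical; only the `podman manifest` subcommands are real, and they are shown as comments since they need the per-arch images to already exist in a registry.

```shell
# Sketch: combine two natively-built per-arch images into one manifest list.
LIST="quay.io/libpod/fedora_podman:latest"
AMD64="quay.io/libpod/fedora_podman:latest-amd64"   # hypothetical tag
ARM64="quay.io/libpod/fedora_podman:latest-arm64"   # hypothetical tag

# The combining steps would look like:
#   podman manifest create "$LIST"
#   podman manifest add "$LIST" "docker://$AMD64"
#   podman manifest add "$LIST" "docker://$ARM64"
#   podman manifest push --all "$LIST" "docker://$LIST"

echo "combine $AMD64 + $ARM64 -> $LIST"
```

Since each arch builds natively, neither build pays the emulation penalty; only the cheap manifest-combining step is extra.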
Depends on: #380
Ref: Podman's validatepr Makefile target