Build multi-arch rootless-cni-infra on GitHub Actions #8415
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: AkihiroSuda. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files.
Force-pushed from be0b7fe to b3ab81d.
On second thought, this might not be true for multi-arch?
The image is built as a multi-arch OCI tar.gz and uploaded as a GitHub Artifact. The image is NOT pushed to quay automatically. A maintainer can use `skopeo` to push the artifact archive to quay. Signed-off-by: Akihiro Suda <[email protected]>
Force-pushed from b3ab81d to fc34be7.
Can Red Hat configure GitHub Actions to allow pushing to quay?
@barthy1 would you mind taking a peek at this?
And FWIW, here's the PR that @barthy1 put together for Skopeo. I haven't looked at this PR to see whether it uses a similar technique or not, but I would like the resulting images to be similar. containers/skopeo#1066
Hi @AkihiroSuda, have you considered using GitHub Actions to push the multi-arch image to quay.io?
That would be better. Needs assistance from Red Hat for adding a quay token to GitHub Actions.
@TomSweeneyRedHat in this PR the approach is GitHub Actions + emulation (QEMU). If the emulation approach works for all the architectures listed in this PR, I guess it's the best way to build the multi-arch image. To push to quay.io, registry credentials are needed. The only disadvantage I can see for future extensions: if some podman image needs to be built on native hardware (as it was for Skopeo), then you will need to support 2 different build environments, GitHub Actions and Travis.
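For context, the emulation approach usually amounts to registering QEMU binfmt handlers and creating a buildx builder before the actual build. A minimal sketch, assuming the commonly used `tonistiigi/binfmt` helper image (not something named in this thread):

```shell
# Register QEMU binfmt handlers so foreign-arch binaries can execute
# transparently (tonistiigi/binfmt is a commonly used helper image,
# assumed here for illustration).
docker run --privileged --rm tonistiigi/binfmt --install all

# Create and select a buildx builder capable of multi-platform builds.
docker buildx create --name multiarch --use
```

With the handlers registered, a single `docker buildx build --platform …` invocation can produce images for all listed architectures on one amd64 runner.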
Can we get a separate quay account for GitHub Actions? Also, I think the account should be maintained by the maintainers rather than me, so that it can potentially be reused for other images as well in the future.
This sounds like it's going to require collaboration from @cevich to get the GitHub Actions piece working, and @TomSweeneyRedHat, who maintains the Quay account last I checked.
@barthy1 @AkihiroSuda I'm trying to get my head around this. So this looks to produce the same images that @barthy1 did in Skopeo in the attached PR, but they're in a tar file that needs to be untarred and then the images pushed to quay.io via a GitHub action? Currently, in Skopeo there are v-1.2.0-* tags where the '*' is amd64, ppc64le, and s390x. It looks like this would create arm and arm64 variants too. I don't see a version in the change; would this change add the Podman version number too, or is that work for the GitHub action? We don't currently have Travis here in this repo, but if we did, would it be better to build these the way @barthy1 did using Travis? I'm sure @cevich would be happy to add the GitHub actions, but I believe he's enjoying PTO at the moment and probably won't be back until early next week. @AkihiroSuda and @barthy1, ty both very much for all your help on this in both projects.
It is built as a tar file just because I don't have the registry token in CI to push the image to quay |
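For reference, pushing such an artifact by hand could look like the sketch below. The archive name matches this PR, but the destination repository is an assumption, and whether `oci-archive` handles a multi-arch index may depend on the skopeo version (the very doubt raised earlier in this thread):

```shell
# Hypothetical manual push of the CI artifact; the destination
# repository name is an assumption, not taken from this PR.
gunzip rootless-cni-infra.tar.gz

# --all copies every architecture in the image index,
# not just the one matching the host.
skopeo copy --all \
  oci-archive:rootless-cni-infra.tar \
  docker://quay.io/libpod/rootless-cni-infra:latest
```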
@AkihiroSuda thanks for the info. @cevich is away until next week, let's touch base on this again then.
I'm back, and happy to create/add the needed credentials. I'll take a peek at this PR later today.
I took a look, and am concerned on several fronts.
The second point is the real kicker because, while I'm not a big fan of Travis-CI... the approach used by @barthy1 for skopeo is free from dependence on intermediate tooling and simple enough that lay-people can understand it from start to finish. Since native hardware is utilized, it only requires a few generic docker operations (which could easily be replicated with future podman) and a simple(ish) Makefile. That all said, I'm sensitive to and appreciate the amount of work that's already been committed here. Clearly it works as designed, and just needs some secrets added (which we're happy to create/provide). On the other hand, and mindful of the sunk-cost fallacy, I wonder if the greater good might be best served by re-implementing this to more closely resemble the skopeo setup. I will support the decision of the group.
What if we use `buildctl` directly instead of the docker buildx CLI?
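A rough equivalent using `buildctl` (BuildKit's own CLI) might look like the sketch below; the flags follow BuildKit's dockerfile frontend, while the platform list and output path are assumptions carried over from the workflow in this PR:

```shell
# Sketch: drive BuildKit directly rather than going through docker buildx.
# Requires a running buildkitd with QEMU emulation available.
buildctl build \
  --frontend dockerfile.v0 \
  --local context=contrib/rootless-cni-infra \
  --local dockerfile=contrib/rootless-cni-infra \
  --opt filename=Containerfile \
  --opt platform=linux/amd64,linux/arm64,linux/s390x,linux/ppc64le \
  --output type=oci,dest=rootless-cni-infra.tar
```

This trades the docker/buildx dependency for a direct dependency on buildkitd, which was part of the tooling-hegemony concern discussed above.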
That's (also) not something I'm familiar with. Poking around on Google, it seems to be the nuts-and-bolts behind BuildKit? I thought about this situation last night, and believe my main concern is long-term maintainability. My other argument about tooling/dependency hegemony seems a lesser concern. The root of this is that there's a HUGE disparity between the number of people working on podman and those working on CI/CD infra. and tooling. I'm not above relying on the community to help with maintenance; that's a positive force in my book. Rather, we need to accept the historically likely possibility that if/when these workflows break for any reason, and nobody is around with the skills to fix them quickly, they may simply be disabled or deleted 😞 One idea that could help: document the heck out of it. If a future podman developer (not an automation guru) can read a doc and be able to upgrade/update/replace components of the image-build workflow without trawling around the internet much, that would be an acceptable compromise. Perhaps that would be less work than re-implementing this to use the Travis + native HW approach used by skopeo? What are @rhatdan and @TomSweeneyRedHat and @mheon opinions here? Am I being overly paranoid, and should I go back into my little corner to snuggle my bash scripts?
```
    --platform amd64,arm,arm64,s390x,ppc64le \
    -f contrib/rootless-cni-infra/Containerfile \
    contrib/rootless-cni-infra
gzip -9 rootless-cni-infra.tar
```
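The fragment above is the tail of a buildx invocation; for context, a complete command might look like the following, where the `--output` flag (needed to produce an OCI tarball locally rather than pushing) is an assumption:

```shell
# Sketch of the full build step; the --output destination is assumed.
docker buildx build \
  --output type=oci,dest=rootless-cni-infra.tar \
  --platform amd64,arm,arm64,s390x,ppc64le \
  -f contrib/rootless-cni-infra/Containerfile \
  contrib/rootless-cni-infra
gzip -9 rootless-cni-infra.tar
```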
I'm not sure the compression is worthwhile here; actions/upload-artifact will create a compressed zip-file automatically.
```
    gzip -9 rootless-cni-infra.tar
- name: "Print SHA256SUM of rootless-cni-infra.tar.gz"
  run: |
    sha256sum rootless-cni-infra.tar.gz
```
Perhaps this is my own ignorance: what is the purpose of calculating/printing the sha256sum of the tarball? I don't know where …
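One common purpose is integrity checking: anyone who downloads the artifact can recompute the digest and compare it against the one printed in the CI log. A small self-contained illustration (the file below is a stand-in, not the real artifact):

```shell
# Stand-in artifact; in practice this would be rootless-cni-infra.tar.gz.
printf 'example payload' > artifact.tar.gz

# What the CI step does: print the digest into the build log.
sha256sum artifact.tar.gz | tee SHA256SUM

# What a downloader does: verify the local copy against the logged digest.
sha256sum -c SHA256SUM
```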
```
    path: rootless-cni-infra.tar.gz
- name: "Notice"
  run: |
    echo "The image is NOT pushed to quay. To push the image to quay, run skopeo manually with the artifact archive."
```
Were you thinking of implementing the push to quay, or is that something I could/should do as a followup PR?
I'm okay either way, but since I need to create the accounts and add the secrets, it might be easier if done as a followup.
Another option I'm thinking of is embedding the Containerfile into the podman binary and letting podman build it on first run. This should work on any arch, and we would no longer need to maintain the quay image. Yet another option is to build the image in the Makefile (using the podman/buildah present on the build host) and embed its OCI archive into the podman binary (or just place the OCI archive as /usr/share/podman/rootless-cni-infra.tgz). This complicates the Makefile but simplifies runtime. Also, this option is beneficial for internetless deployment. RFC.
I like this - it resolves the distribution problem completely.
Sorry, which is "this"?
Your latest suggestion, building the image automatically on first run.
Oh! Wasn't aware of that. Yes, that negates my buildx + docker conflict-of-interest argument completely.
I like this idea also, as we're all familiar with Makefiles and it sounds like only basic tooling would be required. It also means writing less documentation 😁
Based on my experience working the RHEL-support front-lines: that benefit should not be understated. It's a very critical/important benefit for a great number of often non-advertised uses. It's also one less thing to flake/break should there be a networking failure at an inopportune time.
Maybe I'm missing the point of the embedded Containerfile in the Podman binary, but it sounds like the end-user would have to install Podman first, then know how to build the image for their arch using the Podman binary? From what I've seen, most people just want to grab an image from some "official" location on quay.io and go from there. I love @barthy1's work on Skopeo, but we no longer have Travis here in Buildah or Podman. I'm not sure how easily that work would translate here without reviving Travis, and I'd rather not do that. I really don't have a leaning towards any of the approaches discussed here, as long as it's documented so even I can understand it, and we end up with an image for each of the arches that @AkihiroSuda created in this PR, automatically pushed out to quay.io. Then whatever we do here, I'd like to also do on the Buildah project. If it makes sense to do the same kind of work on the Skopeo project, I'd be OK with replacing the work @barthy1 has provided there, but I'd prefer just adding the one or two arches that are here in this work but not yet in Skopeo. Hopefully my thoughts here are somewhat on track; I've been completely buried in emails/PRs this week and it's all blurring at this point.
@TomSweeneyRedHat you're not far off. Yes I'm not too happy about reviving Travis, but it is possible.
No... as I understand it, we would be embedding the multi-arch image (the tarball) into the release. Nothing gets built at package install time, only perhaps …
Yes, and this is the tail-end of my multi-dependency + complexity fear: users don't want some months-old image, they assume it's "recent". But if the automation is broken and nobody knows how to fix it, nothing newer will get pushed to quay 😖 It becomes 100% manual building and pushing at that point 😞
I'm going to bring this up for discussion with the team at scrum tomorrow, because we really need a plan here. We are preparing to ship a release of Podman including support for this in RHEL, which means we need a plan for distributing this image via RHEL channels, so this needs to be solved sooner rather than later.
We deferred some of the discussion to Monday, but from the options we looked at, the most promising seems to be moving builds to OBS and then pushing to Quay from there - OBS possesses builders for all major architectures, the capability to autobuild on new commits (though this may require us to move the Containerfile over to its own repository so builds are only triggered by changes to the image itself), and we're already very familiar with it as we use it to distribute the latest version of Podman for non-Red Hat OSes.
My current opinion is to just place the OCI archive at /usr/share/podman/rootless-cni-infra.tgz. It doesn't need a multi-arch builder, and doesn't need Internet access either.
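Loading such a pre-shipped archive at first use could be as simple as the sketch below; `podman load` accepts compressed image archives, though the exact path and the resulting image name here are assumptions:

```shell
# Sketch: import the archive shipped with the package on first use.
# The path matches the proposal above; the image tag is assumed.
podman load -i /usr/share/podman/rootless-cni-infra.tgz

# Afterwards the image can be run like any local image.
podman run --rm rootless-cni-infra echo ready
```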
I like that idea.
After some discussion, I believe the best solution would be to build a container image off of the host OS. We should not need an image at all - just use the host's CNI plugins and friends. (We should do this for the pause container as well.) Someone needs to work out the logistics, but theoretically we should be able to do something like `podman run --rootfs / echo hello`. Doing this from rootless mode would allow us to do everything your container image requires, without needing to download or load an image, and without using up all the extra disk space. (Or adding the image to the podman package.)
I agree that --rootfs / is the correct solution, but the implementation may take a long time. I think we need a short-term solution as well (i.e. a local OCI archive under /usr/share/podman).
Yes, I think we should ship a Containerfile that could be used to build the image, and allow pulling from a container registry. But long term, we need to add support for OS-based container images.
This might be easy.
Opened a proposal for imageless rootless-cni-infra: #8709
Very nice.
The image is built as a multi-arch OCI tar.gz and uploaded as a GitHub Artifact.
The image is NOT pushed to quay automatically.
A maintainer can use `skopeo` (EDIT: might not be true for multi-arch...) to push the artifact archive to quay.

Relates to #8411