Support an OCI Image Builder other than Docker #564
It is possible to add support for
However, for other builders, such as LXD, a more generalized interface could be used. Canonical maintains https://snapcraft.io/multipass as a generalized interface for running containers and VMs.
The pack CLI is intended to be a tool for running Cloud Native Buildpacks builds on a local workstation that doesn't natively support containers (often Windows or macOS). While it seems reasonable to support other local container runtimes besides Docker, CI platforms that support running container images are probably better off running the lifecycle directly, without needing a nested container runtime. (This doesn't require any privileges or capabilities.) Here's a complex example of this for Tekton:

Relevantly, we've recently introduced a single lifecycle command that threads all of those steps together:

When that functionality is documented, it should make running CNB builds on CI platforms much easier.
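For illustration, a minimal sketch of what that direct-lifecycle approach might look like on a CI platform that can run a builder image as a job container. The registry and app names are placeholders; the `creator` binary path follows the convention shown later in this thread:

```shell
# Run inside a CNB builder image as an unprivileged CI job container;
# no Docker daemon, privileges, or capabilities are required.
# <registry>/<app>:latest is a placeholder target image reference.
/cnb/lifecycle/creator -app=. <registry>/<app>:latest
```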
Kpack uses the lifecycle directly, and doesn't depend on a Docker daemon or expose a Docker socket. Builds run in unprivileged containers that are fully isolated from registry credentials.
Another way of describing this is: the lifecycle is comparable to kaniko or other unprivileged image building tools. The pack CLI is glue code that makes it easy to use the lifecycle with the Docker daemon. We could expand the functionality of the pack CLI so that it acts as glue code for other container runtimes, but that glue code is only necessary when containers are not already natively accessible. Maybe that's a good idea, but I'd like to hear concrete use cases first.
@sclevine building random projects on a local workstation without containerization carries the risk of killing the system or build environment. If
This is not what I'm suggesting (or permitted by the CNB specification). Running the lifecycle directly is only supported on CI platforms that support running container images (such as a CNB builder image with the lifecycle binary). I'm suggesting that supporting another container runtime would only benefit desktop Linux users and users of non-container CI systems. That doesn't match the requested use case:
@sclevine sorry, but your assumption that supporting another container runtime would only benefit users of non-container CI systems contains a logical error, in my view. I also don't see the connection to desktop Linux, which is about having GNOME or another window manager. If you are saying that, as a DevOps engineer, I should not be able to use buildpacks on my own Linux machine, and may only do so in a self-hosted or vendor cloud, then I disagree. The system should be simple enough to troubleshoot in parts, gradually. The last part of the requested use case mentions my system explicitly.
There are currently two ways to use the tooling provided by the Cloud Native Buildpacks project:

1. The pack CLI, which drives builds through a Docker daemon.
2. The lifecycle, run directly on a platform that natively supports containers.
While I imagine that we would welcome contributions to the pack CLI to add support for alternative container runtimes (like podman), those alternative container runtimes aren't easy to use on macOS or Windows. Additionally, platforms that support running container images natively (like k8s) wouldn't benefit from it, because they can already do what pack does (run builder images). Running pack inside of a container (which creates nested containers) is unnecessary and decreases performance. The lifecycle can run directly in that container instead. Therefore, as far as I can tell, only Linux users who don't want to build using Docker or K8s would benefit from support for additional runtimes in the pack CLI. I'm not opposed to it, but I'm also not about to implement it myself 😄
Yes, I am interested in the pack CLI (option 1) allowing more secure alternatives than Docker. With
While I can't speak for the other core team members, I imagine that we would welcome contributions to make the pack CLI compatible with Nautilus (or, as I mentioned, other alternative container runtimes). To be clear, given that pack's only job is to interface with the container runtime and run the lifecycle, there is no way to implement it generically to support any container runtime. The lifecycle is the generic component. So support for, e.g., Nautilus would need to be added to pack explicitly. Are you interested in submitting a PR for it?
I don't believe that setting up a CI/CD pipeline that uses pack to keep containers up-to-date is easier than using kpack. You would need to monitor for changes to a number of upstream resources (buildpacks, stack run images, stack build images, source code). A simple pipeline that uses the pack CLI might beat most Dockerfile-based pipelines, but you'd lose the stronger security guarantees that, e.g., kpack provides.
whoops!
@abitrolly I think setting
We're in the process of testing this. cc @GrahamDumpleton
It failed, is all I can say:
It is hard for me to take it any further at this point, since I don't understand enough about either the podman socket support or the process by which pack works. The fact that I am doing this from inside a container may also be complicating things. It really should be tested directly on a full Fedora operating system first.
@GrahamDumpleton can you try specifying a builder? `pack build sample-java-app -B cnbs/sample-builder:alpine --path sample-java-app`
The builder was already set previously using:
Using a different builder on the command line makes no difference.
@sclevine specifying
However, the
I haven't got the environment set up to check again myself, but if you run
@GrahamDumpleton the image is there. Here are the labels.
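For reference, one way to dump an image's labels with podman — a sketch, assuming the builder image reference that appears in the debug log below — is:

```shell
# Print the builder image's labels as JSON (jq used only for pretty-printing).
podman image inspect docker.io/heroku/buildpacks:18 --format '{{ json .Labels }}' | jq .
```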
I could find out which requests are being sent to the socket:

```
$ podman system service --log-level debug
...
DEBU[0015] APIHandler -- Method: POST URL: /v1.38/images/create?fromImage=heroku%2Fbuildpacks&tag=18 (conn 0/0)
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18"
DEBU[0015] APIHandler -- Method: GET URL: /v1.38/images/index.docker.io/heroku/buildpacks:18/json (conn 0/1)
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]docker.io/heroku/buildpacks:18"
DEBU[0015] parsed reference into "[overlay@/home/anatoli/.local/share/containers/storage+/run/user/1000:overlay.mount_program=/usr/bin/fuse-overlayfs,overlay.mount_program=/usr/bin/fuse-overlayfs]@c533962c38b1b71b08ff03d07119d9d63f82d03192076016743cdde9d79fbd70"
DEBU[0015] exporting opaque data as blob "sha256:c533962c38b1b71b08ff03d07119d9d63f82d03192076016743cdde9d79fbd70"
DEBU[0020] APIServer.Shutdown called 2020-04-19 07:46:17.331984794 +0300 +03 m=+20.612378751, conn 0/2
```

Querying that endpoint directly shows no labels:

```
$ curl -sS --unix-socket $XDG_RUNTIME_DIR/podman/podman.sock http:/v1.38/images/index.docker.io/heroku/buildpacks:18/json | jq . | grep Labels
"Labels": null
"Labels": null
```
+1 to this, I was already thinking about how
@sclevine Have also been hoping to run
I have been poking around with trying to do this with the

The latter is the desired end result, of course; however, this would break a lot of Concourse flows, since we lose the ability to track resource versioning via explicit resources that we

I would imagine other [CI] users would like the option to also just simply export to a tarball. Do you have any suggestions for handling this, then? Or should I try something else to the same effect as "executing a builder image directly"? I can also open an issue about this potential feature request in the
Not only that, but eventually not requiring Docker could also help improve the lifecycle in terms of build speed, artifact caching, rebase, ...
- copied this from the BOSH team:
  - https://github.com/cloudfoundry/bosh/blob/master/ci/old-docker/main-bosh-docker
- ideally should be able to remove everything besides downloading the `pack` CLI after the following issue in the `pack` repo is resolved:
  - buildpacks/pack#564 [#172847711]
@jspawar I think we would welcome a contribution to the lifecycle that allows exporting an OCI image to tar format on disk. You could simulate this right now by spinning up a local registry in the container and pulling the image to disk, but I agree that it would be a nice feature when you're using the builder directly in Concourse or other CI. Just FYI, we've made the workflow you're describing much easier recently with the lifecycle

@jspawar kpack is a Docker-less CNB platform for k8s.

@jorgemoralespou The lifecycle already runs efficiently without Docker on platforms that natively provide a container runtime. But like I said, I think we'd be happy to merge support for podman, etc. to support VM-based CI / Linux workstation use cases. 😄
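As a rough sketch of the registry workaround mentioned above — assuming a registry is already listening on `localhost:5000` inside the build environment and that `skopeo` is installed, neither of which is stated in the thread:

```shell
# Export the built image to the in-container registry...
/cnb/lifecycle/creator -app=. localhost:5000/app:latest

# ...then pull it down to disk as a Docker-format tarball.
skopeo copy --src-tls-verify=false \
  docker://localhost:5000/app:latest \
  docker-archive:app.tar
```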
@sclevine a sequence diagram with API calls employed in building an image would help to estimate the effort required to add |
CC: @jromero
All, FYI, we have an issue open on the Podman side. I just tested with the latest version of Podman in Fedora (podman 2.1.1), and the lack of an archive method is still blocking us. But I wanted to say that this is on our radar, and building up and stabilizing the Docker-compatible interface is high on our priority list. I can't commit to a timeline, but I'm investigating adding the Pack CLI to RHEL 8/9, so we'll be doing more research over the coming months. @jorgemoralespou thanks for submitting this issue. We are interested from our side.
Given that this issue was a little broad to begin with, I'm going to close it in favor of what did come out of it. Pack now supports podman via the docker socket interface. Any alternative to Docker that supports the docker socket interface should also work. |
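For anyone landing here now, the rough shape of using pack against the podman socket on Linux — a sketch assuming systemd's rootless `podman.socket` unit is available, and reusing the sample builder from earlier in the thread:

```shell
# Enable podman's Docker-compatible API socket (rootless).
systemctl --user enable --now podman.socket

# Point pack (like any Docker API client) at the podman socket.
export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"

pack build sample-app --builder cnbs/sample-builder:alpine --path .
```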
Maybe others also come along here looking for a solution to the initially mentioned
We have a GitLab CI connected to an EKS / K8s cluster with Kubernetes executors/runners, where we don't have
So here's our interpretation/solution to the problem, simply using the "lifecycle directly" (here's the full story on Stack Overflow) in our GitLab CI configuration:

```yaml
image: paketobuildpacks/builder

stages:
  - build

# We somehow need to access the GitLab Container Registry with the Paketo lifecycle,
# so we simply create ~/.docker/config.json as stated in https://stackoverflow.com/a/41710291/4964553
before_script:
  - mkdir ~/.docker
  - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_JOB_TOKEN\"}}}" >> ~/.docker/config.json

build-image:
  stage: build
  script:
    - /cnb/lifecycle/creator -app=. $CI_REGISTRY_IMAGE:latest
```

Hope this is of help 😃
An awesome writeup at SO. Deserves to be a blog post.
Great idea, |
Great blog @jonashackt! I implemented it exactly as you describe, but sadly I am getting this with my Spring Boot application: `ERROR: failed to launch: determine start command: process type web was not found`
Your solution @jonashackt works really nicely. It gets a bit more tricky when you need to pass Maven build arguments. I managed to add the Maven arguments like this: `- echo "-Dmaven.test.skip=true --no-transfer-progress package spring-boot:repackage" >> platform/env/BP_MAVEN_BUILD_ARGUMENTS`
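Generalizing the trick above: the lifecycle reads buildpack environment variables from one-file-per-variable entries under the platform directory. A sketch of the pattern follows; `BP_JVM_VERSION` is just an example variable, and wiring the directory up via the `-platform` flag is an assumption about this particular setup:

```shell
# Each file under <platform>/env becomes an environment variable for the
# buildpacks: the filename is the variable name, the content is its value.
mkdir -p platform/env
echo "17" > platform/env/BP_JVM_VERSION
/cnb/lifecycle/creator -app=. -platform=platform $CI_REGISTRY_IMAGE:latest
```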
@jonashackt hey, thanks a lot for the solution. Is there a way to pass BP env variables to the build?
Did you see my reply? I posted how to pass an env variable, but I have to tell you, I am afraid it doesn't work with all the variables.
Description
There are many users who are starting to not have Docker installed on their systems, because there are other alternatives that let them create containers in a secure way, since they typically run these containers on remote systems (e.g. Kubernetes clusters).
Some such alternatives are:
Pack, although not depending on `docker build` (per this comment), does require Docker to be running on your container. When you want to run pack as part of your CI/CD process, or for any other purpose (e.g. learning), you might run it in a container on a Kubernetes platform; in order for it to run, you will need to expose the host machine's Docker socket, making the whole platform insecure.
Building containers should be a secure process that does not compromise your system in any possible way.
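To make the concern concrete, this is the kind of mount being described — a hypothetical pod spec, with a placeholder image name, not something from the original issue. Any workload with this mount effectively has root on the node:

```yaml
# Hypothetical k8s pod illustrating the insecure pattern:
# handing the host's Docker socket to a build container.
apiVersion: v1
kind: Pod
metadata:
  name: pack-build
spec:
  containers:
    - name: pack
      image: buildpacksio/pack   # placeholder
      volumeMounts:
        - name: docker-sock
          mountPath: /var/run/docker.sock
  volumes:
    - name: docker-sock
      hostPath:
        path: /var/run/docker.sock
```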
Proposed solution
Provide a mechanism to replace, or have an alternative to, using Docker to build images.
Describe alternatives you've considered
Using kpack on the platform can be an alternative, although AFAIK it can have the same security considerations (or lack thereof).