Consider removing Docker context generator #1219
It's unfortunate I didn't see this ticket before the goal was removed. Our organization uses Jenkins Pipeline builds, and uploading/installing new Docker images into the image repository is restricted to a proprietary corporate artifact that runs as part of the build. My project currently uses the `jib:exportDockerContext` goal. As far as I am aware, I can't upgrade to 1.0.0 without it. What were the reasons (the value added) for removing the goal? Is it possible to get it back?
We removed the goal because, as we added new features, the dockerless builds Jib does started to diverge from the builds produced using `exportDockerContext`, since some of the features/reproducibility settings we implemented didn't translate well to Dockerfiles. It started to become a maintenance burden, yet 99% of its use cases could be covered just by using `jibDockerBuild` to build straight to a daemon. I don't think we have any plans to bring it back.

If you need a Dockerfile, we have a short example of one in the [FAQ](https://github.com/GoogleContainerTools/jib/blob/master/docs/faq.md#what-would-a-dockerfile-for-a-jib-built-image-look-like) that executes a Jib-style build, including the default base image, the layering Jib does, and setting the entrypoint (although you'll have to set the main class yourself).

Also, is there some way you could use the `jibBuildTar` task to build a tarball locally and submit it to your corporate artifact to be `docker load`ed instead of using `docker build`?
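For illustration, a Dockerfile in the spirit of the FAQ example might look roughly like this (a sketch only: the `COPY` source paths and the main class `com.example.Main` are placeholders you would have to adapt to your build):

```dockerfile
# Approximates Jib's default layering: dependencies, resources, classes.
FROM gcr.io/distroless/java

COPY dependencyJars /app/libs
COPY projectResources /app/resources
COPY classFiles /app/classes

# Jib sets an entrypoint over the exploded classpath; the main class here
# is a placeholder -- a Dockerfile cannot infer it for you.
ENTRYPOINT ["java", "-cp", "/app/libs/*:/app/resources:/app/classes", "com.example.Main"]
```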
Thank you for your reply. I understand the reason it was removed. I will investigate the ability to load a tarball.
…On Wed, Feb 6, 2019, 9:22 AM Tad Cordle ***@***.*** wrote:
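The tarball route suggested above might look like the following (a sketch: `jib:buildTar` and `jibBuildTar` are the Maven/Gradle goal names, and `target/jib-image.tar` is the jib-maven-plugin's default output path, but verify both against your plugin version):

```sh
# Build the image to a tarball -- no Docker daemon involved.
mvn compile jib:buildTar        # or: gradle jibBuildTar

# Inspect the layer list (ordered base -> top) without loading the image.
tar -xOf target/jib-image.tar manifest.json

# Hand the tarball to whatever process is allowed to touch the daemon.
docker load --input target/jib-image.tar
```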
We also missed this. Jib doesn't work for us inside Codefresh.io CI, and calling the Docker daemon directly from Jib is problematic there. Even if it worked, we would lose a lot of standard metadata that we get for free by using their tooling. Generating the Dockerfile and the file structure required for building the image (e.g. the /app directory, config.json, manifest.json, etc.) without actually invoking Docker to create the layered image would be ideal for us.
BTW, I actually don't get what you mean by this.
In unpacking the tar, the layer ordering may be important if there's overlap in the file structure. It's a form of reverse-engineering the tool, which adds complexity. I wasn't aware of `jibBuildTar`.

Is it possible for the model representing the desired image to be rendered in more than one way, producing both artifacts (the layer structure present in the tarball, or a Dockerfile plus directory structure) similarly? Can't the two be regarded as two products of the same underlying data structure?

I'm sorry for not being clear about Codefresh's services. Any pipeline in Codefresh that sports a Dockerfile (either static or dynamically generated) is automatically built and pushed to their registry. The image is also tagged with relevant data from the specific build (branch, build id, commit hash, build status, etc.), and it's then available in a convenient way in their UI and API for deployment and other automation purposes. It's effortless and free (at least under our terms). Invoking Docker from within the isolated containers our builds run in makes things more difficult, adds the complexity of tagging correctly and dynamically, and in general offers a less attractive integration with the rest of the CI environment.
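On the layer-ordering point: in a `docker save`-format tarball (which is the shape a loaded image tar takes), `manifest.json` lists the layers in apply order, base first, and later layers override overlapping paths from earlier ones. A small self-contained sketch, with made-up layer names for illustration:

```python
import io
import json
import tarfile

def layer_order(tar_bytes: bytes) -> list:
    """Return the layer entries in apply order (base first)
    from a `docker save`-format image tarball."""
    with tarfile.open(fileobj=io.BytesIO(tar_bytes)) as t:
        # manifest.json is a JSON list of images; "Layers" is ordered
        # base -> top, and later layers win on overlapping file paths.
        manifest = json.load(t.extractfile("manifest.json"))
    return manifest[0]["Layers"]

# Build a tiny synthetic tarball as a stand-in for a real image tar.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as t:
    payload = json.dumps([{
        "Config": "config.json",
        "Layers": ["deps.tar", "resources.tar", "classes.tar"],
    }]).encode()
    info = tarfile.TarInfo("manifest.json")
    info.size = len(payload)
    t.addfile(info, io.BytesIO(payload))

print(layer_order(buf.getvalue()))
# prints: ['deps.tar', 'resources.tar', 'classes.tar']
```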
I can see where it could be confusing, but Docker context generation was never really intended as a normal use of Jib, and as time went on we realised we could not satisfy Jib's goals through that mechanism. Your use case also seems to benefit minimally from Jib: given all the Docker infrastructure you're leveraging, you could well be better off skipping Jib, generating a fat jar, and using a simple Dockerfile.
@liqweed just to be clear, Jib is Docker-less: it constructs images itself and never invokes Docker. It looks like Codefresh.io provides much of the useful metadata you describe as environment variables, and you could configure Jib in your pom/buildscript to create corresponding labels from those values. And it looks like they provide a container registry as well.
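A sketch of that labels idea in the Gradle Groovy DSL (the `CF_*` variable names are assumed from Codefresh's build variables, and the image name and label keys are placeholders — check their documentation and substitute your own values):

```groovy
// build.gradle -- illustrative only; verify CF_* names against Codefresh docs.
jib {
  to {
    image = 'registry.example.com/myorg/myapp'   // placeholder registry/repo
  }
  container {
    labels = [
      'org.example.branch': System.getenv('CF_BRANCH')   ?: 'unknown',
      'org.example.build' : System.getenv('CF_BUILD_ID') ?: 'unknown',
      'org.example.commit': System.getenv('CF_REVISION') ?: 'unknown',
    ]
  }
}
```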
With the addition of Jib's build-to-Docker-daemon and build-tarball functions, the Docker context generator is becoming less and less useful, while it becomes more of a pain to maintain as we add new features (some of which are difficult to make compatible with Dockerfiles). If this isn't a popular feature, it may be best to remove it.

If you continue to use `jibExportDockerContext` or `jib:exportDockerContext`, thumbs-up this issue and/or leave a comment describing your use case.