dockerBuild for native image doesn't work with remote Docker daemons #1610
Comments
I was facing this problem too while using minikube's Docker daemon for Quarkus. @cescoffier @geoand @Sanne: polite ping, could you please look into this issue whenever you get time? We're trying to integrate Quarkus support into the fabric8 maven plugin and are facing issues due to this.
I could look into this in my own time (which probably means over the weekends). Do you envision that some kind of flag enabling the behavior you describe using in Syndesis should control the Quarkus native image generation phase?
@geoand: Thanks for the super fast response (well, I think that's a prerequisite for being a Quarkus guy ;-) ). We have added a Quarkus generator and a Quarkus health check enricher in fabric8io/fabric8-maven-plugin#1577, so I was just testing this feature, but I ran into issues during the native build, as it requires the local Docker daemon to be exposed. If I build as usual with minikube's Docker daemon exposed, it fails with this error: https://pastebin.com/QYc3PViX
LOL, although I'm a Spring Boot guy :P
Ah yes, there is also a relevant Stack Overflow question here. So the idea is to be able to seamlessly use the Docker daemon of Minishift/Minikube to do the build... I would think it's definitely doable, but I would like more details from @rhuss :)
@rhuss I think the Docker tarball context would be applicable here, WDYT?
Well, not really, as it's about getting access to the created binary. I think the most elegant way would be to use multi-stage Docker builds (i.e. running the native build + creating the final image with one 'docker build') instead of storing the (Linux) binary on your local FS as an intermediate step. Unfortunately Minishift's Docker daemon is too old to support multi-stage builds, but if we stick to minikube or a more modern Docker daemon, that's by far the best solution (not only for remote Docker daemons but in general).
From the docs I thought that the tarball context could be used to "bundle" the created binary as well.
I'll hopefully have a look over the weekend. Thanks!
@rohanKanojia hey, I wasn't able to look at this over the weekend unfortunately... I saw however that FMP was released; did you work around this limitation, or just ignore the issue for the time being (since it's not strictly necessary)?
@geoand: No, no. The issue is still there. We just released with whatever we had.
@rohanKanojia thanks for the info.
I read through this again, but I still don't understand how building the native image inside a Minishift VM would help... The reason I say this is that even if we do accomplish that, there will be no usable Docker image ready for consumption by users of the cluster (the image that the users would use is something like: https://github.com/quarkusio/quarkus/blob/master/devtools/common/src/main/resources/templates/dockerfile-native.ftl#L18). Am I missing something here @rhuss @rohanKanojia?
The main issue with remote Docker daemons is that you can't use volume mounts in any case (the mounted path refers to the remote daemon's host, not your local filesystem). So the best answer for this is to combine the process of creating the application image and the actual compilation into one Dockerfile.
@rhuss Your comment above makes absolute sense to me :). We'll have to see how best to handle this use case. Thanks for the details!
I looked into this and it should be rather easy to implement on the technical side of things using a multi-stage Docker build, as @rhuss mentioned (basically we would trigger this "new" type of Docker build instead of the regular build we do here). The first stage of the Docker build would just copy the runner jar and the dependencies lib and invoke the native binary generation. What I am struggling with is how to reconcile this new approach with what we have already: what should the UX be?
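For illustration, a minimal multi-stage sketch of what such a build could look like; the builder image, tag, jar name and native-image flags below are assumptions for this example, not the exact command Quarkus generates:

```dockerfile
## Stage 1: run the native compilation inside the (possibly remote) Docker daemon.
## Builder image, tag and flags are illustrative assumptions, not Quarkus' exact invocation.
FROM quay.io/quarkus/ubi-quarkus-native-image:22.3-java17 AS build
WORKDIR /project
COPY target/lib/ /project/lib/
COPY target/*-runner.jar /project/app-runner.jar
RUN native-image --no-fallback -jar app-runner.jar app-runner

## Stage 2: copy only the resulting binary into a small runtime image.
FROM registry.access.redhat.com/ubi8/ubi-minimal
WORKDIR /work
COPY --from=build /project/app-runner /work/application
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
```

Because both stages run on the same daemon, nothing ever has to live on the local filesystem or be pushed to a registry in between.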
The Quarkus Quickstarts have an example of the combined approach: https://github.com/quarkusio/quarkus-quickstarts/blob/master/getting-started-knative/Dockerfile. It builds the app, builds the native image and builds the final Docker image; the native image is built directly via the native-image command.
@rsvoboda interesting, I'll have to take a look
I shared multi-stage build examples here: https://github.com/erkanerol/quarkus-multistage-docker-build/blob/master/README.md
Was hit by this today as well. Especially for macOS users, you don't want to do a native build that ends up with a native macOS binary which you can't run in k8s. I think this is really important for developer joy with Quarkus and native builds for k8s. Please prioritize and work on this.
cc @maxandersen
Yeah, I have tried various variations of this, and the error you get is some weird error about a /process folder not existing or something. No time to provide details, wife yelling :(
@davsclaus I hit that too lately (e.g. in the latest demo I sent around), and I switched to the Kubernetes included in "Docker for Mac", which can do volume mounts to local directories. If you do this, you can create the image directly in the Docker daemon running Kubernetes, so you save the round trip to the registry. Just switch it on in the Docker for Mac settings.
tl;dr: if you are a Mac user and want to run your Quarkus images directly without pushing to a remote registry, use "Docker for Mac"'s Kubernetes, not minikube or minishift.
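For context, a quick way to see which daemon your docker CLI is actually talking to (a sketch using standard docker/minikube CLI commands):

```sh
# Pointing the CLI at minikube's daemon is exactly what breaks -v bind mounts,
# because host paths are then resolved inside the VM, not on your Mac:
eval $(minikube docker-env)

# Check where your builds will actually run:
docker info --format '{{.Name}}'   # prints the daemon's hostname
echo "$DOCKER_HOST"                # empty when using the local Docker Desktop daemon
```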
I'm currently stuck on this issue as well. All our builds run on Jenkins inside a Docker container to ensure consistent artifacts. However, this approach does not allow creating native images in an easy way: DooD (Docker outside of Docker) does not work because of the volume mapping issue. As far as I can see, there are two options:
I would rather not go with option 1, since it is custom and not very maintainer-friendly. Is there an option to make the volume mapping to /project configurable, so that it can be overridden?
What would you override it with instead? FYI, the full command used to run Docker is printed to the console, so you can copy that command line and do the build manually with any changes you want. If you find something that works, we can look at adding some support for it.
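For reference, the printed command has roughly this shape (the path, image name and native-image arguments below are illustrative, not copied from actual Quarkus output); the bind mount of the output directory to /project is the part that fails against a remote daemon:

```sh
# Roughly what the build prints: the local output directory is bind-mounted to /project,
# but with a remote daemon that path is resolved on the daemon's host instead.
docker run -v /home/jenkins/workspace/myapp/target:/project:z --rm \
    quay.io/quarkus/ubi-quarkus-native-image:22.3-java17 \
    -jar app-runner.jar app-runner
```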
Hmm, good question @maxandersen... I thought it would be possible to inject the corresponding working directory, but no success yet when running the command manually. I'll keep you posted if I find a possible solution to this problem.
Btw, now I grok a little better what you are doing. Isn't the only thing you need to do to have a GraalVM available for your Jenkins to use?
Yes, that is indeed the first option I mentioned and what I'm currently using (which is working).
Understood, but I would say that for now you are better off using this manual install, as it actually works now and will continue to do so ;) Better (semi-)automatic alignment of Quarkus and GraalVM native-image is something we'll work on, but how that would work is a bit further out in the future.
Trying this again this weekend. The multi-stage build is so far the best option, but all there is is some details on this page: https://quarkus.io/guides/building-native-image. Also, this requires that all your dependencies can be downloaded from Maven Central. I wonder if the Quarkus Maven plugin could have a goal to prepare for this: download all the dependencies and then mount them into the multi-stage build, so all binaries are available and the build can run safely. End users will have all sorts of HTTP proxies, third-party Maven repositories and whatnot that they will struggle to set up for a multi-stage Docker build.
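Such a plugin goal does not exist in this thread, but a rough manual approximation (a sketch, assuming the Dockerfile's first stage runs Maven) is to resolve everything up front and copy the local repository into the build context so the containerized build can run offline:

```sh
# Resolve all dependencies locally first, using your own settings.xml / proxies / extra repos.
mvn -s ~/.m2/settings.xml dependency:go-offline

# Stage the local repository inside the build context; a Dockerfile can only COPY
# files from the context, so ~/.m2 itself is not reachable from the build.
cp -r ~/.m2/repository ./m2-repository

# Then, in the first stage of the Dockerfile (illustrative lines):
#   COPY m2-repository /root/.m2/repository
#   RUN mvn -o package -Pnative
```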
@davsclaus I'm not following why your setup requires everything to be published in Maven Central. If it does, that's definitely a bug. Can you give an example?
I was running with a SNAPSHOT of camel-quarkus that was built locally (e.g. the master branch).

This is my
Also, this requires that your project is totally standalone; e.g. in the error reported above it complains that the parent pom.xml cannot be found, as it's not copied into the Docker container in the first step of the multi-stage build. That is somewhat okay, as I can make a standalone example. But the pain is that the first step doesn't have an easy way to use my local Maven setup. Since it's a Dockerfile I can probably tinker with this and do some tricks to set up a Maven configuration for step #1, but all of that is not something I want to do. I would love Quarkus to have this developer joy for me when it comes to building native images for Kubernetes. Ideally I don't have to deal with Docker daemons (maybe jib can help with this) or anything like that, but I'm not sure how far along we are there.

Also, it's a bit of a pain with the Docker image registry and the "magic" of which one is in use. Kubernetes is really terrible at making this obvious; you just get a "cannot pull image". And at one time Quarkus generated manifests with image pull policy = Always, and then it doesn't work, as it would attempt to pull the image from Docker Hub or whatever public repo there is, instead of using the image that was pushed by the Docker daemon that built it.
But a lot of users are on Windows and macOS, and they will struggle a lot with this. And some of the hype and excitement around Quarkus is the native build, e.g. getting my simple app down from 100 MB RSS memory in Kubernetes to 10 MB (it used to be like that in earlier releases of GraalVM/Quarkus on Java 8). The JVM variant of this app runs with 122 MB of memory in my local minikube cluster right now. The macOS native image of it takes approx 28 MB of memory. The same app took 7-12 MB in January as a Java 8 build; back then I also struggled to get it compiled natively, so I hoped it would be better today. We are almost there, we just need some love and attention to make this great for any normal developer.
I can polish up the example in camel-quarkus, make it standalone and put it on a personal GitHub repo for people to try. But we want to put the example in camel-quarkus too, with the details on how to build it as native for Kubernetes. The non-Kubernetes example is currently at: https://github.com/apache/camel-quarkus/tree/master/examples/http-log
I created a branch and pushed the example there
Yeah, so I'm not sure what we can really do here that will solve all your issues besides running the full build inside the Docker container so that all content is available to it "locally". Your comment about everything needing to be in Maven Central is more that your build needs to be possible to do fully locally, or at least be published to some accessible repo; i.e. a third-party repo should just work. For your example, though, I still don't follow why it can't build, as it should have access to the same things you already have locally.

About dealing with the image registry and Kubernetes: jib doesn't help here; jib lets you skip the Docker daemon, but you still need a registry.

Image pull policy = Always you can override; the reason for it is that OpenShift's cri-o until recently would not refresh an image, so incremental changes would not be picked up. In later versions of OpenShift we can stop doing that and rely on tag+sha1 references instead.

I'll need to try your example, because all the native image builds I've tried so far on macOS have "just worked"; but I've probably just been lucky ;)
Btw, after re-reading the whole thread I'm starting to think I got it backwards ;) Your issue is not the registry being remote, but that the place where the containers are built is remote (or at least remote enough not to necessarily have access to everything it needs).
Sorry for being silent for a couple of days. I had too many meetings, and Monday was also my birthday, so there was more family stuff that day. I will get back to this and try to build a standalone sample project with instructions that can better be used to reproduce this, and go over it to see where we can improve things for end users.
Any news on that? I'm also getting the same error.
I just came across this issue as well when starting a containerized native image build from within a remote containers development environment in Visual Studio Code. I can get the containerized build to work using
As far as I can see, this is sufficient in all cases, or did I miss something? It would be fairly straightforward to integrate this alternative way of running a containerized native image build into the native image build step, which could be controlled by an application property, as already suggested above by @PieterjanDeconinck. WDYT?
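One bind-mount-free way to run such a containerized build (a sketch of the general technique with an assumed builder image, working directory and jar name; not necessarily what the previous comment used or what Quarkus itself does):

```sh
# Create the container without starting it; --entrypoint makes the command explicit.
docker create --name native-build --entrypoint native-image \
    quay.io/quarkus/ubi-quarkus-native-image:22.3-java17 \
    -jar /project/app-runner.jar /project/app-runner

# docker cp streams files over the Docker API, so it also works against a remote daemon;
# this assumes the image already has a /project directory.
docker cp target/. native-build:/project/

docker start -a native-build                                   # run the build, stream its output
docker cp native-build:/project/app-runner target/app-runner   # fetch the resulting binary
docker rm native-build
```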
This also seems to happen when using Windows with WSL for development, where Docker/Kubernetes runs in Windows and is used from WSL by exporting DOCKER_HOST, which is a pretty common setup. It's not just about using Docker on macOS.
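For completeness, that setup typically looks something like this (the TCP port assumes Docker Desktop's "expose daemon on tcp://localhost:2375 without TLS" option is enabled):

```sh
# Inside WSL: point the docker CLI at the daemon running on the Windows side.
export DOCKER_HOST=tcp://localhost:2375
```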
Containerized native image build on remote docker daemons (issue #1610)
Building native images using a remote Docker daemon has been implemented and merged for Quarkus 1.13 (PR #14635). Just use the new flag.
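Going by the Quarkus documentation for this feature, the property is `quarkus.native.remote-container-build`; an invocation would look roughly like this (property and profile names are taken from the Quarkus docs and the standard generated project, not from this thread):

```sh
# Build a Linux native executable against the remote daemon pointed to by DOCKER_HOST.
./mvnw package -Pnative \
    -Dquarkus.native.container-build=true \
    -Dquarkus.native.remote-container-build=true
```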
Should we close this issue then and link the PR to the issue?
The build fails when trying to create a native binary with dockerBuild set to true while accessing a remote Docker daemon, which does not allow bind mounts via `docker run -v` as it is done in quarkus/core/creator/src/main/java/io/quarkus/creator/phase/nativeimage/NativeImagePhase.java (line 286 in fea6ba9).

This use case is important when e.g. building against minikube's/minishift's internal Docker daemon (`eval $(minikube docker-env)`), so that an image does not need to be pushed to a registry but can be used directly within minikube.

For this setup, in Syndesis we avoided bind mounts and instead used a combination of running the actual build during a Docker build and then copying the generated binary out of the created image by running a container with `cat`. The actual build is done as in the linked build script, with the linked Dockerfile.
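A sketch of that pattern with placeholder names (the image tag, Dockerfile name and binary path are illustrative):

```sh
# Build an image whose build step compiles the native binary; this runs entirely on the remote daemon.
docker build -t myapp-native-build -f Dockerfile.build .

# Stream the binary back out by running a throwaway container that just cats it.
docker run --rm myapp-native-build cat /project/app-runner > target/app-runner
chmod +x target/app-runner
```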
This might not be useful in this context as it depends on the size of the source to copy over into the image when doing the build.
As an alternative, we used ssh to copy over the sources into the Minishift VM and then used a bind mount within the VM, but the current solution is (a) more generally applicable and (b) also more robust.