dockerBuild for native image doesn't work with remote Docker daemons #1610

Closed
rhuss opened this issue Mar 21, 2019 · 43 comments · Fixed by #14635

Labels: pinned (Issue will never be marked as stale)

Comments

@rhuss

rhuss commented Mar 21, 2019

The build fails when trying to create a native binary with dockerBuild set to true while accessing a remote Docker daemon, which does not allow bind mounts via docker run -v as done in

Collections.addAll(nativeImage, "docker", "run", "-v", outputDir.toAbsolutePath() + ":/project:z", "--rm", image);

This use case is important when e.g. building against minikube/minishift's internal Docker daemon (eval $(minikube docker-env)) so that an image does not need to be pushed to a registry but can be used directly within minikube.

For this setup, in Syndesis we avoided bind mounts and instead ran the actual build during a docker build, then copied the generated binary out of the created image by running a container with cat:

The actual build looks like this:

cd $operator_dir
docker build -t syndesis-operator-builder . -f Dockerfile-builder
docker run syndesis-operator-builder cat /syndesis-operator > syndesis-operator
chmod a+x syndesis-operator

with this Dockerfile:

FROM golang:1.11.0
RUN go get -u github.com/golang/dep/cmd/dep
WORKDIR /go/src/github.com/syndesisio/syndesis/install/operator
COPY Gopkg.toml .
COPY Gopkg.lock .
RUN dep ensure -vendor-only -v
COPY . .
RUN CGO_ENABLED=0 go build -o /syndesis-operator ./cmd/syndesis-operator

This might not be useful in this context, as it depends on the size of the sources that have to be copied into the image when doing the build.

As an alternative, we used ssh to copy the sources into the Minishift VM and then used a bind mount within the VM, but the approach above is (a) more generally applicable and (b) also more robust.

@rohanKanojia

I was facing this problem too while using minikube's Docker daemon for Quarkus.

@cescoffier @geoand @Sanne: Polite ping, could you please look into this issue whenever you get time? We're trying to integrate Quarkus support into the fabric8 maven plugin and are facing issues due to this.

@geoand
Contributor

geoand commented Apr 9, 2019

I could look into this on my own time (which probably means over the weekends).
@rohanKanojia @rhuss To get a little more context, could you explain a little more how you envision the fmp integration would work? Which fmp goals are we talking about?

Do you envision that some kind of flag enabling the behavior you describe using in Syndesis should control the Quarkus native-image generation phase?

@rohanKanojia

@geoand: Thanks for the super fast response (well, I think it's a prerequisite for being a Quarkus guy ;-) )

We have added a Quarkus generator and a Quarkus health check enricher in fabric8io/fabric8-maven-plugin#1577, so I was just testing this feature, but I faced issues during the native build as it requires the local Docker daemon to be exposed during the native build.

If I build as usual with minikube's Docker daemon exposed, it fails with this error: https://pastebin.com/QYc3PViX

@geoand
Contributor

geoand commented Apr 9, 2019

> @geoand: Thanks for the super fast response (well, I think it's a prerequisite for being a Quarkus guy ;-) )

LOL, although I'm a Spring Boot guy :P

> We have added a Quarkus generator and a Quarkus health check enricher in fabric8io/fabric8-maven-plugin#1577, so I was just testing this feature, but I faced issues during the native build as it requires the local Docker daemon to be exposed during the native build.

> If I build as usual with minikube's Docker daemon exposed, it fails with this error: https://pastebin.com/QYc3PViX

Ah yes, there is also a relevant Stack Overflow question here.

So the idea is to be able to seamlessly use the Docker daemon of Minishift / Minikube to do the build... I would think it's definitely doable, but I would like more details from @rhuss :)

@geoand
Contributor

geoand commented Apr 10, 2019

@rhuss I think the docker tarball context would be applicable here, WDYT?

@rhuss
Author

rhuss commented Apr 10, 2019

Well, not really, as it's about getting access to the created binary. I think the most elegant way would be to use multi-stage Docker builds (i.e. running the native build + creating the final image with one 'docker build') instead of storing the (Linux) binary on your local FS as an intermediate step.

Unfortunately Minishift's Docker daemon is too old to support multi-stage builds, but if we stick to minikube or a more modern Docker daemon, that's by far the best solution (not only for remote Docker daemons but in general).

@geoand
Contributor

geoand commented Apr 10, 2019

> Well, not really, as it's about getting access to the created binary. I think the most elegant way would be to use multi-stage Docker builds (i.e. running the native build + creating the final image with one 'docker build') instead of storing the (Linux) binary on your local FS as an intermediate step.

From the docs I thought that the tarball context could be used to "bundle" the created binary as well.

> Unfortunately Minishift's Docker daemon is too old to support multi-stage builds, but if we stick to minikube or a more modern Docker daemon, that's by far the best solution (not only for remote Docker daemons but in general).

I'll hopefully have a look over the weekend. Thanks!

@geoand
Contributor

geoand commented Apr 17, 2019

@rohanKanojia hey, wasn't able to look at this over the weekend unfortunately...

I saw however that FMP was released; did you work around this limitation or just ignore the issue for the time being (since it's not strictly necessary)?

@rohanKanojia

@geoand : No, no. Issue is still there. We just released with whatever we had.

@geoand
Contributor

geoand commented Apr 17, 2019

@rohanKanojia thanks for the info.

@geoand
Contributor

geoand commented Apr 17, 2019

I read through this again, but I still don't understand how building the native image inside a Minishift VM would help... The reason I say this is that even if we do accomplish that, there will be no usable Docker image ready for consumption by users of the cluster (the image that users would use is something like: https://github.com/quarkusio/quarkus/blob/master/devtools/common/src/main/resources/templates/dockerfile-native.ftl#L18).

Am I missing something here @rhuss @rohanKanojia ?

@rhuss
Author

rhuss commented Apr 17, 2019

The main issue with remote Docker daemons is that you can't use volume mounts (the -v option of docker run). So we need another way to share the generated binary between the build process creating the native Linux binary (within a container) and the final application image (another container).

So the best answer for this is to combine the process of creating the application image and the actual compilation into one Dockerfile.

FROM graalvm AS build
COPY src/ /workdir
# create the native binary
.....

FROM fedora
COPY --from=build /workdir/generated-binary /target-dir/
CMD /target-dir/generated-binary

@geoand
Contributor

geoand commented Apr 17, 2019

@rhuss For me your comment:

> So the best answer for this is to combine the process of creating the application image and the actual compilation into one Dockerfile.

makes absolute sense :). We'll have to see how to best handle this use case.

Thanks for the details!

@geoand
Contributor

geoand commented May 6, 2019

I looked into this and it should be rather easy to implement on the technical side of things using multi-stage Docker builds as @rhuss mentioned (basically we would trigger this "new" type of docker build instead of the regular build we do here).

The first stage of the docker build would just copy the runner jar and dependencies lib and invoke the native binary generation.
The second stage would just copy the native binary output from the first stage and build a docker image with only the native binary.
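
For illustration, a minimal sketch of such a two-stage Dockerfile. The image tags, paths and artifact names are illustrative (not what the plugin would actually generate), and it assumes native-image is on the PATH of the builder image:

## Stage 1 : copy the runner jar + lib and generate the native binary
FROM quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11 AS build
WORKDIR /project
COPY target/lib /project/lib
COPY target/myapp-1.0-SNAPSHOT-runner.jar /project/
# real native-image arguments omitted
RUN native-image -jar myapp-1.0-SNAPSHOT-runner.jar

## Stage 2 : copy only the native binary into a small runtime image
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.1
WORKDIR /work/
COPY --from=build /project/myapp-1.0-SNAPSHOT-runner /work/application
EXPOSE 8080
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]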

What I am struggling with is how to reconcile this new approach with what we have already - what should the UX be?
Any ideas @rhuss @cescoffier @rohanKanojia ?

@rsvoboda
Member

Quarkus Quickstarts have an example of the combined approach - https://github.com/quarkusio/quarkus-quickstarts/blob/master/getting-started-knative/Dockerfile

building the app ++ building the native image ++ building the final Docker image

building the native image is done directly via the native-image command
https://github.com/quarkusio/quarkus-images/tree/graalvm-1.0.0-rc16/centos-quarkus-maven could be used (as it has both mvn and GraalVM) for a native build via mvn -Pnative verify

@geoand
Contributor

geoand commented May 14, 2019

@rsvoboda interesting, I'll have to take a look

@gsmet added the pinned label (Issue will never be marked as stale) on Nov 13, 2019
@davsclaus
Contributor

I was hit by this today as well. Especially for macOS users, you don't want a native build that ends up with a native macOS binary which you can't run in k8s.

I think this is really important for developer joy with Quarkus and native builds for k8s. Please prioritize and work on this.

@geoand
Contributor

geoand commented Nov 23, 2019

cc @maxandersen

@davsclaus
Contributor

Yeah, I have tried various variations of
mvn package -Pnative -Dquarkus.native.container-build=true

and
./mvnw package -Pnative -Dquarkus.native.container-runtime=docker

The error you get is some weird error about a /process folder that doesn't exist or something. No time to provide details, wife yelling :(

@rhuss
Author

rhuss commented Nov 23, 2019

> I was hit by this today as well. Especially for macOS users, you don't want a native build that ends up with a native macOS binary which you can't run in k8s.
>
> I think this is really important for developer joy with Quarkus and native builds for k8s. Please prioritize and work on this.

@davsclaus I hit that too lately (e.g. in the latest demo I sent around), and I switched to the Kubernetes included in "Docker for Mac", which can do volume mounts to local directories. If you do this you can create the image directly in the Docker daemon running Kubernetes, so you save the roundtrip to the registry.

Just switch it on here:

[screenshot: Docker for Mac preferences with Kubernetes enabled]

@rhuss
Author

rhuss commented Nov 23, 2019

tl;dr - If you are a Mac user and want to run your Quarkus images directly without pushing to a remote registry, use "Docker for Mac"'s Kubernetes, not minikube or minishift.

@PieterjanDeconinck

I'm currently stuck on this issue as well. All our builds run on Jenkins inside a Docker container to ensure consistent artifacts. However, this approach does not allow creating native images in an easy way. DooD (Docker outside of Docker) does not work because of the volume mapping issue.

As far as I see, there are two options:

  1. configure & build a custom Docker image to use in Jenkins
  2. find a fix for running with quarkus.native.container-build=true (DooD)

I would rather not go with option 1, since it is custom and not very maintainer friendly.
Option 2 has my preference, but I have not found any configuration to make it work yet.
Multi-stage Docker builds are not an option, since we need the native image binary as an artifact (it runs in an embedded environment without Docker).

Is there an option to make the volume mapping to /project configurable, so that it can be overridden?

@maxandersen
Member

What is it you would override it to instead?

FYI, the full command used to run docker is printed to the console, so you can copy that command line and do the build manually with any changes you want. If you find something that works, we can look at adding some support for it.
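
(For reference, the printed command has roughly the shape below; the output directory, image tag and flags are illustrative, pieced together from other comments in this thread rather than copied from actual plugin output.)

docker run -v /path/to/target/myapp-native-image-source-jar:/project:z --rm \
  --env LANG=C --user 0:0 \
  quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11 \
  <ALL_THE_FANCY_ARGS> myapp-1.0-SNAPSHOT-runner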

@PieterjanDeconinck

Hmm, good question @maxandersen... I thought it would be possible to inject the corresponding working directory, but no success yet when running the command manually. Will keep you posted if I find a possible solution to this problem.
Other suggestions are of course also welcome.

@maxandersen
Member

BTW, now I grok a little better what you are doing - isn't the only thing you need to have a GraalVM available for your Jenkins to use?

@PieterjanDeconinck

PieterjanDeconinck commented Mar 9, 2020

Yes, that is indeed the first option I mentioned and what I'm currently using (which is working).
But it would be nice to use the container linked to the plugin, since that would ensure compatibility with the Quarkus version used in the POM file and avoid having to create a custom image :)
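
(For the record, a minimal sketch of that option-1 image, assuming the quay.io/quarkus/centos-quarkus-maven image referenced elsewhere in this thread - which ships both Maven and GraalVM's native-image - is an acceptable base; the tag is illustrative and has to be kept in sync with the Quarkus/GraalVM versions by hand.)

## custom Jenkins build image: native-image is available locally,
## so the Quarkus native build runs without touching a Docker daemon
FROM quay.io/quarkus/centos-quarkus-maven:20.0.0-java11
# add whatever CI tooling your Jenkins agents expect (git, ssh, agent jar, ...)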

@maxandersen
Member

Understood - but I would say for now you are better off using this manual install, as it actually works now and will continue to do so ;)

Better (semi-)automatic alignment of Quarkus and GraalVM native-image is something we'll work on, but it's a bit out in the future how this would work.

@davsclaus
Contributor

Trying this again this weekend. The multi-stage build is so far the best option, but all there is are some details on this page: https://quarkus.io/guides/building-native-image

Also, this requires that all your dependencies can be downloaded from Maven Central. I wonder if the Quarkus Maven plugin could have some goal to prepare for this: download all the dependencies and then volume mount them into the multi-stage build so all binaries are already available and the build can run safely. End users will have all sorts of HTTP proxies, third-party Maven repositories and whatnot that they will struggle to set up for a multi-stage docker build.

@maxandersen
Member

@davsclaus I'm not following why your setup requires everything to be published in Maven Central.

If it does, that's definitely a bug.

Can you give an example?

@davsclaus
Contributor

I was running with a SNAPSHOT of camel-quarkus that was built locally (e.g. the master branch).

❯ docker build -f src/main/docker/Dockerfile.multistage -t davsclaus/http-log-native2 .
Sending build context to Docker daemon  76.67MB
Step 1/14 : FROM quay.io/quarkus/centos-quarkus-maven:20.0.0-java11 AS build
20.0.0-java11: Pulling from quarkus/centos-quarkus-maven
ab5ef0e58194: Pull complete
e233717e71a0: Pull complete
329ba4ad41b5: Pull complete
cd2a38969130: Pull complete
a7d5ffe4b96c: Pull complete
d3c3753c703c: Pull complete
56e50e670e6e: Pull complete
a23ddb6d6131: Pull complete
ef586a845e47: Pull complete
65f3bd88a611: Pull complete
3b8659aa803a: Pull complete
e0e62490e007: Pull complete
cd3d99606597: Pull complete
c0ffdaef075d: Pull complete
5c03ced2a615: Pull complete
5ae447b3c12e: Pull complete
0991e8b5594c: Pull complete
96f0b4a36e2c: Pull complete
a2bd942c5e61: Pull complete
d83d8eba9850: Pull complete
2f41b9643ce6: Pull complete
bfb7b317eb56: Pull complete
Digest: sha256:6bd4cb53a4e6f42c548be1fa5e2b6899bfcef07b493f45c3a251a5c6c5327749
Status: Downloaded newer image for quay.io/quarkus/centos-quarkus-maven:20.0.0-java11
 ---> 467418627751
Step 2/14 : COPY src /usr/src/app/src
 ---> a5c7878ae13e
Step 3/14 : COPY pom.xml /usr/src/app
 ---> ce4eab72b7fa
Step 4/14 : USER root
 ---> Running in 008a88818691
Removing intermediate container 008a88818691
 ---> ecf18df6346d
Step 5/14 : RUN chown -R quarkus /usr/src/app
 ---> Running in 3a6c493e698e
Removing intermediate container 3a6c493e698e
 ---> c7b1489dfb5a
Step 6/14 : USER quarkus
 ---> Running in c1ff3c565aa4
Removing intermediate container c1ff3c565aa4
 ---> 1988e3ca896b
Step 7/14 : RUN mvn -f /usr/src/app/pom.xml -Pnative clean package
 ---> Running in 2b19d88afdba
OpenJDK 64-Bit Server VM warning: forcing TieredStopAtLevel to full optimization because JVMCI is enabled
[INFO] Scanning for projects...
Downloading from central: https://repo1.maven.org/maven2/org/apache/camel/quarkus/camel-quarkus-build-parent/1.1.0-SNAPSHOT/maven-metadata.xml
Downloading from central: https://repo1.maven.org/maven2/org/apache/camel/quarkus/camel-quarkus-build-parent/1.1.0-SNAPSHOT/camel-quarkus-build-parent-1.1.0-SNAPSHOT.pom
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[FATAL] Non-resolvable parent POM for org.apache.camel.quarkus:camel-quarkus-examples-http-log:1.1.0-SNAPSHOT: Could not find artifact org.apache.camel.quarkus:camel-quarkus-build-parent:pom:1.1.0-SNAPSHOT in central (https://repo1.maven.org/maven2) and 'parent.relativePath' points at wrong local POM @ line 21, column 13

@davsclaus
Contributor

This is my src/main/docker/Dockerfile.multistage file:

## Stage 1 : build with maven builder image with native capabilities
FROM quay.io/quarkus/centos-quarkus-maven:20.0.0-java11 AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
USER root
RUN chown -R quarkus /usr/src/app
USER quarkus
RUN mvn -f /usr/src/app/pom.xml -Pnative clean package

## Stage 2 : create the docker final image
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.1
WORKDIR /work/
COPY --from=build /usr/src/app/target/*-runner /work/application

# set up permissions for user `1001`
RUN chmod 775 /work /work/application \
  && chown -R 1001 /work \
  && chmod -R "g+rwX" /work \
  && chown -R 1001:root /work

EXPOSE 8080
USER 1001

CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]

@davsclaus
Contributor

davsclaus commented Apr 19, 2020

Also, this requires that your project is totally standalone; e.g. in the error reported above it complains that the parent pom.xml cannot be found, as it is not copied into the docker container in the first step of the multi-stage build. That is somewhat okay, as I can make a standalone example. But the pain is that the first step doesn't have an easy way to use my local .m2/repository as a Maven repo cache (so it can download the JARs I have already pre-downloaded), e.g. in this case for SNAPSHOT jars that are not released on Maven Central.

Since it's a Dockerfile I can probably tinker with this and do some tricks to set up a Maven configuration for step #1, but all of that is not something I want to do.
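
One way to soften that, assuming a BuildKit-enabled daemon (DOCKER_BUILDKIT=1): a cache mount keeps a persistent Maven repository between docker builds, so already-fetched artifacts are not re-downloaded each time. It does not make locally built SNAPSHOTs visible, though, and the paths and tags below are illustrative:

# syntax=docker/dockerfile:1
## first stage only; the cache mount exists during this RUN and is reused across builds
FROM quay.io/quarkus/centos-quarkus-maven:20.0.0-java11 AS build
COPY . /usr/src/app
RUN --mount=type=cache,target=/root/.m2/repository \
    mvn -f /usr/src/app/pom.xml -Pnative clean package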

I would love Quarkus to have this developer joy for me when it comes to building native images for Kubernetes. Ideally I don't have to deal with Docker daemons (maybe jib can help with this) or anything like that, but I'm not sure how far along we are there.

Also, it's a bit of a pain with the Docker image registry and the "magic" of which one is in use. Kubernetes is really terrible at making this obvious; you just get a "cannot pull image". And Quarkus generated image pull policy = Always at one time, and then it doesn't work, as it would attempt to pull the image from Docker Hub or whatever public repo there is, instead of using the image that was pushed to it by the Docker daemon that built it.

So in the application.properties I had to set

# do not attempt to pull images as it won't work on minikube since it will contact an outside registry
# so we set it to the default IfNotPresent
quarkus.kubernetes.image-pull-policy=IfNotPresent

But a lot of users are on Windows and macOS and they will struggle a lot with this. And some of the hype and excitement around Quarkus is the native build, e.g. getting my simple app down from 100 MB RSS memory in Kubernetes to 10 MB (it used to be like that in earlier releases of Graal/Quarkus on Java 8). The JVM version of this runs with 122 MB of memory in my local minikube cluster right now. The macOS native image of this takes approx. 28 MB of memory. The same app took 7-12 MB in January as a Java 8 build; back then I also struggled to get it natively compiled, so I hoped it was better today. We are almost there; it just needs some love and attention to make this great for any normal developer.

@davsclaus
Contributor

I can polish up the example on camel-quarkus, make it standalone, and put it on a personal GitHub for people to try. But we want to put the example in camel-quarkus too, with the details for building it as native for Kubernetes.

The non-Kubernetes example is currently at: https://github.com/apache/camel-quarkus/tree/master/examples/http-log

@davsclaus
Contributor

I created a branch and pushed the example there
https://github.com/apache/camel-quarkus/tree/http-log-kubernetes/examples/http-log

@maxandersen
Member

Yeah, so I'm not sure what we can really do here that will solve all your issues, besides running the full build inside the docker container so all content is available to it "locally".

Your comment about everything needing to be in Maven Central is more that your build needs to be fully doable locally, or at least published to some accessible repo; i.e. a third-party repo should just work.

For your example, though, I still don't follow why it can't build, as it should have access to the same things you already have locally.

About dealing with the image registry and Kubernetes - jib doesn't help here; jib lets you skip the Docker daemon, but you still need a registry.

image pull policy = Always you can override; but the reason for it is that OpenShift's cri-o until recently would not refresh an image, so incremental changes would not be picked up. In later versions of OpenShift we can stop that and rely on tag+sha1 references instead.

I'll need to try your example, because all the native image builds I've tried so far on macOS have "just worked"; but I've probably just been lucky ;)

@maxandersen
Member

BTW, after re-reading the whole thread I'm starting to think I got it backwards ;) Your issue is not the registry being remote, but that the place where containers are built is remote (or at least remote enough to not necessarily have access to everything it needs).

@davsclaus
Contributor

Sorry for being silent for a couple of days. Had too many meetings, and Monday was also my birthday, so there was more family stuff that day.

I will get back to this and try to build a standalone sample project with instructions that can better be used to reproduce this, and go over it to see where we can improve things for end users.

@bragadanilo

Any news on that? I'm also getting the same error.

@jonathan-meier
Contributor

I just came across this issue as well when starting a containerized native image build from within a remote-containers development environment in Visual Studio Code. I can get the containerized build to work using docker cp in combination with a named container (via docker create and docker start) instead of running an anonymous container with docker run as the native image build step currently does:

# create the native image build container but don't start it yet (all the same arguments as for docker run except for removing the volume mount and additionally specifying a container name)
docker create --name native-image-container --env LANG=C --user 0:0 quay.io/quarkus/ubi-quarkus-native-image:20.1.0-java11 <ALL_THE_FANCY_ARGS> quarkus-app-1.0-SNAPSHOT-runner

# copy the native image build sources to /project in the native image build container instead of mounting the volume (this creates an anonymous volume containing the native image build sources and mounts it to /project)
docker cp build/quarkus-app-1.0-SNAPSHOT-native-image-source-jar/. native-image-container:/project

# start the native image build by starting the prepared container, attach to it in order to get the output and wait until the build is finished
docker start --attach native-image-container

# copy the native image back from the build container into the native image build sources folder
docker cp native-image-container:/project/quarkus-app-1.0-SNAPSHOT-runner build

# remove the native image container and its associated anonymous volume
docker container rm --volumes native-image-container

As far as I can see, this is sufficient in all cases, or did I miss something? It would be fairly straightforward to integrate this alternative way of running a containerized native image build into the native image build step, which could be controlled by an application property as already suggested above by @PieterjanDeconinck.

WDYT?

@dekstroza

This also seems to happen when using Windows with WSL for development, where Docker/Kubernetes runs in Windows and is used from WSL by exporting DOCKER_HOST, which is a pretty common setup. It's not just Docker on macOS.
Is there a way to build a native image at all without writing my own Dockerfile with a multi-stage build?
This definitely ruins the development experience!

@jonathan-meier
Contributor

Building native images using a remote Docker daemon has been implemented and merged for Quarkus 1.13 (PR #14635). Just use the flag -Dquarkus.native.remote-container-build=true instead of -Dquarkus.native.container-build=true.
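
For example, reusing the Maven invocation tried earlier in this thread:

# containerized native build that also works against a remote Docker daemon (Quarkus 1.13+)
./mvnw package -Pnative -Dquarkus.native.remote-container-build=true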

@geoand
Contributor

geoand commented Feb 22, 2021

Should we close this issue then and link the PR to the issue?
