Containerized native image build on remote docker daemons (issue #1610) #14635
Conversation
@zakkak could you have a look at this?
LGTM (please see my comment though). Thanks @jonathan-meier!
@gsmet please note that this PR conflicts with #13963, so once one of them gets merged the other will need to be rebased.
@@ -279,7 +280,8 @@ public NativeImageBuildItem build(NativeConfig nativeConfig, NativeImageSourceJa
         }
     }

-    private static NativeImageBuildRunner getNativeImageBuildRunner(NativeConfig nativeConfig, Path outputDir) {
+    private static NativeImageBuildRunner getNativeImageBuildRunner(NativeConfig nativeConfig, Path outputDir,
+            String nativeImageName) {
         boolean isContainerBuild = nativeConfig.containerRuntime.isPresent() || nativeConfig.containerBuild;
I think `nativeConfig.remoteContainerBuild` should be added as well.
Good point! My intention was that `nativeConfig.remoteContainerBuild` is only respected when `nativeConfig.remoteContainer` is `true` and is simply ignored when `nativeConfig.remoteContainer` is `false`. Thinking about it, though, that might be confusing behavior, so I added it to the condition above as you suggest: setting `nativeConfig.remoteContainerBuild` to `true` is now enough to trigger a remote container build.

The question then is which runner should be chosen when both `nativeConfig.remoteContainer` and `nativeConfig.remoteContainerBuild` are set to `true`. The current code would choose the remote container build in that case, which I think is fine.
> The current code would choose the remote container build in that case, which I think is fine.

Yes, that's what I'd expect as a user as well.
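For illustration only, here is a minimal, self-contained sketch of that selection precedence; the enum, class and parameter names are made up for this example and are not the actual Quarkus types.

```java
// Illustrative sketch (hypothetical names): a remote container build takes
// precedence when both container-build flags are enabled.
enum RunnerKind { LOCAL, LOCAL_CONTAINER, REMOTE_CONTAINER }

final class RunnerSelectionSketch {
    static RunnerKind select(boolean containerRuntimeConfigured, boolean containerBuild,
            boolean remoteContainerBuild) {
        // Mirrors the extended condition discussed above: any of the three
        // settings turns the build into a container build.
        boolean isContainerBuild = containerRuntimeConfigured || containerBuild || remoteContainerBuild;
        if (!isContainerBuild) {
            return RunnerKind.LOCAL;
        }
        // When both flags are set, the remote container build wins.
        return remoteContainerBuild ? RunnerKind.REMOTE_CONTAINER : RunnerKind.LOCAL_CONTAINER;
    }
}
```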
I think I prefer #14635 (this PR) to be merged first and then #13963 (since I've already reviewed your code, the rebase should probably be easier for me), but since we would like to get the functionality of #13963 backported to 1.11, it might not be wise to follow that order. That said, I would prefer @gsmet to decide what should get merged first and whether #13754 should be fixed in a different PR than #13963 for 1.11.
Rebased on current master to resolve the conflict introduced by the changes of #14655.
@gsmet I suggest we merge this (and backport to 1.12 as requested by @jonathan-meier if possible); I will then rebase #13963 and get it ready for 1.13. If we end up needing the functionality of #13963 in 1.11, it looks like it will need to be through a finer-grained PR anyway (as you point out in #13963 (comment)).
I rebased this one, let's make progress to get this in 1.13!
Thanks!
Thanks everyone for the reviewing and rebasing work. Super happy that this got merged; it helps us get quite a bit further in building a native image for our app in our CI infrastructure! I think #1610 can be closed now.
Can we also get a small documentation update describing this feature?
Container builds for native images on a remote docker host are currently not possible (#1610). The issue is that the native build step mounts a volume into the build container; the volume provides the sources for the native image build and is also where the final native image is written back to the host. However, volume mounts are not available with remote docker daemons, even though using a remote docker daemon is a pretty common scenario.
I outlined a solution to this problem in a comment on the above-mentioned issue (#1610 (comment)): instead of using a volume mount, copy the build sources into the container and copy the final native image out of the container back to the host.
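As a rough sketch of what that copy-in/copy-out strategy looks like when driving the docker CLI from Java (the image name, container name, paths and build command below are placeholders for illustration, not the values used by the actual runner):

```java
import java.io.IOException;
import java.nio.file.Path;
import java.util.List;

// Hypothetical illustration of the copy-in/copy-out strategy via the docker CLI.
final class RemoteContainerBuildSketch {

    static void buildNativeImage(Path buildDir, String nativeImageName) throws IOException, InterruptedException {
        String builderImage = "quay.io/quarkus/ubi-quarkus-native-image"; // placeholder builder image
        // 1. Create (but do not start) the build container.
        exec("docker", "create", "--name", "native-build", builderImage,
                "native-image", "-jar", nativeImageName + "-runner.jar", nativeImageName);
        // 2. Copy the build sources into the container instead of mounting a volume.
        exec("docker", "cp", buildDir + "/.", "native-build:/project");
        // 3. Run the build and stream its output to the console.
        exec("docker", "start", "--attach", "native-build");
        // 4. Copy the produced native image back to the host.
        exec("docker", "cp", "native-build:/project/" + nativeImageName, buildDir.toString());
        // 5. Remove the container.
        exec("docker", "rm", "native-build");
    }

    private static void exec(String... command) throws IOException, InterruptedException {
        int exitCode = new ProcessBuilder(command).inheritIO().start().waitFor();
        if (exitCode != 0) {
            throw new IllegalStateException("Command failed: " + List.of(command));
        }
    }
}
```

Because `docker cp` works against whatever daemon the CLI is pointed at, this approach does not depend on the daemon being able to mount host paths.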
This PR introduces separate runners for native image builds. The `NativeImageBuildStep` class now acts as a driver for the different native image build runners (local and containerized). Thanks to this new driver/runner structure, a new runner for containerized native image builds on remote docker daemons is provided as well, implementing the strategy outlined above and fixing #1610. It can be enabled with the new property `quarkus.native.remote-container-build=true`. Please note that, due to a permission problem, the new runner does not yet work with the officially provided native build docker images (see PR quarkusio/quarkus-images#128).

In the future, more runners could easily be added, e.g. a runner using the docker-java library (https://github.com/docker-java/docker-java) to communicate with docker instead of explicitly invoking the docker CLI.
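Schematically, the driver/runner split might look like the following; the interface and class names here are simplified stand-ins, not the actual Quarkus types:

```java
import java.nio.file.Path;

// Simplified stand-in for the runner abstraction the build step drives.
interface BuildRunnerSketch {
    void build(Path outputDir, String nativeImageName) throws Exception;
}

final class LocalRunnerSketch implements BuildRunnerSketch {
    public void build(Path outputDir, String nativeImageName) {
        // invoke a locally installed native-image binary
    }
}

final class LocalContainerRunnerSketch implements BuildRunnerSketch {
    public void build(Path outputDir, String nativeImageName) {
        // run native-image in a container, mounting outputDir as a volume
    }
}

final class RemoteContainerRunnerSketch implements BuildRunnerSketch {
    public void build(Path outputDir, String nativeImageName) {
        // copy the sources into the container, run native-image, copy the image back out
    }
}
```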
Resolves: #1610