bake cache misses with multiple targets #414
Tested it again with 74f76cf but the problem still occurs.
I'm also encountering this issue. Baking one target at a time seems to prevent any erroneous cache misses. At first I thought my issue was related to baking targets that spanned multiple Dockerfiles, and that the multiple contexts were throwing BuildKit for a loop. However, even after consolidating all my bake targets into a single Dockerfile, I can occasionally catch an instance where a stage that was cached for one target is somehow missed for another target, even when both targets share the same stages and context/inputs. For example, when baking both the validator and tooler targets from the following definition:

target "runner" {
target = "runner"
}
target "prepper" {
inherits = ["runner"]
target = "prepper"
}
target "validator" {
inherits = ["prepper"]
target = "validator"
tags = ["validator"]
}
target "tooler" {
inherits = ["validator"]
target = "tooler"
tags = ["tooler"]
}

I'll occasionally see a layer that was CACHED for one target get rebuilt for the other; note the difference between the CACHED steps in the log. I realize some of the stdout can be out of sync due to line buffering, but I think this hints at some kind of race condition when downloading and extracting layers that can be used for caching across multiple targets. @crazy-max or @tonistiigi, I can try including more telemetry if there is a recommended method for capturing traces, but from the log below you'll see the exact setup in use.

Bake definition

{
"group": {
"default": {
"targets": [
"validator",
"tooler"
]
}
},
"target": {
"tooler": {
"attest": [
"type=provenance,disabled=true"
],
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"DOCKER_META_IMAGES": "<URL>/<REPO>",
"DOCKER_META_VERSION": "<BRANCH>-tooler"
},
"tags": [
"<URL>/<REPO>:<BRANCH>-tooler"
],
"cache-from": [
"type=registry,ref=<URL>/<REPO>:main-tooler.cache",
"type=s3,blobs_prefix=cache/<REPO>/,manifests_prefix=cache/<REPO>/,region=eu-west-2,bucket=<BUCKET>"
],
"cache-to": [
"type=s3,blobs_prefix=cache/<REPO>/,manifests_prefix=cache/<REPO>/,region=eu-west-2,bucket=<BUCKET>,mode=max"
],
"target": "tooler",
"output": [
"type=image,push=true,push=true,push=true,push=true,push=true"
],
"pull": true,
"no-cache": false
},
"validator": {
"attest": [
"type=provenance,disabled=true"
],
"context": ".",
"dockerfile": "Dockerfile",
"args": {
"DOCKER_META_IMAGES": "<URL>/<REPO>",
"DOCKER_META_VERSION": "<BRANCH>-validator"
},
"tags": [
"<URL>/<REPO>:<BRANCH>-validator"
],
"cache-from": [
"type=registry,ref=<URL>/<REPO>:main-tooler.cache",
"type=s3,blobs_prefix=cache/<REPO>/,manifests_prefix=cache/<REPO>/,region=eu-west-2,bucket=<BUCKET>"
],
"cache-to": [
"type=s3,blobs_prefix=cache/<REPO>/,manifests_prefix=cache/<REPO>/,region=eu-west-2,bucket=<BUCKET>,mode=max"
],
"target": "validator",
"output": [
"type=image,push=true,push=true,push=true,push=true"
],
"pull": true,
"no-cache": false
}
}
}
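As an aside, a resolved definition like the one above can be printed without running a build by passing --print to bake. A minimal sketch, assuming the docker-bake.hcl file name used elsewhere in this thread and the validator/tooler target names from above:

# Print the fully resolved bake definition (groups, targets, args, cache config)
# as JSON without executing any build steps.
docker buildx bake -f docker-bake.hcl --print validator tooler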
I've stumbled on this issue as well.
This makes me think that multiple targets step on each other's toes when not explicitly configured to use separate storage areas.
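In case it helps as a workaround, here is a minimal sketch of giving each target its own cache storage, assuming a registry cache backend; the registry, tag, and target names are placeholders rather than anything from this thread's setup:

# Point every target at its own cache ref so parallel targets cannot
# overwrite each other's cache manifests. All names below are placeholders.
target "s1" {
  context    = "./s1"
  cache-from = ["type=registry,ref=registry.example.com/app:cache-s1"]
  cache-to   = ["type=registry,ref=registry.example.com/app:cache-s1,mode=max"]
}
target "s2" {
  context    = "./s2"
  cache-from = ["type=registry,ref=registry.example.com/app:cache-s2"]
  cache-to   = ["type=registry,ref=registry.example.com/app:cache-s2,mode=max"]
}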
@dud225 or @weber-software, have you tried updating the Dockerfile syntax version you're using? For me, updating from v1.7 to v1.9 may have resolved the superfluous cache-layer busting I've encountered!
From the changelog, I suspect this may have helped with my more involved DAG of Dockerfile stages.
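For anyone else trying this, the frontend version is normally selected with a "# syntax=docker/dockerfile:1.9" line at the top of the Dockerfile. As a sketch only, assuming the builder honours BuildKit's BUILDKIT_SYNTAX build arg for overriding the frontend, it can also be pinned per target from the bake file without touching the Dockerfile:

# Sketch: pin the Dockerfile frontend for one target via the BUILDKIT_SYNTAX
# build arg instead of the "# syntax=" directive. The target name is a placeholder.
target "app" {
  args = {
    BUILDKIT_SYNTAX = "docker/dockerfile:1.9"
  }
}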
Unfortunately, I see cache misses even when using a different storage location for each image. I should say my setup is slightly different in that many of my targets use other targets as contexts. Something like the following:

target "s1" {
context = "./s1"
}
target "s2" {
context = "./s2"
contexts = {
s1 = "target:s1"
}
}
target "s3" {
context = "./s3"
contexts = {
s2 = "target:s2"
}
}
I was hoping to speed up builds by using the parallelism provided by bake.
Therefore I'm running
buildx bake -f docker-bake.hcl
but nearly every time one (or more) of the images gets rebuilt, even though they should all hit the cache.
If I specify single targets, the cache is used as I would expect:
for i in {1..21}; do buildx bake -f docker-bake.hcl s$i; done
What I have found out so far: if the targets don't contain a RUN instruction, the problem doesn't occur.
Are there any ideas why this is happening or how I could investigate this further?