aws_ecr_assets: produces invalid tasks by linking to empty "attestation" image layer #30258
What is a bit strange is that it would make most sense if this resulted from a recent change in, say, Docker Desktop. But AFAICT the adding of attestations by default dates back a lot longer than last week (to maybe Jan 2023: https://www.docker.com/blog/highlights-buildkit-v0-11-release/#1-slsa-provenance). So maybe I just got lucky the first couple of times I deployed my Fargate task.
I think this issue is also affecting Lambda functions that use the Docker image runtime. I had a bunch of deployment issues the last couple of days where I would get an error like: […]
At first I thought there was something wrong in the meaningful part of the Dockerfile they were built from; I made a change and redeployed, and the problem seemed to go away. But then later, when deploying from another branch, I made the same 'fix' and it didn't work. Subsequently I made a connection between my ECS/Fargate issue above and the fact that only my 'Docker image' Lambda functions seemed to have deployment problems; the 'Python runtime' one was doing OK. I tried […] now. But that was because I hadn't changed the Dockerfile, so a new image was not pushed (?). Then I added a […]. To be clear, I think without the […] it would happen again. If attestations are important, then I think there is something in CDK that needs to be aware of them to avoid this issue (pushing and tagging the wrong part of the OCI manifest as the image to use). Or else just force buildx not to create them in the first place.
Did you redeploy it after that time? What have you changed, as it seemed to be working before? And are you able to reproduce this issue by providing a sample Dockerfile and CDK code snippets, so that we could reproduce it in our environment?
I believe it's random whether the right image part gets tagged and pushed, so any reproduction attempt is going to need some way to repeatedly force the Docker image to be rebuilt. I am puzzled why it started happening now; I assumed some update to either CDK or Docker Desktop.
I encountered this issue as well after upgrading Docker Desktop, and the work-around of adding `BUILDX_NO_DEFAULT_ATTESTATIONS=1` worked for me. Docker: Docker version 27.2.0, build 3ab4256
FYI, this also fixed my deployment issues on a Mac (M3) for my DockerImageFunction. My Docker Desktop version is 4.34.2 (167172).
Referenced from pants commits: "Related reading as to why attestations can be disabled by users: aws/aws-cdk#30258, https://stackoverflow.com/questions/77207485/why-are-there-extra-untagged-images-in-amazon-ecr-after-doing-docker-push. In short, upstream tooling is not really ready for buildx + Docker Desktop default outputs, and when disabling these we get a stdout which current pants parsing code was not ready for. Fixes #21729" (cherry-picks: #21737, #21738)
Fix is waiting on dependency cdklabs/cloud-assembly-schema#102
Any idea when this will be released?
Describe the bug
I started getting the following error when trying to run my Fargate tasks: […]
If I go into the AWS web UI to the task definition, I can find the ID of the ECR image that it points to. Then if I look at that ECR image, I can see it has 0 size. I can see in my ECR images list that since 10 May every CDK deployment has pushed a zero-size image to ECR instead of the expected one.
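The same can be checked from the CLI. Something like the following should list image sizes in the CDK assets repository (the repository name, account, and region here are placeholders; CDK bootstrap normally names the repo `cdk-<qualifier>-container-assets-<account>-<region>`):

```console
aws ecr describe-images \
  --repository-name cdk-hnb659fds-container-assets-123456789012-eu-west-1 \
  --query 'sort_by(imageDetails,&imagePushedAt)[].[imagePushedAt,imageDigest,imageSizeInBytes]' \
  --output table
```

A run of near-zero `imageSizeInBytes` values after 10 May would match what the web UI shows.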
I have the following CDK code:
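A minimal sketch of the kind of definition in question (all names, sizes, and paths below are placeholders, not the original code):

```python
from aws_cdk import Stack
from aws_cdk import aws_ecr_assets as ecr_assets
from aws_cdk import aws_ecs as ecs
from constructs import Construct


class MyTaskStack(Stack):  # placeholder stack name
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Build a local Dockerfile into an ECR asset...
        image_asset = ecr_assets.DockerImageAsset(
            self, "MyImage",
            directory="docker",  # directory containing the Dockerfile
        )

        # ...and point a Fargate task definition at it.
        task_definition = ecs.FargateTaskDefinition(
            self, "MyTaskDef",
            cpu=256,
            memory_limit_mib=512,
        )
        task_definition.add_container(
            "MyContainer",
            image=ecs.ContainerImage.from_docker_image_asset(image_asset),
            logging=ecs.LogDrivers.aws_logs(stream_prefix="my-task"),
        )
```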
(Before 10 May I had previously deployed and run Fargate tasks successfully from this definition.)
Expected Behavior
A usable ECS task definition is deployed
Current Behavior
Inscrutable error message
It appears that CDK has created the task definition against an invalid ECR image
Reproduction Steps
See above
Additional Information/Context
I have located what seems to be the cause, with help from this issue thread: moby/moby#45600
Using `aws ecr batch-get-image` I can see the manifest of my problem zero-sized image: […] This seems to relate to the error message and fit with the details in the moby issue linked above.
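For reference, Docker's attestation-storage format puts the attestation alongside the real image in the index: an extra manifest entry with platform `unknown/unknown` and `vnd.docker.reference.*` annotations. A placeholder example of such an index (digests shortened) looks roughly like:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:aaaa…",
      "size": 1234,
      "platform": { "architecture": "amd64", "os": "linux" }
    },
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:bbbb…",
      "size": 1234,
      "annotations": {
        "vnd.docker.reference.type": "attestation-manifest",
        "vnd.docker.reference.digest": "sha256:aaaa…"
      },
      "platform": { "architecture": "unknown", "os": "unknown" }
    }
  ]
}
```

Only the first entry is a runnable image; the second exists purely to carry provenance.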
Basically, when `cdk deploy` builds the image locally (via docker buildx), extra "attestation" items are added to the root manifest (???). I guess by themselves these aren't harmful (they are part of the OCI standard or whatever), but CDK is maybe not expecting them and ends up pushing and tagging the wrong thing into ECR.
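A quick way to check whether a freshly built tag carries these extra entries (the image name here is a placeholder) is to dump the raw index:

```console
docker buildx imagetools inspect myimage:latest --raw
```

If the output lists manifests annotated with `vnd.docker.reference.type: attestation-manifest`, the default attestations are present.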
Possible Solution
`BUILDX_NO_DEFAULT_ATTESTATIONS=1 cdk deploy` worked for me (after adding an arbitrary change to my Dockerfile to force a rebuild). I think it would be better if CDK explicitly adds `--provenance=false` in its calls to `docker buildx`.
See https://docs.docker.com/reference/cli/docker/buildx/build/#provenance and https://docs.docker.com/build/attestations/attestation-storage/
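For anyone experimenting outside CDK, the equivalent on a direct buildx invocation (image name and build context are placeholders) would be:

```console
# Provenance is the attestation buildx adds by default;
# --sbom=false additionally rules out SBOM attestations.
docker buildx build --provenance=false --sbom=false -t myimage:latest .
```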
CDK CLI Version
2.142.0 (build 289a1e3)
Framework Version
No response
Node.js Version
v18.18.0
OS
macOS 14.4.1
Language
Python
Language Version
3.11.5
Other information
No response