CustomResource Lambda with wrong permissions depending on the umask #8233
I have the same issue when a custom resource Lambda (from FargateCluster) gets deployed from Linux with a umask of 0022. As soon as the zip arrives in S3, the files have the wrong permissions (0640). On the file system the permissions of the files are 0644. When deploying from Windows, the file permissions are set to 0666.
This is strange, because 0022 was the umask we previously had, and the permissions were OK with it, so perhaps this depends on something else.
Another occurrence of permissions being set differently once the code is uploaded to S3: the permissions on index.js after cdk synth differ from those in the zipped code uploaded to S3.
Similar issue here: the files are 0755 on the machine and end up 0640 in the zip file on S3.
@jogold what do you think? Should we add some support for fixing up permissions during asset staging? |
mmmh... only for assets that are going to be used for Lambda then? Not sure we should play with file permissions for Docker assets... |
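For concreteness, one rough sketch of what "fixing up permissions during asset staging" could look like (purely an illustration of the idea being discussed here, not what was ultimately implemented; the fix eventually landed in jsii, see below):

```ts
import * as fs from 'fs';
import * as path from 'path';

// Recursively make staged asset files world-readable (and directories
// world-executable), so a restrictive umask on the packaging machine
// does not leak into the Lambda deployment package.
function fixupAssetPermissions(dir: string): void {
  for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
    const entryPath = path.join(dir, entry.name);
    if (entry.isDirectory()) {
      fs.chmodSync(entryPath, 0o755);
      fixupAssetPermissions(entryPath);
    } else {
      // Preserve an existing owner-executable bit, otherwise use 0644.
      const mode = fs.statSync(entryPath).mode;
      fs.chmodSync(entryPath, (mode & 0o100) ? 0o755 : 0o644);
    }
  }
}
```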
Alright, so it turns out this is likely caused by how the `@jsii/kernel` unpacks library tarballs: unlike `npm install`, it does not reset the process umask before extracting, so a restrictive system umask carries over into the unpacked files. I reckon the fix is basically to have the kernel override the umask to `0o022` before unpacking, the same way npm does.
In its wisdom, `npm install` overrides the process' `umask` to `0o022` before unpacking the tarball, to ensure the produced install has the kind of permissions one would expect, regardless of the system-configured `umask`. Because `@jsii/kernel` did not reproduce this behavior, loaded libraries could be unpacked with unexpectedly tight permissions, leading to weird issues when those files were used in contexts that required the expected permissions. For example, this is the cause of aws/aws-cdk#8233. Fixes #1765
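As a rough illustration of that behavior (a minimal sketch only, not the actual jsii code; it assumes Node's built-in `process.umask()` and the `tar` npm package):

```ts
import * as tar from 'tar';

// Mimic `npm install`: force a 0o022 umask while the tarball is unpacked,
// then restore whatever umask the process had before.
async function extractWithNpmUmask(tarball: string, destination: string): Promise<void> {
  const previousUmask = process.umask(0o022); // returns the previous mask
  try {
    await tar.extract({ file: tarball, cwd: destination });
  } finally {
    process.umask(previousUmask); // always restore the caller's umask
  }
}
```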
Hi, which CDK version will include the fix? We are desperately waiting for it. Thanks!
Since this is related to the …
Makes sure the latest version of the `jsii` kernel is inserted in runtime dependencies. Fixes #8233
Description of the bug:
We have a very restrictive umask set on our Linux EC2 instances: 0027.
With this umask in place, the default permissions are too restrictive for a "normal" Lambda deployment, which is why we apply a special mechanism to set the files of our own Lambda functions to 0755. Unfortunately, the Lambda function generated for the AwsCustomResource does not apply any extra permission settings and takes the permissions as they are on the file system. Thus, trying to deploy a stack containing a custom resource fails with a permission error from the generated Lambda (see the Error Log section below).
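As a quick illustration of why this umask matters (a standalone Node.js snippet, not taken from the issue):

```ts
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// With umask 0027, a file created with the default mode 0666 ends up as
// 0666 & ~0027 = 0640, i.e. not readable by "other", while the Lambda
// runtime needs world-readable files to load the handler.
process.umask(0o027);
const file = path.join(os.tmpdir(), 'index.js');
fs.writeFileSync(file, 'exports.handler = async () => {};');
console.log((fs.statSync(file).mode & 0o777).toString(8)); // prints "640"
```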
Reproduction Steps
On a Linux-based OS, set the umask to 0027.
Try to deploy a stack containing a custom resource (directly or indirectly). One of the standard CDK features we found to cause the problem is registering a Cognito custom domain.
But of course anything else that creates a custom resource Lambda will also fail; see the sketch after these steps.
Ensure that the Lambda function asset is really created at this time and not cached from previous executions (the umask is applied at the time a file or directory is created!). It can be found in the assets bucket in S3; check the modification dates.
Download the zip file from S3 and check the permissions of the files it contains.
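To make the reproduction concrete, here is a minimal, hypothetical stack that indirectly creates a custom resource Lambda via `AwsCustomResource` (construct IDs, the SDK call, and parameter values are illustrative, and the import paths assume CDK v2):

```ts
import { Stack, StackProps } from 'aws-cdk-lib';
import * as cr from 'aws-cdk-lib/custom-resources';
import { Construct } from 'constructs';

export class UmaskReproStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Any AwsCustomResource deploys the bundled singleton Lambda, whose
    // asset zip inherits the file permissions of the packaging machine.
    new cr.AwsCustomResource(this, 'ReproCustomResource', {
      onCreate: {
        service: 'SSM',
        action: 'putParameter',
        parameters: { Name: '/repro/umask', Value: 'test', Type: 'String' },
        physicalResourceId: cr.PhysicalResourceId.of('repro-umask-parameter'),
      },
      policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
        resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
      }),
    });
  }
}
```

After deployment, the staged asset zip in the CDK assets bucket can be downloaded and inspected (for example with `zipinfo`) to see the per-file permissions that ended up in the archive.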
Error Log
The permissions of the files in the zip are wrong.
In CloudWatch, the permission error is repeated several times.
Environment
Other
n.a.
This is 🐛 Bug Report