
CustomResource Lambda with wrong permissions depending on the umask #8233

Closed
ilkomiliev opened this issue May 27, 2020 · 10 comments · Fixed by #8968
Assignees
Labels
@aws-cdk/custom-resources Related to AWS CDK Custom Resources bug This issue is a bug. p1

Comments

@ilkomiliev

Description of the bug:

We have a very restrictive umask set on our Linux EC2 instances: 0027. With this umask in place, the default file permissions are too restrictive for a "normal" Lambda deployment, which is why we apply a special mechanism for our own Lambda functions to set them to 0755. Unfortunately, the Lambda function generated for AwsCustomResource does not do any extra permission handling and takes the permissions exactly as they are set on the file system. Trying to deploy a stack containing a custom resource therefore leads to the following error:

2020-05-27T12:02:51.288Z	undefined	ERROR	Uncaught Exception 	
{
    "errorType": "Error",
    "errorMessage": "EACCES: permission denied, open '/var/task/index.js'",
    "code": "EACCES",
    "errno": -13,
    "syscall": "open",
    "path": "/var/task/index.js",
    "stack": [
        "Error: EACCES: permission denied, open '/var/task/index.js'",
        "    at Object.openSync (fs.js:458:3)",
        "    at Object.readFileSync (fs.js:360:35)",
        "    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1152:22)",
        "    at Module.load (internal/modules/cjs/loader.js:977:32)",
        "    at Function.Module._load (internal/modules/cjs/loader.js:877:14)",
        "    at Module.require (internal/modules/cjs/loader.js:1019:19)",
        "    at require (internal/modules/cjs/helpers.js:77:18)",
        "    at _tryRequire (/var/runtime/UserFunction.js:75:12)",
        "    at _loadUserApp (/var/runtime/UserFunction.js:95:12)",
        "    at Object.module.exports.load (/var/runtime/UserFunction.js:140:17)"
    ]
}
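For context, a minimal Python sketch (purely illustrative, not taken from the CDK code base) of why a umask of 0027 produces files like the 0640 index.js above: the umask is applied whenever a file or directory is created, so the read bit for "other" never makes it onto the asset files.

import os
import stat
import tempfile

# Minimal illustration (not CDK code): the umask is subtracted from the
# requested mode whenever a file is created, so with umask 0o027 a file
# requested as 0o666 ends up 0o640 (rw-r-----), unreadable for "other".
old_umask = os.umask(0o027)              # simulate the EC2 instance setting
try:
    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "index.js")
        fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
        os.close(fd)
        mode = stat.S_IMODE(os.stat(path).st_mode)
        print(oct(mode))                 # prints 0o640 on a POSIX system
finally:
    os.umask(old_umask)                  # restore the original umask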

Reproduction Steps

On a Linux-based OS, set the umask to 0027.
Try to deploy a stack containing a custom resource (directly or indirectly). One of the standard CDK methods we found to cause the problem is registering a Cognito custom domain, i.e.:

target = r53.RecordTarget.from_alias(r53_targets.UserPoolDomainTarget(user_pool_domain))
public_zone = fr53.import_hosted_zone(scope, 'hzpublic', HostedZoneTypesEnum.PUBLIC)
r53.ARecord(
    scope,
    id='r53rec',
    target=target,
    zone=public_zone,
    comment='Cognito CloudFront Alias',
    record_name=self.cognito_config.domain_name
)

But of course anything like this will also fail:

cr.AwsCustomResource(
    scope,
    r_id,
    policy=cr.AwsCustomResourcePolicy.from_sdk_calls(resources=cr.AwsCustomResourcePolicy.ANY_RESOURCE),
    on_update=call
)

Ensure that the Lambda function asset is really created at this point and not cached from previous executions (the umask is applied at the time the file or directory is created!). It can be found in the assets bucket on S3; check the modification dates.

Download the zip file as found in S3 and check the permissions on the files:

35d0a3ea655835ce2bf399c19e80a38397cebc9cff491b04a9312c92d338669.zip
-rw-r----- 1 xxx xxx   336 Jan  1  1980 index.d.ts
-rw-r----- 1 xxx xxx 20679 Jan  1  1980 index.js
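The permission bits recorded in the zip can also be checked without unpacking it; a small sketch (the zip name is the one from the listing above):

import zipfile

# Quick check of the Unix permission bits stored in the asset zip; the mode
# lives in the upper 16 bits of each entry's external_attr field.
with zipfile.ZipFile("35d0a3ea655835ce2bf399c19e80a38397cebc9cff491b04a9312c92d338669.zip") as zf:
    for info in zf.infolist():
        mode = (info.external_attr >> 16) & 0o7777
        print(f"{oct(mode)}  {info.filename}")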

Error Log

  1. The permissions of the files in the zip are wrong.

  2. In CloudWatch you can find the above error repeated several times.

Environment

  • CLI Version : 1.41.0 (build 9e071d2)
  • Framework Version:
  • OS : Amazon Linux custom setup
  • Language : Python for CDK development / JS for the Lambda

Other

n.a.

This is 🐛 Bug Report

@ilkomiliev ilkomiliev added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels May 27, 2020
@SomayaB SomayaB added the @aws-cdk/custom-resources Related to AWS CDK Custom Resources label May 27, 2020
@ilkomiliev
Author

Hi,
any info about this? What I'd like to know is whether this is really a CDK issue or a CloudFormation one - does the CDK handle the generation of the Lambda function itself, or does it just delegate this to CloudFormation? If the latter, we are covered by Enterprise Support, and I have meanwhile raised this there as well - the internal case ID is 7033616551, if someone can take a look there too. The workaround currently in place is very ugly (a rough sketch follows the list):

  • fail once
  • download the zip file and change the permissions
  • upload it back to S3
  • hope that it doesn't change too often
  • :-(
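For completeness, a rough sketch of that manual workaround, assuming boto3 is available and configured; the bucket name below is a placeholder for the CDK assets bucket, and the key is the asset zip from the listing earlier:

import io
import zipfile

import boto3  # assumes boto3 and AWS credentials are available

# Rough sketch of the manual workaround described above; bucket is a
# placeholder, the key is the failing asset object.
BUCKET = "cdk-assets-bucket-example"
KEY = "35d0a3ea655835ce2bf399c19e80a38397cebc9cff491b04a9312c92d338669.zip"

s3 = boto3.client("s3")
original = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()

fixed = io.BytesIO()
with zipfile.ZipFile(io.BytesIO(original)) as src, \
        zipfile.ZipFile(fixed, "w", zipfile.ZIP_DEFLATED) as dst:
    for info in src.infolist():
        data = src.read(info.filename)
        # Rewrite each entry with world-readable 0755 permissions.
        info.external_attr = (info.external_attr & 0xFFFF) | (0o100755 << 16)
        dst.writestr(info, data)

s3.put_object(Bucket=BUCKET, Key=KEY, Body=fixed.getvalue())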

Thanks!

@Aerixs

Aerixs commented Jun 3, 2020

I have the same issue when a custom resource Lambda (from FargateCluster) gets deployed from Linux with a umask of 0022.

As soon as the zip arrives in S3, it has the wrong permissions (0640). On the file system the permissions of the files are 0644.

File permissions when deploying from Windows are set to 0666.

@ilkomiliev
Author

This is strange, because 0022 was the umask we previously had, and the permissions were OK with it, so perhaps this depends on something else.

@amzcarlli

Another occurrence of the permissions being set differently once the code is uploaded to S3.
Internal case ID 7079559831.

Environment
CLI Version : 1.42.1

permissions on index.js
-rwxr-xr-x+ 1 Domain Users 1786 Jun 8 22:00 index.js

after cdk synth and zipped code uploaded to S3
-rwx------+ 1 Domain Users 1760 Jun 8 20:10 index.js

@educlos

educlos commented Jun 12, 2020

Similar issue here: the files are 0755 on the machine and end up 0640 in the zip file on S3.
What we noted on our side, however, is that the issue occurs when running the CDK from an EC2 Ubuntu instance, but not when following the exact same steps with the same credentials on a physical Ubuntu laptop.

@eladb
Contributor

eladb commented Jun 22, 2020

@jogold what do you think? Should we add some support for fixing up permissions during asset staging?

@eladb eladb added the p2 label Jun 22, 2020
@SomayaB SomayaB removed the needs-triage This issue or PR still needs to be triaged. label Jun 22, 2020
@jogold
Contributor

jogold commented Jun 22, 2020

@jogold what do you think? Should we add some support for fixing up permissions during asset staging?

mmmh... only for assets that are going to be used for Lambda then? Not sure we should play with file permissions for Docker assets...

@RomainMuller RomainMuller added p1 and removed p2 labels Jun 29, 2020
@RomainMuller
Contributor

Alright, so it turns out this is likely caused by how the @jsii/kernel unpacks the npm libraries at run-time. When using npm directly (e.g. in a TypeScript app), the problem does not happen, because npm resets the permissions of extracted files (I have yet to establish whether those are what is encoded in the tarball, or something else)... But the @jsii/kernel does not do that, and the umask gets applied when we untar (since we do not delegate this to npm).

I reckon the fix is basically to have the @jsii/kernel match npm's permission-setting behavior. I'm going to look into this.
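To illustrate the idea (hedged: the actual fix lives in the TypeScript @jsii/kernel, this is only a Python sketch of the behavior being matched):

import tarfile

# Illustrative Python sketch only. The idea: unpack as if the umask were
# 0o022 (what npm forces), so extracted files stay group/world readable even
# when the caller's umask is stricter, e.g. 0o027. Python's tarfile applies
# member modes via chmod, so the "umask" is applied to the member modes here.
NPM_UMASK = 0o022

def unpack_like_npm(tarball_path: str, dest_dir: str) -> None:
    with tarfile.open(tarball_path) as tar:
        for member in tar.getmembers():
            member.mode = member.mode & (~NPM_UMASK & 0o7777)   # e.g. 0o666 -> 0o644
            tar.extract(member, dest_dir)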

RomainMuller added a commit to aws/jsii that referenced this issue Jun 30, 2020
In its wisdom, `npm install` overrides the process' `umask` to
`0o022` before unpacking the tarball, to ensure the produced install
has the kind of permissions one would expect, regardless of the
system-configured `umask`.

Because `@jsii/kernel` did not reproduce this behavior, loaded libraries
could be unpacked with unexpectedly tight permissions, leading to weird
issues when those files were used in contexts that required those
permissions. For example, this is the cause of aws/aws-cdk#8233.

Fixes #1765
mergify bot pushed a commit to aws/jsii that referenced this issue Jul 1, 2020
@ilkomiliev
Author

Hi, which CDK version will integrate the fix? We are desperately waiting for this.

Thanks!

@RomainMuller
Contributor

Since this is related to the @jsii/kernel, you might already be able to fix this yourself by upgrading your dependencies (making sure the jsii library in your install is >= 1.8.0). The upcoming release of the CDK will generate dependency requirements on this particular version.
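For reference, a quick way to check which jsii runtime a Python CDK environment resolves to (standard importlib.metadata, nothing CDK-specific):

from importlib.metadata import version  # Python 3.8+

# Print the installed jsii runtime version; per the comment above, the
# permission fix requires jsii >= 1.8.0.
print(version("jsii"))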

RomainMuller added a commit that referenced this issue Jul 9, 2020
Makes sure the latest version of the `jsii` kernel is inserted in
runtime dependencies.

Fixes #8233
@mergify mergify bot closed this as completed in #8968 Jul 10, 2020
mergify bot pushed a commit that referenced this issue Jul 10, 2020