Merge branch 'master' into jerry-shao/logs-query-definition
mergify[bot] authored Apr 20, 2022
2 parents 4256d62 + 3ce40b4 commit 4e28c56
Showing 57 changed files with 4,327 additions and 1,391 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/close-stale-prs.yml
@@ -3,7 +3,7 @@ on:
# Cron format: min hr day month dow
- cron: "0 0 * * *"
jobs:
rix0rrr/close-stale-prs:
close-stale-prs:
permissions:
pull-requests: write
runs-on: ubuntu-latest
38 changes: 38 additions & 0 deletions INTEGRATION_TESTS.md
@@ -10,6 +10,7 @@ on what type of changes require integrations tests and how you should write inte
- [New L2 Constructs](#new-l2-constructs)
- [Existing L2 Constructs](#existing-l2-constructs)
- [Assertions](#assertions)
- [Running Integration Tests](#running-integration-tests)

## What are CDK Integration Tests

@@ -223,3 +224,40 @@ to deploy the Lambda Function _and_ then rerun the assertions to ensure that the

### Assertions
...Coming soon...

## Running Integration Tests

Most of the time you will only need to run integration tests for an individual module (e.g. `aws-lambda`). Other times you may need to run tests across multiple modules.
In that case, we recommend running from the root directory as shown below.

_Run snapshot tests only_
```bash
yarn integ-runner --directory packages/@aws-cdk
```

_Run snapshot tests and then re-run integration tests for failed snapshots_
```bash
yarn integ-runner --directory packages/@aws-cdk --update-on-failed
```

One benefit of running from the root directory like this is that the runner will only collect tests from "built" modules. If you have built the entire
repo it will run all integration tests, but if you have only built a couple of modules it will only run tests from those.
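
If you only need the tests for a single module, one option (a sketch, assuming the module is already built) is to point `--directory` at that module:

```bash
yarn integ-runner --directory packages/@aws-cdk/aws-lambda --update-on-failed
```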

### Running Large Numbers of Tests

If you need to re-run a large number of tests, you can run them in parallel like this:

```bash
yarn integ-runner --directory packages/@aws-cdk --update-on-failed \
--parallel-regions us-east-1 \
--parallel-regions us-east-2 \
--parallel-regions us-west-2 \
--parallel-regions eu-west-1 \
--profiles profile1 \
--profiles profile2 \
--profiles profile3 \
--verbose
```

When using both `--parallel-regions` and `--profiles`, the runner executes (regions × profiles) tests in parallel (12 in this example).
If you want to execute more than 16 tests in parallel, you can pass a higher value to `--max-workers`.
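
For example, to allow up to 24 tests to run concurrently (an illustrative value, combined with the flags shown above), raise the cap like this:

```bash
yarn integ-runner --directory packages/@aws-cdk --update-on-failed --max-workers 24
```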
1 change: 0 additions & 1 deletion deprecated_apis.txt
@@ -27,7 +27,6 @@
@aws-cdk/core.DefaultStackSynthesizerProps#fileAssetKeyArnExportName
@aws-cdk/core.DockerImageAssetSource#repositoryName
@aws-cdk/core.Duration#toISOString
@aws-cdk/core.FileAssetLocation#kmsKeyArn
@aws-cdk/core.FileAssetLocation#s3Url
@aws-cdk/core.ITemplateOptions#transform
@aws-cdk/core.Lazy#anyValue
8 changes: 4 additions & 4 deletions package.json
@@ -21,10 +21,10 @@
"fs-extra": "^9.1.0",
"graceful-fs": "^4.2.10",
"jest-junit": "^13.1.0",
"jsii-diff": "^1.56.0",
"jsii-pacmak": "^1.56.0",
"jsii-reflect": "^1.56.0",
"jsii-rosetta": "^1.56.0",
"jsii-diff": "^1.57.0",
"jsii-pacmak": "^1.57.0",
"jsii-reflect": "^1.57.0",
"jsii-rosetta": "^1.57.0",
"lerna": "^4.0.0",
"patch-package": "^6.4.7",
"semver": "^6.3.0",
2 changes: 1 addition & 1 deletion packages/@aws-cdk/aws-lambda/lib/function.ts
@@ -96,7 +96,7 @@ export interface FunctionOptions extends EventInvokeConfigOptions {
readonly memorySize?: number;

/**
* The size of the function’s /tmp directory in MB.
* The size of the function’s /tmp directory in MiB.
*
* @default 512 MiB
*/
16 changes: 11 additions & 5 deletions packages/@aws-cdk/aws-s3-deployment/README.md
@@ -139,7 +139,8 @@ new s3deploy.BucketDeployment(this, 'DeployMeWithoutDeletingFilesOnDestination',
});
```

This option also enables you to specify multiple bucket deployments for the same destination bucket & prefix,
This option also enables you to
specify multiple bucket deployments for the same destination bucket & prefix,
each with its own characteristics. For example, you can set different cache-control headers
based on file extensions:
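
A sketch of what this can look like (paths and cache-control values are illustrative; `cacheControl` and `prune` are existing `BucketDeployment` options, and `exclude` is an asset option):

```ts
declare const bucket: s3.Bucket;

// Cache-friendly headers for everything except index.html; `prune: false`
// keeps the two deployments from deleting each other's files.
new s3deploy.BucketDeployment(this, 'CacheableAssetsDeployment', {
  sources: [s3deploy.Source.asset('./website', { exclude: ['index.html'] })],
  destinationBucket: bucket,
  cacheControl: [s3deploy.CacheControl.fromString('max-age=31536000,public,immutable')],
  prune: false,
});

// index.html is never cached, so new releases are picked up immediately.
new s3deploy.BucketDeployment(this, 'HTMLAssetDeployment', {
  sources: [s3deploy.Source.asset('./website', { exclude: ['*', '!index.html'] })],
  destinationBucket: bucket,
  cacheControl: [s3deploy.CacheControl.fromString('max-age=0,no-cache,no-store,must-revalidate')],
  prune: false,
});
```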

@@ -259,14 +260,19 @@ new s3deploy.BucketDeployment(this, 'DeployWithInvalidation', {
});
```

## Memory Limit
## Size Limits

The default memory limit for the deployment resource is 128MiB. If you need to
copy larger files, you can use the `memoryLimit` configuration to specify the
copy larger files, you can use the `memoryLimit` configuration to increase the
size of the AWS Lambda resource handler.

> NOTE: a new AWS Lambda handler will be created in your stack for each memory
> limit configuration.
The default ephemeral storage size for the deployment resource is 512MiB. If you
need to upload larger files, you may hit this limit. You can use the
`ephemeralStorageSize` configuration to increase the storage size of the AWS Lambda
resource handler.

> NOTE: a new AWS Lambda handler will be created in your stack for each combination
> of memory and storage size.
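
For illustration, a deployment that raises both limits might look like this (a sketch; the values are arbitrary):

```ts
declare const bucket: s3.Bucket;

new s3deploy.BucketDeployment(this, 'DeployLargeFiles', {
  sources: [s3deploy.Source.asset('./large-files')],
  destinationBucket: bucket,
  memoryLimit: 1024, // memory for the Lambda handler, in MiB
  ephemeralStorageSize: cdk.Size.mebibytes(1024), // size of the handler's /tmp directory
});
```
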
## EFS Support

35 changes: 27 additions & 8 deletions packages/@aws-cdk/aws-s3-deployment/lib/bucket-deployment.ts
@@ -121,6 +121,13 @@ export interface BucketDeploymentProps {
*/
readonly memoryLimit?: number;

/**
* The size of the AWS Lambda function’s /tmp directory in MiB.
*
* @default 512 MiB
*/
readonly ephemeralStorageSize?: cdk.Size;

/**
* Mount an EFS file system. Enable this if your assets are large and you encounter disk space errors.
* Enabling this option will require a VPC to be specified.
@@ -292,7 +299,7 @@ export class BucketDeployment extends CoreConstruct {

const mountPath = `/mnt${accessPointPath}`;
const handler = new lambda.SingletonFunction(this, 'CustomResourceHandler', {
uuid: this.renderSingletonUuid(props.memoryLimit, props.vpc),
uuid: this.renderSingletonUuid(props.memoryLimit, props.ephemeralStorageSize, props.vpc),
code: lambda.Code.fromAsset(path.join(__dirname, 'lambda')),
layers: [new AwsCliLayer(this, 'AwsCliLayer')],
runtime: lambda.Runtime.PYTHON_3_7,
@@ -304,6 +311,7 @@
timeout: cdk.Duration.minutes(15),
role: props.role,
memorySize: props.memoryLimit,
ephemeralStorageSize: props.ephemeralStorageSize,
vpc: props.vpc,
vpcSubnets: props.vpcSubnets,
filesystem: accessPoint ? lambda.FileSystem.fromEfsAccessPoint(
@@ -331,7 +339,7 @@
// the sources actually has markers.
const hasMarkers = sources.some(source => source.markers);

const crUniqueId = `CustomResource${this.renderUniqueId(props.memoryLimit, props.vpc)}`;
const crUniqueId = `CustomResource${this.renderUniqueId(props.memoryLimit, props.ephemeralStorageSize, props.vpc)}`;
this.cr = new cdk.CustomResource(this, crUniqueId, {
serviceToken: handler.functionArn,
resourceType: 'Custom::CDKBucketDeployment',
@@ -426,21 +434,32 @@
return this._deployedBucket;
}

private renderUniqueId(memoryLimit?: number, vpc?: ec2.IVpc) {
private renderUniqueId(memoryLimit?: number, ephemeralStorageSize?: cdk.Size, vpc?: ec2.IVpc) {
let uuid = '';

// if user specify a custom memory limit, define another singleton handler
// if the user specifies a custom memory limit, we define another singleton handler
// with this configuration. otherwise, it won't be possible to use multiple
// configurations since we have a singleton.
if (memoryLimit) {
if (cdk.Token.isUnresolved(memoryLimit)) {
throw new Error('Can\'t use tokens when specifying "memoryLimit" since we use it to identify the singleton custom resource handler');
throw new Error("Can't use tokens when specifying 'memoryLimit' since we use it to identify the singleton custom resource handler.");
}

uuid += `-${memoryLimit.toString()}MiB`;
}

// if user specify to use VPC, define another singleton handler
// if the user specifies a custom ephemeral storage size, we define another singleton handler
// with this configuration. otherwise, it won't be possible to use multiple
// configurations since we have a singleton.
if (ephemeralStorageSize) {
if (ephemeralStorageSize.isUnresolved()) {
throw new Error("Can't use tokens when specifying 'ephemeralStorageSize' since we use it to identify the singleton custom resource handler.");
}

uuid += `-${ephemeralStorageSize.toMebibytes().toString()}MiB`;
}

// if the user specifies a VPC, we define another singleton handler
// with this configuration. otherwise, it won't be possible to use multiple
// configurations since we have a singleton.
// A VPC is a must if EFS storage is used and that's why we are only using VPC in uuid.
@@ -451,10 +470,10 @@
return uuid;
}

private renderSingletonUuid(memoryLimit?: number, vpc?: ec2.IVpc) {
private renderSingletonUuid(memoryLimit?: number, ephemeralStorageSize?: cdk.Size, vpc?: ec2.IVpc) {
let uuid = '8693BB64-9689-44B6-9AAF-B0CC9EB8756C';

uuid += this.renderUniqueId(memoryLimit, vpc);
uuid += this.renderUniqueId(memoryLimit, ephemeralStorageSize, vpc);

return uuid;
}
44 changes: 40 additions & 4 deletions packages/@aws-cdk/aws-s3-deployment/test/bucket-deployment.test.ts
@@ -714,9 +714,7 @@ test('memoryLimit can be used to specify the memory limit for the deployment res
const bucket = new s3.Bucket(stack, 'Dest');

// WHEN

// we define 3 deployments with 2 different memory configurations

new s3deploy.BucketDeployment(stack, 'Deploy256-1', {
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket: bucket,
@@ -736,14 +734,52 @@
});

// THEN

// we expect to find only two handlers, one for each configuration

Template.fromStack(stack).resourceCountIs('AWS::Lambda::Function', 2);
Template.fromStack(stack).hasResourceProperties('AWS::Lambda::Function', { MemorySize: 256 });
Template.fromStack(stack).hasResourceProperties('AWS::Lambda::Function', { MemorySize: 1024 });
});

test('ephemeralStorageSize can be used to specify the storage size for the deployment resource handler', () => {
// GIVEN
const stack = new cdk.Stack();
const bucket = new s3.Bucket(stack, 'Dest');

// WHEN
// we define 3 deployments with 2 different ephemeral storage configurations
new s3deploy.BucketDeployment(stack, 'Deploy512-1', {
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket: bucket,
ephemeralStorageSize: cdk.Size.mebibytes(512),
});

new s3deploy.BucketDeployment(stack, 'Deploy512-2', {
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket: bucket,
ephemeralStorageSize: cdk.Size.mebibytes(512),
});

new s3deploy.BucketDeployment(stack, 'Deploy1024', {
sources: [s3deploy.Source.asset(path.join(__dirname, 'my-website'))],
destinationBucket: bucket,
ephemeralStorageSize: cdk.Size.mebibytes(1024),
});

// THEN
// we expect to find only two handlers, one for each configuration
Template.fromStack(stack).resourceCountIs('AWS::Lambda::Function', 2);
Template.fromStack(stack).hasResourceProperties('AWS::Lambda::Function', {
EphemeralStorage: {
Size: 512,
},
});
Template.fromStack(stack).hasResourceProperties('AWS::Lambda::Function', {
EphemeralStorage: {
Size: 1024,
},
});
});

test('deployment allows custom role to be supplied', () => {

// GIVEN