forked from aws/aws-cdk
Merge AWS CDK master into forked repository master #1
Merged
Conversation
…8450) When a table was deployed with `serverSideEncryption` set to `true` (by requesting `AWS_MANAGED` or `CUSTOM` server-side encryption), it was not possible to switch back to `DEFAULT`, as doing so could drop the `serverSideEncryption` configuration altogether, which CloudFormation does not allow. This change makes `Table` continue to omit the `serverSideEncryption` configuration if nothing was configured (the user chose the implicit default behavior), but set the value explicitly to `false` if the user *explicitly* requests `DEFAULT` encryption. This makes it possible to switch away from `AWS_MANAGED` and `CUSTOM` encryption to the cheaper `DEFAULT` alternative. Fixes #8286 ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
Add the amzn scope to our version reporting, owned by Amazon: https://www.npmjs.com/org/amzn
By default, asset bundling is skipped for `cdk list` and `cdk destroy`. For `cdk deploy`, `cdk diff` and `cdk synthesize`, the default is to bundle assets for all stacks unless `exclusively` is specified, in which case only the listed stacks will have their assets bundled. Closes #9540
Support the VPC property in ShellScriptAction. Partially fixes #9982.
When running `cdk deploy`, the stack outputs printed to the terminal are currently returned in the same order as the `DescribeStacks` API call, which does not seem to provide a contract on ordering, per the [docs](https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Stack.html). This change sorts the keys of the stack outputs before display, which is consistent with the "Outputs" tab in the AWS CloudFormation console.
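The fix described above amounts to ordering output entries by key before printing. A minimal sketch of the idea (the helper name `sortOutputKeys` is illustrative, not the actual CLI code):

```ts
// Sort stack outputs by key so the display is deterministic, matching
// the ordering of the CloudFormation console's "Outputs" tab, instead
// of relying on the unspecified ordering of the DescribeStacks response.
function sortOutputKeys(outputs: Record<string, string>): Array<[string, string]> {
  return Object.entries(outputs).sort(([a], [b]) => a.localeCompare(b));
}

const display = sortOutputKeys({
  WebsiteUrl: 'https://example.com',
  ApiEndpoint: 'https://api.example.com',
});
// ApiEndpoint now comes before WebsiteUrl regardless of response order.
console.log(display.map(([k, v]) => `${k} = ${v}`).join('\n'));
```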
This PR adds a machine image that is backed by a custom SSM parameter.
In PR #10148, @rix0rrr made it possible to provide a custom CodePipeline pipeline instance to CdkPipeline. This also made the `sourceAction` (Source stage) and `synthAction` (Build stage) props optional. However, validation was added to ensure that if `synthAction` is not provided, the pipeline already contains at least two stages (assuming those would be Source and Build). Logically though, CdkPipeline works perfectly fine without a Build stage, if an already-built cloud assembly is provided in the source stage (e.g. an S3 source action). A use case for this is, for example, separating CI and CD logic, where CDK synthesis happens within the CI build and the assembly is stored as an artifact to be deployed by a pipeline. This PR makes the Build stage optional, to allow this use case without the need for a dummy build stage. Example pipeline code:

```ts
export class PipelineStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props: cdk.StackProps) {
    super(scope, id, props);

    const versionsBucket = new s3.Bucket(this, 'VersionsBucket', {
      bucketName: 's3pipeline-app-versions',
      versioned: true,
    });

    // The CodePipeline
    const cloudAssemblyArtifact = new codepipeline.Artifact();
    const codePipeline = new codepipeline.Pipeline(this, 'CodePipeline', {
      pipelineName: 'S3Pipeline',
      restartExecutionOnUpdate: true,
      stages: [{
        stageName: 'Source',
        actions: [new actions.S3SourceAction({
          actionName: 'S3',
          bucket: versionsBucket,
          bucketKey: 'cloudassembly.zip',
          output: cloudAssemblyArtifact,
        })],
      }],
    });

    // CDK Pipeline
    const cdkPipeline = new pipelines.CdkPipeline(this, 'CdkPipeline', {
      codePipeline,
      cloudAssemblyArtifact,
    });

    // Add application stage
    cdkPipeline.addApplicationStage(new MyAppStage(this, 'PreProd'));
  }
}
```
…source' resource type (#10415) The resource type 'AWS::CloudFormation::CustomResource' corresponds to the class CfnCustomResource. However, that class is automatically generated and quite useless; it only supports one property, ServiceToken, and does not support passing in an arbitrary collection of properties, like custom resources in CloudFormation do. As a result, cfn-include would "lose" all properties of resources of type 'AWS::CloudFormation::CustomResource' other than ServiceToken. Fix the problem by handling this resource type with the CfnResource class, which does support an arbitrary collection of properties.
…cy when bucket and lambda are in different stacks (#10426) When the bucket and function are in two different stacks, the following stacks are created:

### Bucket Stack

- `s3.Bucket`
- `s3.BucketNotificationHandler` (creates a dependency on the **lambda stack**, since it configures the target of the trigger)

### Lambda Stack

- `lambda.Function`
- `lambda.Permission` (creates a dependency on the **bucket stack**, since it configures the lambda to allow invocations from that specific bucket)

The solution is to switch the `lambda.Permission` scope and use the bucket instead of the function, so that it is added to the bucket stack, leaving the lambda stack independent. Fixes #5760
- Fixed the incorrect comparison operator (LTE) string from '>=' to '<=' - fixes #8913
…8828) **[ISSUE]** Imported Lambda functions are unable to add a new resource policy. **[APPROACH]** Add a check for imported Lambda functions comparing the account id of the stack with the account id of the imported function. If they match, the imported function can add permissions. Fixes #7588
One of the contributors to longer runtimes, and we definitely don't need stack traces in it. Relates to #10213.
Fix Parcel detection for non-JS/TS CDK projects. For those projects the module `@aws-cdk/aws-lambda-nodejs` is not installed in a `node_modules` folder inside the project. Change the detection logic to `require.resolve` from the project root. Also in this fix: ensure that the Parcel version run inside the container is the one installed at `/`. Previously, if an incorrect version of Parcel was detected, bundling would happen in a container as expected but with the incorrect version, because the project root is mounted at `/asset-input` and in this case it contains the incorrect Parcel version at `/asset-input/node_modules`. Again, change the `require.resolve` paths to avoid this. Addresses #10123 (not sure yet if it closes it)
Closes #10443
CloudFormation allows using short-form versions of intrinsic functions like `!GetAtt`. We handled them correctly in the `@aws-cdk/cloudformation-include` module, so extract that logic to a common package, and use it from the CLI in the `diff` command as well. Fixes #6537
…10440) There's been some confusion around how to set `GitHubSourceActionProps`'s `oauthToken` property to a GitHub token that was stored as a JSON key-value pair in Secrets Manager. - Updating the [Github Source](https://docs.aws.amazon.com/cdk/api/latest/docs/aws-codepipeline-actions-readme.html#github) section of the docs to clarify how to do so. Closes #8731
… to AWSCodeDeployRoleForLambdaLimited (#10276) The managed policy `AWSCodeDeployRoleForLambda` used for Lambda deployments has broad permissions, providing publish access to all SNS topics within the customer's accounts. This change replaces that with a new policy `AWSCodeDeployRoleForLambdaLimited` which removes those permissions. This should be safe, as the SNS publish permission is only ever used when setting up `triggers`, and we don't support that feature in `LambdaDeploymentGroup`. BREAKING CHANGE: the default policy for `LambdaDeploymentGroup` no longer contains `sns:Publish` on `*` permissions
Add a function to the Fn class to parse the domain name from a given URL. Fixes #5433
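For concrete (non-token) URLs, the extraction is equivalent to reading the hostname; a minimal sketch, where `domainFromUrl` is an illustrative local helper rather than the merged `Fn` API, which additionally has to work on deploy-time values:

```ts
// Sketch: extract the domain name (hostname) from a URL string.
function domainFromUrl(url: string): string {
  // The WHATWG URL parser is built into Node, no imports needed.
  return new URL(url).hostname;
}

console.log(domainFromUrl('https://docs.aws.amazon.com/cdk/index.html'));
// docs.aws.amazon.com
```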
When assuming a role for uploading assets in the new-style synthesized stacks, the OS username was used to build the session name. OS usernames allow a wider character set than `RoleSessionName` does, so we need to sanitize them. Fixes #10401.
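Such a sanitization can be sketched as follows. STS restricts `RoleSessionName` to characters matching `[\w+=,.@-]` and to 64 characters; the helper name and the choice of `-` as the replacement character here are illustrative assumptions, not necessarily what the CLI does:

```ts
// Replace characters not allowed in an STS RoleSessionName and clamp
// the result to the 64-character maximum.
function sanitizeSessionName(name: string): string {
  return name.replace(/[^\w+=,.@-]/g, '-').slice(0, 64);
}

console.log(sanitizeSessionName("Alice O'Neill")); // Alice-O-Neill
```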
Deprecate `AssetHashType.BUNDLE` in favor of `AssetHashType.OUTPUT`. Improve JSDoc for `AssetHashType`. Closes #9861
Lambda recently added support for MSK as an event source (https://aws.amazon.com/about-aws/whats-new/2020/08/aws-lambda-now-supports-amazon-managed-streaming-for-apache-kafka-as-an-event-source/), and there's now a "Topics" property on the CloudFormation resource definition (https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-lambda-eventsourcemapping.html#cfn-lambda-eventsourcemapping-topics). Closes #10138
Add `.devcontainer.json` referencing the existing `.gitpod.yml` to support GitHub Codespaces. Closes #10447
chore(rds): add additional aurora mysql engine versions Closes: #10476
…#10463) When using the EVENTS trigger, an event is created based on the branch name; however, this is not possible if the branch name is an unresolved value. Therefore, generate a unique event name in that case. Fixes #10263
…anges (#10483)
chore(release): 1.64.0
A few readme touchups and clarifications.
…odule (#10472) Introduce an environment variable, `AWSLINT_BASE_CONSTRUCT`, recognized by `awslint`. This environment variable indicates that the module has [migrated][compat-rfc] away from construct classes and interfaces in the `@aws-cdk/core` module to those in the `constructs` module. Specific rules in the linter recognize this variable and modify their expectations.

### Motivation

The primary motivation is to move the code base towards [removal of the construct compat layer][compat-rfc] as part of [CDKv2]. A large number of code changes to adopt the "constructs" module can already be done as part of CDKv1 without incurring breaking changes to the API. This change enables those changes to be performed module-by-module. As modules are migrated, this flag will be enabled to ensure no regression.

[CDKv2]: https://github.com/aws/aws-cdk-rfcs/blob/master/text/0079-cdk-2.0.md
[compat-rfc]: https://github.com/aws/aws-cdk-rfcs/blob/master/text/0192-remove-constructs-compat.md
#10458) See #7927 (comment) for motivation and design. The current way of specifying master user logins for `DatabaseInstance` and `DatabaseCluster` is inconsistent between the two and introduces some awkward usage when creating a login from an existing `Secret`. This change converts the existing `Login` interface (used by `DatabaseCluster`) into a class with factory methods for username/password or secret-based logins, and re-uses that same interface for `DatabaseInstance`. The one exception is `DatabaseInstanceFromSnapshot`, which has specific requirements that deserved its own interface (`SnapshotLogin`). As a side effect of this approach, existing `DatabaseCluster` users -- in TypeScript at least -- will not be broken. For example, the following are equivalent:

```ts
new rds.DatabaseCluster(this, 'Cluster1', {
  // Existing usage
  masterUser: {
    username: 'admin',
  },
  // New usage
  masterUser: Login.fromUsername('admin'),
});
```

Lastly, this change makes the whole `masterUser` prop optional, as there's no good reason why we can't default a username. Fixes #7927

BREAKING CHANGE: `DatabaseInstanceProps` and `DatabaseInstanceFromSnapshotProps` - `masterUsername`, `masterUserPassword` and `masterUserPasswordEncryptionKey` moved to `credentials` as a new `Credentials` class.

* **rds:** `Login` renamed to `Credentials`. Use `Credentials.fromUsername` to replace existing usage.
* **rds:** `DatabaseClusterProps` `masterUser` renamed to `credentials`.
The current ProxyTarget relied on the underlying L1s to get the engine type for a given Cluster/Instance. Change IDatabaseCluster and IInstanceEngine to add an (optional) `engine` property that is used instead. Allow the user to specify the engine when importing a Cluster or Instance. Also move the logic of determining the engine family into `IEngine`. Fixes #9195
…build (#10502) Caused by JSII issue: aws/jsii#2040
See [CHANGELOG](https://github.com/aws/aws-cdk/blob/merge-back/1.64.0/CHANGELOG.md)
Now that we suppress output of non-failing tests, it becomes all the more important to have detailed information for failing tests. ---- *By submitting this pull request, I confirm that my contribution is made under the terms of the Apache-2.0 license*
…0507) The change introduced in #9576 did not handle the "staging disabled" case. As a consequence, when bundling, the staged path was always relative. Revert to the behavior that was present before this change: when staging is disabled, the staged path is absolute (whether bundling or not). Closes #10367
Reverts #10503. We can't actually do this: there are tests that check that the output of the `cdk` command is *exactly* "some value", and adding the logging breaks that expectation. Revert the `-v` to allow the tests to go back to passing 90% of the time.
…stances (#10324) fixes #9926 Added the following parameters to DatabaseCluster:

* AutoMinorVersionUpgrade
* AllowMajorVersionUpgrade
* DeleteAutomatedBackups

Using #10092 as a reference, only simple parameters were defined.
…ependency and breaks deployment (#10536) In version [`1.62.0`](https://github.com/aws/aws-cdk/releases/tag/v1.62.0) we introduced the ability to run `kubectl` commands on imported clusters. (See #9802.) Part of this change included some refactoring with regards to how we use and create the `KubectlProvider`. It looks like we didn't consistently apply the same logic across all constructs that use it. Case in point: https://github.com/aws/aws-cdk/blob/e349004a522e2123c1e93bd3402dd7c3f9c5c17c/packages/%40aws-cdk/aws-eks/lib/k8s-manifest.ts#L58 Notice that here we use `this` as the scope for the `getOrCreate` call. The same goes for: https://github.com/aws/aws-cdk/blob/e349004a522e2123c1e93bd3402dd7c3f9c5c17c/packages/%40aws-cdk/aws-eks/lib/k8s-object-value.ts#L64 However, `KubernetesPatch` uses `scope` instead: https://github.com/aws/aws-cdk/blob/e349004a522e2123c1e93bd3402dd7c3f9c5c17c/packages/%40aws-cdk/aws-eks/lib/k8s-patch.ts#L74 This means that the entire `scope` of the `KubernetesPatch` now depends, among others, on the `kubectlBarrier`. The scope will usually be either the cluster itself (when using `FargateCluster`) or the entire stack (when using `new KubernetesPatch`). In either case, the scope will most likely contain the cluster VPC. This creates the following dependency cycle: `Cluster => ClusterVpc => KubectlBarrier => Cluster`. The fix aligns the `KubernetesPatch` behavior with all other `kubectl` constructs and uses `this` as the scope, which will only add a dependency on the barrier to the custom resource representing the patch. Fixes #10528 Fixes #10537
Following up on #10503, enabling verbose logging for integ tests. Opt out for tests that rely on an exact match of the output:

* 'cdk synth' - matches the output of `synth`.
* 'Two ways of shoing the version' - This one is trickier. Since `--version` is implemented using `.version()` of `yargs`, it ignores the `-v` argument, but `version` (no dash), which is our implementation, respects it.

```
$ cdk version -v
CDK toolkit version: 1.63.0 (build 7a68125)
.... blah blah
```

vs:

```
$ cdk --version -v
1.63.0 (build 7a68125)
```
See [CHANGELOG](https://github.com/aws/aws-cdk/blob/patch/v1.64.1/CHANGELOG.md)
…10546) As it turns out, `Fn::GetAtt` can be passed a string argument not only in YAML, but in JSON CloudFormation templates as well. Handle that case in our template parser for `cfn-include`. This handling allows us to stop special-casing the transformation of the short-form `!GetAtt` in our YAML parsing.
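For illustration, the two equivalent spellings in a JSON template look like this (the resource and attribute names are examples):

```json
{
  "ArrayForm": { "Fn::GetAtt": ["MyBucket", "Arn"] },
  "StringForm": { "Fn::GetAtt": "MyBucket.Arn" }
}
```

Both forms resolve to the same attribute; the string form simply joins the logical ID and attribute name with a dot.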
One more resource attribute that we missed, and that is needed for cfn-include to be able to handle ingesting all templates.