Specifies when the snapshot was taken in Coordinated Universal Time (UTC).
" + "documentation":"Specifies when the snapshot was taken in Coordinated Universal Time (UTC). Changes for the copy when the snapshot is copied.
" }, "Engine":{ "shape":"String", @@ -6516,7 +6516,7 @@ }, "SourceDBSnapshotIdentifier":{ "shape":"String", - "documentation":"The DB snapshot Amazon Resource Name (ARN) that the DB snapshot was copied from. It only has value in case of cross-customer or cross-region copy.
" + "documentation":"The DB snapshot Amazon Resource Name (ARN) that the DB snapshot was copied from. It only has a value in the case of a cross-account or cross-Region copy.
" }, "StorageType":{ "shape":"String", @@ -6554,7 +6554,11 @@ "shape":"String", "documentation":"The identifier for the source DB instance, which can't be changed and which is unique to an Amazon Web Services Region.
" }, - "TagList":{"shape":"TagList"} + "TagList":{"shape":"TagList"}, + "OriginalSnapshotCreateTime":{ + "shape":"TStamp", + "documentation":"Specifies the time of the CreateDBSnapshot operation in Coordinated Universal Time (UTC). Doesn't change when the snapshot is copied.
" + } }, "documentation":"Contains the details of an Amazon RDS DB snapshot.
This data type is used as a response element in the DescribeDBSnapshots
action.
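Taken together, the two RDS hunks above mean a copied snapshot carries both timestamps: SnapshotCreateTime reflects the copy operation, while the new OriginalSnapshotCreateTime preserves the time of the original CreateDBSnapshot call. A hypothetical DescribeDBSnapshots response fragment (identifiers, account ID, and times invented for illustration):

```json
{
  "DBSnapshotIdentifier": "my-copied-snapshot",
  "SnapshotCreateTime": "2021-07-21T18:09:00Z",
  "OriginalSnapshotCreateTime": "2021-07-01T08:30:00Z",
  "SourceDBSnapshotIdentifier": "arn:aws:rds:us-west-2:123456789012:snapshot:my-source-snapshot"
}
```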
A filter that specifies one or more DB clusters to describe.
Supported filters:
db-cluster-id
- Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB clusters identified by these ARNs.
A filter that specifies one or more DB clusters to describe.
Supported filters:
clone-group-id
- Accepts clone group identifiers. The results list will only include information about the DB clusters associated with these clone groups.
db-cluster-id
- Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB clusters identified by these ARNs.
domain
- Accepts Active Directory directory IDs. The results list will only include information about the DB clusters associated with these domains.
engine
- Accepts engine names. The results list will only include information about the DB clusters for these engines.
A filter that specifies one or more global DB clusters to describe.
Supported filters:
db-cluster-id
- Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB clusters identified by these ARNs.
This parameter isn't currently supported.
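As a sketch of the filter shapes documented above, a DescribeDBClusters request body might carry entries like the following (cluster identifier and engine value are hypothetical; each Name must be one of the supported filters listed above):

```json
{
  "Filters": [
    {"Name": "db-cluster-id", "Values": ["my-cluster-1"]},
    {"Name": "engine", "Values": ["aurora-mysql"]}
  ]
}
```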
" }, "MaxRecords":{ "shape":"IntegerOptional", From 37d5618bff726400e51a10d44a71d1b557143910 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 21 Jul 2021 18:09:00 +0000 Subject: [PATCH 05/13] AWS CodeBuild Update: AWS CodeBuild now allows you to set the access permissions for build artifacts, project artifacts, and log files that are uploaded to an Amazon S3 bucket that is owned by another account. --- .../feature-AWSCodeBuild-cb56e95.json | 6 + .../codegen-resources/service-2.json | 288 +++++++++--------- 2 files changed, 156 insertions(+), 138 deletions(-) create mode 100644 .changes/next-release/feature-AWSCodeBuild-cb56e95.json diff --git a/.changes/next-release/feature-AWSCodeBuild-cb56e95.json b/.changes/next-release/feature-AWSCodeBuild-cb56e95.json new file mode 100644 index 000000000000..da8fc1388dc0 --- /dev/null +++ b/.changes/next-release/feature-AWSCodeBuild-cb56e95.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS CodeBuild", + "contributor": "", + "description": "AWS CodeBuild now allows you to set the access permissions for build artifacts, project artifacts, and log files that are uploaded to an Amazon S3 bucket that is owned by another account." +} diff --git a/services/codebuild/src/main/resources/codegen-resources/service-2.json b/services/codebuild/src/main/resources/codegen-resources/service-2.json index 036da44deac8..e94e574acd22 100644 --- a/services/codebuild/src/main/resources/codegen-resources/service-2.json +++ b/services/codebuild/src/main/resources/codegen-resources/service-2.json @@ -134,7 +134,7 @@ {"shape":"ResourceAlreadyExistsException"}, {"shape":"ResourceNotFoundException"} ], - "documentation":"For an existing CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, enables CodeBuild to start rebuilding the source code every time a code change is pushed to the repository.
If you enable webhooks for an CodeBuild project, and the project is used as a build step in CodePipeline, then two identical builds are created for each commit. One build is triggered through webhooks, and one through CodePipeline. Because billing is on a per-build basis, you are billed for both builds. Therefore, if you are using CodePipeline, we recommend that you disable webhooks in CodeBuild. In the CodeBuild console, clear the Webhook box. For more information, see step 5 in Change a Build Project's Settings.
For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, enables AWS CodeBuild to start rebuilding the source code every time a code change is pushed to the repository.
If you enable webhooks for an AWS CodeBuild project, and the project is used as a build step in AWS CodePipeline, then two identical builds are created for each commit. One build is triggered through webhooks, and one through AWS CodePipeline. Because billing is on a per-build basis, you are billed for both builds. Therefore, if you are using AWS CodePipeline, we recommend that you disable webhooks in AWS CodeBuild. In the AWS CodeBuild console, clear the Webhook box. For more information, see step 5 in Change a Build Project's Settings.
For an existing CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, stops CodeBuild from rebuilding the source code every time a code change is pushed to the repository.
" + "documentation":"For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, stops AWS CodeBuild from rebuilding the source code every time a code change is pushed to the repository.
" }, "DescribeCodeCoverages":{ "name":"DescribeCodeCoverages", @@ -298,7 +298,7 @@ {"shape":"AccountLimitExceededException"}, {"shape":"ResourceAlreadyExistsException"} ], - "documentation":"Imports the source repository credentials for an CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository.
" + "documentation":"Imports the source repository credentials for an AWS CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository.
" }, "InvalidateProjectCache":{ "name":"InvalidateProjectCache", @@ -376,7 +376,7 @@ }, "input":{"shape":"ListCuratedEnvironmentImagesInput"}, "output":{"shape":"ListCuratedEnvironmentImagesOutput"}, - "documentation":"Gets information about Docker images that are managed by CodeBuild.
" + "documentation":"Gets information about Docker images that are managed by AWS CodeBuild.
" }, "ListProjects":{ "name":"ListProjects", @@ -402,7 +402,7 @@ "errors":[ {"shape":"InvalidInputException"} ], - "documentation":"Gets a list ARNs for the report groups in the current Amazon Web Services account.
" + "documentation":"Gets a list ARNs for the report groups in the current AWS account.
" }, "ListReports":{ "name":"ListReports", @@ -415,7 +415,7 @@ "errors":[ {"shape":"InvalidInputException"} ], - "documentation":"Returns a list of ARNs for the reports in the current Amazon Web Services account.
" + "documentation":"Returns a list of ARNs for the reports in the current AWS account.
" }, "ListReportsForReportGroup":{ "name":"ListReportsForReportGroup", @@ -442,7 +442,7 @@ "errors":[ {"shape":"InvalidInputException"} ], - "documentation":"Gets a list of projects that are shared with other Amazon Web Services accounts or users.
" + "documentation":"Gets a list of projects that are shared with other AWS accounts or users.
" }, "ListSharedReportGroups":{ "name":"ListSharedReportGroups", @@ -455,7 +455,7 @@ "errors":[ {"shape":"InvalidInputException"} ], - "documentation":"Gets a list of report groups that are shared with other Amazon Web Services accounts or users.
" + "documentation":"Gets a list of report groups that are shared with other AWS accounts or users.
" }, "ListSourceCredentials":{ "name":"ListSourceCredentials", @@ -611,7 +611,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"OAuthProviderException"} ], - "documentation":"Updates the webhook associated with an CodeBuild build project.
If you use Bitbucket for your repository, rotateSecret
is ignored.
Updates the webhook associated with an AWS CodeBuild build project.
If you use Bitbucket for your repository, rotateSecret
is ignored.
An Amazon Web Services service limit was exceeded for the calling Amazon Web Services account.
", + "documentation":"An AWS service limit was exceeded for the calling AWS account.
", "exception":true }, "ArtifactNamespace":{ @@ -727,7 +727,7 @@ "members":{ "names":{ "shape":"ProjectNames", - "documentation":"The names or ARNs of the build projects. To get information about a project shared with your Amazon Web Services account, its ARN must be specified. You cannot specify a shared project using its name.
" + "documentation":"The names or ARNs of the build projects. To get information about a project shared with your AWS account, its ARN must be specified. You cannot specify a shared project using its name.
" } } }, @@ -799,12 +799,21 @@ }, "computeTypesAllowed":{ "shape":"ComputeTypesAllowed", - "documentation":"An array of strings that specify the compute types that are allowed for the batch build. See Build environment compute types in the CodeBuild User Guide for these values.
" + "documentation":"An array of strings that specify the compute types that are allowed for the batch build. See Build environment compute types in the AWS CodeBuild User Guide for these values.
" } }, "documentation":"Specifies restrictions for the batch build.
" }, "Boolean":{"type":"boolean"}, + "BucketOwnerAccess":{ + "type":"string", + "documentation":"Specifies the access for objects that are uploaded to an Amazon S3 bucket that is owned by another account.
By default, only the account that uploads the objects to the bucket has access to these objects. This property allows you to give the bucket owner access to these objects.
NONE: The bucket owner does not have access to the objects. This is the default.
READ_ONLY: The bucket owner has read-only access to the objects. The uploading account retains ownership of the objects.
FULL: The bucket owner has full access to the objects. Object ownership is determined by the following criteria:
If the bucket is configured with the Bucket owner preferred setting, the bucket owner owns the objects. The uploading account will have object access as specified by the bucket's policy.
Otherwise, the uploading account retains ownership of the objects.
For more information about Amazon S3 object ownership, see Controlling ownership of uploaded objects using S3 Object Ownership in the Amazon Simple Storage Service User Guide.
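Assuming the new property attaches to a project's artifacts settings the same way the other artifact options do, an artifacts block granting the bucket owner full access might look like this (bucket and artifact names hypothetical):

```json
{
  "artifacts": {
    "type": "S3",
    "location": "shared-artifact-bucket",
    "name": "build-output.zip",
    "bucketOwnerAccess": "FULL"
  }
}
```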
Any version identifier for the version of the source code to be built. If sourceVersion
is specified at the project level, then this sourceVersion
(at the build level) takes precedence.
For more information, see Source Version Sample with CodeBuild in the CodeBuild User Guide.
" + "documentation":"Any version identifier for the version of the source code to be built. If sourceVersion
is specified at the project level, then this sourceVersion
(at the build level) takes precedence.
For more information, see Source Version Sample with CodeBuild in the AWS CodeBuild User Guide.
" }, "resolvedSourceVersion":{ "shape":"NonEmptyString", - "documentation":"An identifier for the version of this build's source code.
For CodeCommit, GitHub, GitHub Enterprise, and BitBucket, the commit ID.
For CodePipeline, the source revision provided by CodePipeline.
For Amazon S3, this does not apply.
An identifier for the version of this build's source code.
For AWS CodeCommit, GitHub, GitHub Enterprise, and BitBucket, the commit ID.
For AWS CodePipeline, the source revision provided by AWS CodePipeline.
For Amazon S3, this does not apply.
The name of the CodeBuild project.
" + "documentation":"The name of the AWS CodeBuild project.
" }, "phases":{ "shape":"BuildPhases", @@ -862,7 +871,7 @@ }, "secondarySourceVersions":{ "shape":"ProjectSecondarySourceVersions", - "documentation":" An array of ProjectSourceVersion
objects. Each ProjectSourceVersion
must be one of:
For CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example, pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
An array of ProjectSourceVersion
objects. Each ProjectSourceVersion
must be one of:
For AWS CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example, pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
Information about the build's logs in CloudWatch Logs.
" + "documentation":"Information about the build's logs in Amazon CloudWatch Logs.
" }, "timeoutInMinutes":{ "shape":"WrapperInt", - "documentation":"How long, in minutes, for CodeBuild to wait before timing out this build if it does not get marked as completed.
" + "documentation":"How long, in minutes, for AWS CodeBuild to wait before timing out this build if it does not get marked as completed.
" }, "queuedTimeoutInMinutes":{ "shape":"WrapperInt", @@ -902,11 +911,11 @@ }, "initiator":{ "shape":"String", - "documentation":"The entity that started the build. Valid values include:
If CodePipeline started the build, the pipeline's name (for example, codepipeline/my-demo-pipeline
).
If an Identity and Access Management user started the build, the user's name (for example, MyUserName
).
If the Jenkins plugin for CodeBuild started the build, the string CodeBuild-Jenkins-Plugin
.
The entity that started the build. Valid values include:
If AWS CodePipeline started the build, the pipeline's name (for example, codepipeline/my-demo-pipeline
).
If an AWS Identity and Access Management (IAM) user started the build, the user's name (for example, MyUserName
).
If the Jenkins plugin for AWS CodeBuild started the build, the string CodeBuild-Jenkins-Plugin
.
If your CodeBuild project accesses resources in an Amazon VPC, you provide this parameter that identifies the VPC ID and the list of security group IDs and subnet IDs. The security groups and subnets must belong to the same VPC. You must provide at least one security group and one subnet ID.
" + "documentation":"If your AWS CodeBuild project accesses resources in an Amazon VPC, you provide this parameter that identifies the VPC ID and the list of security group IDs and subnet IDs. The security groups and subnets must belong to the same VPC. You must provide at least one security group and one subnet ID.
" }, "networkInterface":{ "shape":"NetworkInterface", @@ -914,11 +923,11 @@ }, "encryptionKey":{ "shape":"NonEmptyString", - "documentation":"The Key Management Service customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
A list of exported environment variables for this build.
Exported environment variables are used in conjunction with CodePipeline to export environment variables from the current build stage to subsequent stages in the pipeline. For more information, see Working with variables in the CodePipeline User Guide.
" + "documentation":"A list of exported environment variables for this build.
Exported environment variables are used in conjunction with AWS CodePipeline to export environment variables from the current build stage to subsequent stages in the pipeline. For more information, see Working with variables in the AWS CodePipeline User Guide.
" }, "reportArns":{ "shape":"BuildReportArns", @@ -965,7 +974,8 @@ "artifactIdentifier":{ "shape":"String", "documentation":"An identifier for this artifact definition.
" - } + }, + "bucketOwnerAccess":{"shape":"BucketOwnerAccess"} }, "documentation":"Information about build output artifacts.
" }, @@ -1008,7 +1018,7 @@ }, "resolvedSourceVersion":{ "shape":"NonEmptyString", - "documentation":"The identifier of the resolved version of this batch build's source code.
For CodeCommit, GitHub, GitHub Enterprise, and BitBucket, the commit ID.
For CodePipeline, the source revision provided by CodePipeline.
For Amazon S3, this does not apply.
The identifier of the resolved version of this batch build's source code.
For AWS CodeCommit, GitHub, GitHub Enterprise, and BitBucket, the commit ID.
For AWS CodePipeline, the source revision provided by AWS CodePipeline.
For Amazon S3, this does not apply.
An array of ProjectSourceVersion
objects. Each ProjectSourceVersion
must be one of:
For CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example, pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
An array of ProjectSourceVersion
objects. Each ProjectSourceVersion
must be one of:
For AWS CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example, pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
The entity that started the batch build. Valid values include:
If CodePipeline started the build, the pipeline's name (for example, codepipeline/my-demo-pipeline
).
If an Identity and Access Management user started the build, the user's name.
If the Jenkins plugin for CodeBuild started the build, the string CodeBuild-Jenkins-Plugin
.
The entity that started the batch build. Valid values include:
If AWS CodePipeline started the build, the pipeline's name (for example, codepipeline/my-demo-pipeline
).
If an AWS Identity and Access Management (IAM) user started the build, the user's name.
If the Jenkins plugin for AWS CodeBuild started the build, the string CodeBuild-Jenkins-Plugin
.
The Key Management Service customer master key (CMK) to be used for encrypting the batch build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the batch build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
Specifies the target url of the build status CodeBuild sends to the source provider. The usage of this parameter depends on the source provider.
This parameter is used for the url
parameter in the Bitbucket commit status. For more information, see build in the Bitbucket API documentation.
This parameter is used for the target_url
parameter in the GitHub commit status. For more information, see Create a commit status in the GitHub developer guide.
Contains information that defines how the CodeBuild build project reports the build status to the source provider.
" + "documentation":"Contains information that defines how the AWS CodeBuild build project reports the build status to the source provider.
" }, "BuildSummaries":{ "type":"list", @@ -1329,18 +1339,18 @@ "members":{ "status":{ "shape":"LogsConfigStatusType", - "documentation":"The current status of the logs in CloudWatch Logs for a build project. Valid values are:
ENABLED
: CloudWatch Logs are enabled for this build project.
DISABLED
: CloudWatch Logs are not enabled for this build project.
The current status of the logs in Amazon CloudWatch Logs for a build project. Valid values are:
ENABLED
: Amazon CloudWatch Logs are enabled for this build project.
DISABLED
: Amazon CloudWatch Logs are not enabled for this build project.
The group name of the logs in CloudWatch Logs. For more information, see Working with Log Groups and Log Streams.
" + "documentation":"The group name of the logs in Amazon CloudWatch Logs. For more information, see Working with Log Groups and Log Streams.
" }, "streamName":{ "shape":"String", - "documentation":"The prefix of the stream name of the CloudWatch Logs. For more information, see Working with Log Groups and Log Streams.
" + "documentation":"The prefix of the stream name of the Amazon CloudWatch Logs. For more information, see Working with Log Groups and Log Streams.
" } }, - "documentation":"Information about CloudWatch Logs for a build project.
" + "documentation":"Information about Amazon CloudWatch Logs for a build project.
" }, "CodeCoverage":{ "type":"structure", @@ -1463,7 +1473,7 @@ }, "sourceVersion":{ "shape":"String", - "documentation":"A version of the build input to be built for this project. If not specified, the latest version is used. If specified, it must be one of:
For CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the build level, then that version takes precedence over this sourceVersion
(at the project level).
For more information, see Source Version Sample with CodeBuild in the CodeBuild User Guide.
" + "documentation":"A version of the build input to be built for this project. If not specified, the latest version is used. If specified, it must be one of:
For AWS CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the build level, then that version takes precedence over this sourceVersion
(at the project level).
For more information, see Source Version Sample with CodeBuild in the AWS CodeBuild User Guide.
" }, "secondarySourceVersions":{ "shape":"ProjectSecondarySourceVersions", @@ -1487,11 +1497,11 @@ }, "serviceRole":{ "shape":"NonEmptyString", - "documentation":"The ARN of the Identity and Access Management role that enables CodeBuild to interact with dependent Amazon Web Services services on behalf of the Amazon Web Services account.
" + "documentation":"The ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
" }, "timeoutInMinutes":{ "shape":"TimeOut", - "documentation":"How long, in minutes, from 5 to 480 (8 hours), for CodeBuild to wait before it times out any build that has not been marked as completed. The default is 60 minutes.
" + "documentation":"How long, in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before it times out any build that has not been marked as completed. The default is 60 minutes.
" }, "queuedTimeoutInMinutes":{ "shape":"TimeOut", @@ -1499,15 +1509,15 @@ }, "encryptionKey":{ "shape":"NonEmptyString", - "documentation":"The Key Management Service customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
A list of tag key and value pairs associated with this build project.
These tags are available for use by Amazon Web Services services that support CodeBuild build project tags.
" + "documentation":"A list of tag key and value pairs associated with this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
" }, "vpcConfig":{ "shape":"VpcConfig", - "documentation":"VpcConfig enables CodeBuild to access resources in an Amazon VPC.
" + "documentation":"VpcConfig enables AWS CodeBuild to access resources in an Amazon VPC.
" }, "badgeEnabled":{ "shape":"WrapperBoolean", @@ -1515,7 +1525,7 @@ }, "logsConfig":{ "shape":"LogsConfig", - "documentation":"Information about logs for the build project. These can be logs in CloudWatch Logs, logs uploaded to a specified S3 bucket, or both.
" + "documentation":"Information about logs for the build project. These can be logs in Amazon CloudWatch Logs, logs uploaded to a specified S3 bucket, or both.
" }, "fileSystemLocations":{ "shape":"ProjectFileSystemLocations", @@ -1562,7 +1572,7 @@ }, "tags":{ "shape":"TagList", - "documentation":"A list of tag key and value pairs associated with this report group.
These tags are available for use by Amazon Web Services services that support CodeBuild report group tags.
" + "documentation":"A list of tag key and value pairs associated with this report group.
These tags are available for use by AWS services that support AWS CodeBuild report group tags.
" } } }, @@ -1581,7 +1591,7 @@ "members":{ "projectName":{ "shape":"ProjectName", - "documentation":"The name of the CodeBuild project.
" + "documentation":"The name of the AWS CodeBuild project.
" }, "branchFilter":{ "shape":"String", @@ -1602,7 +1612,7 @@ "members":{ "webhook":{ "shape":"Webhook", - "documentation":"Information about a webhook that connects repository events to a build project in CodeBuild.
" + "documentation":"Information about a webhook that connects repository events to a build project in AWS CodeBuild.
" } } }, @@ -1740,7 +1750,7 @@ "members":{ "projectName":{ "shape":"ProjectName", - "documentation":"The name of the CodeBuild project.
" + "documentation":"The name of the AWS CodeBuild project.
" } } }, @@ -1847,7 +1857,7 @@ "documentation":"A list of environment image versions.
" } }, - "documentation":"Information about a Docker image that is managed by CodeBuild.
" + "documentation":"Information about a Docker image that is managed by AWS CodeBuild.
" }, "EnvironmentImages":{ "type":"list", @@ -1865,7 +1875,7 @@ "documentation":"The list of Docker images that are related by the specified programming language.
" } }, - "documentation":"A set of Docker images that are related by programming language and are managed by CodeBuild.
" + "documentation":"A set of Docker images that are related by programming language and are managed by AWS CodeBuild.
" }, "EnvironmentLanguages":{ "type":"list", @@ -1883,7 +1893,7 @@ "documentation":"The list of programming languages that are available for the specified platform.
" } }, - "documentation":"A set of Docker images that are related by platform and are managed by CodeBuild.
" + "documentation":"A set of Docker images that are related by platform and are managed by AWS CodeBuild.
" }, "EnvironmentPlatforms":{ "type":"list", @@ -1912,11 +1922,11 @@ }, "value":{ "shape":"String", - "documentation":"The value of the environment variable.
We strongly discourage the use of PLAINTEXT
environment variables to store sensitive values, especially Amazon Web Services secret key IDs and secret access keys. PLAINTEXT
environment variables can be displayed in plain text using the CodeBuild console and the AWS Command Line Interface (AWS CLI). For sensitive values, we recommend you use an environment variable of type PARAMETER_STORE
or SECRETS_MANAGER
.
The value of the environment variable.
We strongly discourage the use of PLAINTEXT
environment variables to store sensitive values, especially AWS secret key IDs and secret access keys. PLAINTEXT
environment variables can be displayed in plain text using the AWS CodeBuild console and the AWS Command Line Interface (AWS CLI). For sensitive values, we recommend you use an environment variable of type PARAMETER_STORE
or SECRETS_MANAGER
.
The type of environment variable. Valid values include:
PARAMETER_STORE
: An environment variable stored in Systems Manager Parameter Store. To learn how to specify a parameter store environment variable, see env/parameter-store in the CodeBuild User Guide.
PLAINTEXT
: An environment variable in plain text format. This is the default value.
SECRETS_MANAGER
: An environment variable stored in Secrets Manager. To learn how to specify a secrets manager environment variable, see env/secrets-manager in the CodeBuild User Guide.
The type of environment variable. Valid values include:
PARAMETER_STORE
: An environment variable stored in Amazon EC2 Systems Manager Parameter Store. To learn how to specify a parameter store environment variable, see env/parameter-store in the AWS CodeBuild User Guide.
PLAINTEXT
: An environment variable in plain text format. This is the default value.
SECRETS_MANAGER
: An environment variable stored in AWS Secrets Manager. To learn how to specify a secrets manager environment variable, see env/secrets-manager in the AWS CodeBuild User Guide.
Information about an environment variable for a build project or a build.
" @@ -1945,7 +1955,7 @@ "documentation":"The value assigned to the exported environment variable.
" } }, - "documentation":"Contains information about an exported environment variable.
Exported environment variables are used in conjunction with CodePipeline to export environment variables from the current build stage to subsequent stages in the pipeline. For more information, see Working with variables in the CodePipeline User Guide.
During a build, the value of a variable is available starting with the install
phase. It can be updated between the start of the install
phase and the end of the post_build
phase. After the post_build
phase ends, the value of exported variables cannot change.
Contains information about an exported environment variable.
Exported environment variables are used in conjunction with AWS CodePipeline to export environment variables from the current build stage to subsequent stages in the pipeline. For more information, see Working with variables in the AWS CodePipeline User Guide.
During a build, the value of a variable is available starting with the install
phase. It can be updated between the start of the install
phase and the end of the post_build
phase. After the post_build
phase ends, the value of exported variables cannot change.
Set to true to fetch Git submodules for your CodeBuild build project.
" + "documentation":"Set to true to fetch Git submodules for your AWS CodeBuild build project.
" } }, - "documentation":"Information about the Git submodules configuration for an CodeBuild build project.
" + "documentation":"Information about the Git submodules configuration for an AWS CodeBuild build project.
" }, "Identifiers":{ "type":"list", @@ -2068,7 +2078,7 @@ }, "authType":{ "shape":"AuthType", - "documentation":"The type of authentication used to connect to a GitHub, GitHub Enterprise, or Bitbucket repository. An OAUTH connection is not supported by the API and must be created using the CodeBuild console.
" + "documentation":"The type of authentication used to connect to a GitHub, GitHub Enterprise, or Bitbucket repository. An OAUTH connection is not supported by the API and must be created using the AWS CodeBuild console.
" }, "shouldOverwrite":{ "shape":"WrapperBoolean", @@ -2098,7 +2108,7 @@ "members":{ "projectName":{ "shape":"NonEmptyString", - "documentation":"The name of the CodeBuild build project that the cache is reset for.
" + "documentation":"The name of the AWS CodeBuild build project that the cache is reset for.
" } } }, @@ -2206,7 +2216,7 @@ "members":{ "projectName":{ "shape":"NonEmptyString", - "documentation":"The name of the CodeBuild project.
" + "documentation":"The name of the AWS CodeBuild project.
" }, "sortOrder":{ "shape":"SortOrderType", @@ -2267,7 +2277,7 @@ "members":{ "platforms":{ "shape":"EnvironmentPlatforms", - "documentation":"Information about supported platforms for Docker images that are managed by CodeBuild.
" + "documentation":"Information about supported platforms for Docker images that are managed by AWS CodeBuild.
" } } }, @@ -2331,7 +2341,7 @@ }, "reportGroups":{ "shape":"ReportGroupArns", - "documentation":"The list of ARNs for the report groups in the current Amazon Web Services account.
" + "documentation":"The list of ARNs for the report groups in the current AWS account.
" } } }, @@ -2404,7 +2414,7 @@ }, "reports":{ "shape":"ReportArns", - "documentation":"The list of returned ARNs for the reports in the current Amazon Web Services account.
" + "documentation":"The list of returned ARNs for the reports in the current AWS account.
" } } }, @@ -2413,7 +2423,7 @@ "members":{ "sortBy":{ "shape":"SharedResourceSortByType", - "documentation":"The criterion to be used to list build projects shared with the current Amazon Web Services account or user. Valid values include:
ARN
: List based on the ARN.
MODIFIED_TIME
: List based on when information about the shared project was last changed.
The criterion to be used to list build projects shared with the current AWS account or user. Valid values include:
ARN
: List based on the ARN.
MODIFIED_TIME
: List based on when information about the shared project was last changed.
The list of ARNs for the build projects shared with the current Amazon Web Services account or user.
" + "documentation":"The list of ARNs for the build projects shared with the current AWS account or user.
" } } }, @@ -2451,7 +2461,7 @@ }, "sortBy":{ "shape":"SharedResourceSortByType", - "documentation":"The criterion to be used to list report groups shared with the current Amazon Web Services account or user. Valid values include:
ARN
: List based on the ARN.
MODIFIED_TIME
: List based on when information about the shared report group was last changed.
The criterion to be used to list report groups shared with the current AWS account or user. Valid values include:
ARN
: List based on the ARN.
MODIFIED_TIME
: List based on when information about the shared report group was last changed.
The list of ARNs for the report groups shared with the current Amazon Web Services account or user.
" + "documentation":"The list of ARNs for the report groups shared with the current AWS account or user.
" } } }, @@ -2495,14 +2505,14 @@ "members":{ "cloudWatchLogs":{ "shape":"CloudWatchLogsConfig", - "documentation":"Information about CloudWatch Logs for a build project. CloudWatch Logs are enabled by default.
" + "documentation":"Information about Amazon CloudWatch Logs for a build project. Amazon CloudWatch Logs are enabled by default.
" }, "s3Logs":{ "shape":"S3LogsConfig", "documentation":"Information about logs built to an S3 bucket for a build project. S3 logs are not enabled by default.
" } }, - "documentation":"Information about logs for a build project. These can be logs in CloudWatch Logs, built in a specified S3 bucket, or both.
" + "documentation":"Information about logs for a build project. These can be logs in Amazon CloudWatch Logs, built in a specified S3 bucket, or both.
" }, "LogsConfigStatusType":{ "type":"string", @@ -2516,15 +2526,15 @@ "members":{ "groupName":{ "shape":"String", - "documentation":"The name of the CloudWatch Logs group for the build logs.
" + "documentation":"The name of the Amazon CloudWatch Logs group for the build logs.
" }, "streamName":{ "shape":"String", - "documentation":"The name of the CloudWatch Logs stream for the build logs.
" + "documentation":"The name of the Amazon CloudWatch Logs stream for the build logs.
" }, "deepLink":{ "shape":"String", - "documentation":"The URL to an individual build log in CloudWatch Logs.
" + "documentation":"The URL to an individual build log in Amazon CloudWatch Logs.
" }, "s3DeepLink":{ "shape":"String", @@ -2532,7 +2542,7 @@ }, "cloudWatchLogsArn":{ "shape":"String", - "documentation":" The ARN of CloudWatch Logs for a build project. Its format is arn:${Partition}:logs:${Region}:${Account}:log-group:${LogGroupName}:log-stream:${LogStreamName}
. For more information, see Resources Defined by CloudWatch Logs.
The ARN of Amazon CloudWatch Logs for a build project. Its format is arn:${Partition}:logs:${Region}:${Account}:log-group:${LogGroupName}:log-stream:${LogStreamName}
. For more information, see Resources Defined by Amazon CloudWatch Logs.
Information about CloudWatch Logs for a build project.
" + "documentation":"Information about Amazon CloudWatch Logs for a build project.
" }, "s3Logs":{ "shape":"S3LogsConfig", "documentation":"Information about S3 logs for a build project.
" } }, - "documentation":"Information about build logs in CloudWatch Logs.
" + "documentation":"Information about build logs in Amazon CloudWatch Logs.
" }, "NetworkInterface":{ "type":"structure", @@ -2640,7 +2650,7 @@ }, "sourceVersion":{ "shape":"String", - "documentation":"A version of the build input to be built for this project. If not specified, the latest version is used. If specified, it must be one of:
For CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the build level, then that version takes precedence over this sourceVersion
(at the project level).
For more information, see Source Version Sample with CodeBuild in the CodeBuild User Guide.
" + "documentation":"A version of the build input to be built for this project. If not specified, the latest version is used. If specified, it must be one of:
For AWS CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the build level, then that version takes precedence over this sourceVersion
(at the project level).
For more information, see Source Version Sample with CodeBuild in the AWS CodeBuild User Guide.
" }, "secondarySourceVersions":{ "shape":"ProjectSecondarySourceVersions", @@ -2664,11 +2674,11 @@ }, "serviceRole":{ "shape":"NonEmptyString", - "documentation":"The ARN of the Identity and Access Management role that enables CodeBuild to interact with dependent Amazon Web Services services on behalf of the Amazon Web Services account.
" + "documentation":"The ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
" }, "timeoutInMinutes":{ "shape":"TimeOut", - "documentation":"How long, in minutes, from 5 to 480 (8 hours), for CodeBuild to wait before timing out any related build that did not get marked as completed. The default is 60 minutes.
" + "documentation":"How long, in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before timing out any related build that did not get marked as completed. The default is 60 minutes.
" }, "queuedTimeoutInMinutes":{ "shape":"TimeOut", @@ -2676,11 +2686,11 @@ }, "encryptionKey":{ "shape":"NonEmptyString", - "documentation":"The Key Management Service customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
). If you don't specify a value, CodeBuild uses the managed CMK for Amazon Simple Storage Service (Amazon S3).
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
A list of tag key and value pairs associated with this build project.
These tags are available for use by Amazon Web Services services that support CodeBuild build project tags.
" + "documentation":"A list of tag key and value pairs associated with this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
" }, "created":{ "shape":"Timestamp", @@ -2692,11 +2702,11 @@ }, "webhook":{ "shape":"Webhook", - "documentation":"Information about a webhook that connects repository events to a build project in CodeBuild.
" + "documentation":"Information about a webhook that connects repository events to a build project in AWS CodeBuild.
" }, "vpcConfig":{ "shape":"VpcConfig", - "documentation":"Information about the VPC configuration that CodeBuild accesses.
" + "documentation":"Information about the VPC configuration that AWS CodeBuild accesses.
" }, "badge":{ "shape":"ProjectBadge", @@ -2704,7 +2714,7 @@ }, "logsConfig":{ "shape":"LogsConfig", - "documentation":"Information about logs for the build project. A project can create logs in CloudWatch Logs, an S3 bucket, or both.
" + "documentation":"Information about logs for the build project. A project can create logs in Amazon CloudWatch Logs, an S3 bucket, or both.
" }, "fileSystemLocations":{ "shape":"ProjectFileSystemLocations", @@ -2733,27 +2743,27 @@ "members":{ "type":{ "shape":"ArtifactsType", - "documentation":"The type of build output artifact. Valid values include:
CODEPIPELINE
: The build project has build output generated through CodePipeline.
The CODEPIPELINE
type is not supported for secondaryArtifacts
.
NO_ARTIFACTS
: The build project does not produce any build output.
S3
: The build project stores build output in Amazon S3.
The type of build output artifact. Valid values include:
CODEPIPELINE
: The build project has build output generated through AWS CodePipeline.
The CODEPIPELINE
type is not supported for secondaryArtifacts
.
NO_ARTIFACTS
: The build project does not produce any build output.
S3
: The build project stores build output in Amazon S3.
Information about the build output artifact location:
If type
is set to CODEPIPELINE
, CodePipeline ignores this value if specified. This is because CodePipeline manages its build output locations instead of CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, this is the name of the output bucket.
Information about the build output artifact location:
If type
is set to CODEPIPELINE
, AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output locations instead of AWS CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, this is the name of the output bucket.
Along with namespaceType
and name
, the pattern that CodeBuild uses to name and store the output artifact:
If type
is set to CODEPIPELINE
, CodePipeline ignores this value if specified. This is because CodePipeline manages its build output names instead of CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, this is the path to the output artifact. If path
is not specified, path
is not used.
For example, if path
is set to MyArtifacts
, namespaceType
is set to NONE
, and name
is set to MyArtifact.zip
, the output artifact is stored in the output bucket at MyArtifacts/MyArtifact.zip
.
Along with namespaceType
and name
, the pattern that AWS CodeBuild uses to name and store the output artifact:
If type
is set to CODEPIPELINE
, AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, this is the path to the output artifact. If path
is not specified, path
is not used.
For example, if path
is set to MyArtifacts
, namespaceType
is set to NONE
, and name
is set to MyArtifact.zip
, the output artifact is stored in the output bucket at MyArtifacts/MyArtifact.zip
.
Along with path
and name
, the pattern that CodeBuild uses to determine the name and location to store the output artifact:
If type
is set to CODEPIPELINE
, CodePipeline ignores this value if specified. This is because CodePipeline manages its build output names instead of CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, valid values include:
BUILD_ID
: Include the build ID in the location of the build output artifact.
NONE
: Do not include the build ID. This is the default if namespaceType
is not specified.
For example, if path
is set to MyArtifacts
, namespaceType
is set to BUILD_ID
, and name
is set to MyArtifact.zip
, the output artifact is stored in MyArtifacts/<build-ID>/MyArtifact.zip
.
Along with path
and name
, the pattern that AWS CodeBuild uses to determine the name and location to store the output artifact:
If type
is set to CODEPIPELINE
, AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, valid values include:
BUILD_ID
: Include the build ID in the location of the build output artifact.
NONE
: Do not include the build ID. This is the default if namespaceType
is not specified.
For example, if path
is set to MyArtifacts
, namespaceType
is set to BUILD_ID
, and name
is set to MyArtifact.zip
, the output artifact is stored in MyArtifacts/<build-ID>/MyArtifact.zip
.
Along with path
and namespaceType
, the pattern that CodeBuild uses to name and store the output artifact:
If type
is set to CODEPIPELINE
, CodePipeline ignores this value if specified. This is because CodePipeline manages its build output names instead of CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, this is the name of the output artifact object. If you set the name to be a forward slash (\"/\"), the artifact is stored in the root of the output bucket.
For example:
If path
is set to MyArtifacts
, namespaceType
is set to BUILD_ID
, and name
is set to MyArtifact.zip
, then the output artifact is stored in MyArtifacts/<build-ID>/MyArtifact.zip
.
If path
is empty, namespaceType
is set to NONE
, and name
is set to \"/
\", the output artifact is stored in the root of the output bucket.
If path
is set to MyArtifacts
, namespaceType
is set to BUILD_ID
, and name
is set to \"/
\", the output artifact is stored in MyArtifacts/<build-ID>
.
Along with path
and namespaceType
, the pattern that AWS CodeBuild uses to name and store the output artifact:
If type
is set to CODEPIPELINE
, AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output names instead of AWS CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, this is the name of the output artifact object. If you set the name to be a forward slash (\"/\"), the artifact is stored in the root of the output bucket.
For example:
If path
is set to MyArtifacts
, namespaceType
is set to BUILD_ID
, and name
is set to MyArtifact.zip
, then the output artifact is stored in MyArtifacts/<build-ID>/MyArtifact.zip
.
If path
is empty, namespaceType
is set to NONE
, and name
is set to \"/
\", the output artifact is stored in the root of the output bucket.
If path
is set to MyArtifacts
, namespaceType
is set to BUILD_ID
, and name
is set to \"/
\", the output artifact is stored in MyArtifacts/<build-ID>
.
The type of build output artifact to create:
If type
is set to CODEPIPELINE
, CodePipeline ignores this value if specified. This is because CodePipeline manages its build output artifacts instead of CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, valid values include:
NONE
: CodeBuild creates in the output bucket a folder that contains the build output. This is the default if packaging
is not specified.
ZIP
: CodeBuild creates in the output bucket a ZIP file that contains the build output.
The type of build output artifact to create:
If type
is set to CODEPIPELINE
, AWS CodePipeline ignores this value if specified. This is because AWS CodePipeline manages its build output artifacts instead of AWS CodeBuild.
If type
is set to NO_ARTIFACTS
, this value is ignored if specified, because no build output is produced.
If type
is set to S3
, valid values include:
NONE
: AWS CodeBuild creates in the output bucket a folder that contains the build output. This is the default if packaging
is not specified.
ZIP
: AWS CodeBuild creates in the output bucket a ZIP file that contains the build output.
An identifier for this artifact definition.
" - } + }, + "bucketOwnerAccess":{"shape":"BucketOwnerAccess"} }, "documentation":"Information about the build output artifacts for the build project.
" }, @@ -2850,15 +2861,15 @@ "members":{ "type":{ "shape":"EnvironmentType", - "documentation":"The type of build environment to use for related builds.
The environment type ARM_CONTAINER
is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), and EU (Frankfurt).
The environment type LINUX_CONTAINER
with compute type build.general1.2xlarge
is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing), and China (Ningxia).
The environment type LINUX_GPU_CONTAINER
is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing), and China (Ningxia).
The environment types WINDOWS_CONTAINER
and WINDOWS_SERVER_2019_CONTAINER
are available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), and EU (Ireland).
For more information, see Build environment compute types in the CodeBuild user guide.
" + "documentation":"The type of build environment to use for related builds.
The environment type ARM_CONTAINER
is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), EU (Ireland), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Sydney), and EU (Frankfurt).
The environment type LINUX_CONTAINER
with compute type build.general1.2xlarge
is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing), and China (Ningxia).
The environment type LINUX_GPU_CONTAINER
is available only in regions US East (N. Virginia), US East (Ohio), US West (Oregon), Canada (Central), EU (Ireland), EU (London), EU (Frankfurt), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), China (Beijing), and China (Ningxia).
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
For an image tag: <registry>/<repository>:<tag>
. For example, in the Docker repository that CodeBuild uses to manage its Docker images, this would be aws/codebuild/standard:4.0
.
For an image digest: <registry>/<repository>@<digest>
. For example, to specify an image with the digest \"sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf,\" use <registry>/<repository>@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
.
For more information, see Docker images provided by CodeBuild in the CodeBuild user guide.
" + "documentation":"The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
For an image tag: <registry>/<repository>:<tag>
. For example, in the Docker repository that CodeBuild uses to manage its Docker images, this would be aws/codebuild/standard:4.0
.
For an image digest: <registry>/<repository>@<digest>
. For example, to specify an image with the digest \"sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf,\" use <registry>/<repository>@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
.
Information about the compute resources the build project uses. Available values include:
BUILD_GENERAL1_SMALL
: Use up to 3 GB memory and 2 vCPUs for builds.
BUILD_GENERAL1_MEDIUM
: Use up to 7 GB memory and 4 vCPUs for builds.
BUILD_GENERAL1_LARGE
: Use up to 16 GB memory and 8 vCPUs for builds, depending on your environment type.
BUILD_GENERAL1_2XLARGE
: Use up to 145 GB memory, 72 vCPUs, and 824 GB of SSD storage for builds. This compute type supports Docker images up to 100 GB uncompressed.
If you use BUILD_GENERAL1_LARGE
:
For environment type LINUX_CONTAINER
, you can use up to 15 GB memory and 8 vCPUs for builds.
For environment type LINUX_GPU_CONTAINER
, you can use up to 255 GB memory, 32 vCPUs, and 4 NVIDIA Tesla V100 GPUs for builds.
For environment type ARM_CONTAINER
, you can use up to 16 GB memory and 8 vCPUs on ARM-based processors for builds.
For more information, see Build Environment Compute Types in the CodeBuild User Guide.
" + "documentation":"Information about the compute resources the build project uses. Available values include:
BUILD_GENERAL1_SMALL
: Use up to 3 GB memory and 2 vCPUs for builds.
BUILD_GENERAL1_MEDIUM
: Use up to 7 GB memory and 4 vCPUs for builds.
BUILD_GENERAL1_LARGE
: Use up to 16 GB memory and 8 vCPUs for builds, depending on your environment type.
BUILD_GENERAL1_2XLARGE
: Use up to 145 GB memory, 72 vCPUs, and 824 GB of SSD storage for builds. This compute type supports Docker images up to 100 GB uncompressed.
If you use BUILD_GENERAL1_LARGE
:
For environment type LINUX_CONTAINER
, you can use up to 15 GB memory and 8 vCPUs for builds.
For environment type LINUX_GPU_CONTAINER
, you can use up to 255 GB memory, 32 vCPUs, and 4 NVIDIA Tesla V100 GPUs for builds.
For environment type ARM_CONTAINER
, you can use up to 16 GB memory and 8 vCPUs on ARM-based processors for builds.
For more information, see Build Environment Compute Types in the AWS CodeBuild User Guide.
" }, "environmentVariables":{ "shape":"EnvironmentVariables", @@ -2870,7 +2881,7 @@ }, "certificate":{ "shape":"String", - "documentation":"The ARN of the Amazon S3 bucket, path prefix, and object key that contains the PEM-encoded certificate for the build project. For more information, see certificate in the CodeBuild User Guide.
" + "documentation":"The ARN of the Amazon S3 bucket, path prefix, and object key that contains the PEM-encoded certificate for the build project. For more information, see certificate in the AWS CodeBuild User Guide.
" }, "registryCredential":{ "shape":"RegistryCredential", @@ -2878,7 +2889,7 @@ }, "imagePullCredentialsType":{ "shape":"ImagePullCredentialsType", - "documentation":"The type of credentials CodeBuild uses to pull images in your build. There are two valid values:
CODEBUILD
specifies that CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust CodeBuild's service principal.
SERVICE_ROLE
specifies that CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use a CodeBuild curated image, you must use CODEBUILD credentials.
" + "documentation":"The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
CODEBUILD
specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
SERVICE_ROLE
specifies that AWS CodeBuild uses your build project's service role.
When you use a cross-account or private registry image, you must use SERVICE_ROLE credentials. When you use an AWS CodeBuild curated image, you must use CODEBUILD credentials.
" } }, "documentation":"Information about the build environment of the build project.
" @@ -2892,7 +2903,7 @@ }, "location":{ "shape":"String", - "documentation":"A string that specifies the location of the file system created by Amazon EFS. Its format is efs-dns-name:/directory-path
. You can find the DNS name of a file system when you view it in the Amazon EFS console. The directory path is a path to a directory in the file system that CodeBuild mounts. For example, if the DNS name of a file system is fs-abcd1234.efs.us-west-2.amazonaws.com
, and its mount directory is my-efs-mount-directory
, then the location
is fs-abcd1234.efs.us-west-2.amazonaws.com:/my-efs-mount-directory
.
The directory path in the format efs-dns-name:/directory-path
is optional. If you do not specify a directory path, the location is only the DNS name and CodeBuild mounts the entire file system.
A string that specifies the location of the file system created by Amazon EFS. Its format is efs-dns-name:/directory-path
. You can find the DNS name of a file system when you view it in the AWS EFS console. The directory path is a path to a directory in the file system that CodeBuild mounts. For example, if the DNS name of a file system is fs-abcd1234.efs.us-west-2.amazonaws.com
, and its mount directory is my-efs-mount-directory
, then the location
is fs-abcd1234.efs.us-west-2.amazonaws.com:/my-efs-mount-directory
.
The directory path in the format efs-dns-name:/directory-path
is optional. If you do not specify a directory path, the location is only the DNS name and CodeBuild mounts the entire file system.
The mount options for a file system created by Amazon EFS. The default mount options used by CodeBuild are nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
. For more information, see Recommended NFS Mount Options.
The mount options for a file system created by AWS EFS. The default mount options used by CodeBuild are nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2
. For more information, see Recommended NFS Mount Options.
Information about a file system created by Amazon Elastic File System (EFS). For more information, see What Is Amazon Elastic File System?
" @@ -2945,11 +2956,11 @@ "members":{ "type":{ "shape":"SourceType", - "documentation":"The type of repository that contains the source code to be built. Valid values include:
BITBUCKET
: The source code is in a Bitbucket repository.
CODECOMMIT
: The source code is in a CodeCommit repository.
CODEPIPELINE
: The source code settings are specified in the source action of a pipeline in CodePipeline.
GITHUB
: The source code is in a GitHub or GitHub Enterprise Cloud repository.
GITHUB_ENTERPRISE
: The source code is in a GitHub Enterprise Server repository.
NO_SOURCE
: The project does not have input source code.
S3
: The source code is in an Amazon S3 bucket.
The type of repository that contains the source code to be built. Valid values include:
BITBUCKET
: The source code is in a Bitbucket repository.
CODECOMMIT
: The source code is in an AWS CodeCommit repository.
CODEPIPELINE
: The source code settings are specified in the source action of a pipeline in AWS CodePipeline.
GITHUB
: The source code is in a GitHub or GitHub Enterprise Cloud repository.
GITHUB_ENTERPRISE
: The source code is in a GitHub Enterprise Server repository.
NO_SOURCE
: The project does not have input source code.
S3
: The source code is in an Amazon S3 bucket.
Information about the location of the source code to be built. Valid values include:
For source code settings that are specified in the source action of a pipeline in CodePipeline, location
should not be specified. If it is specified, CodePipeline ignores it. This is because CodePipeline uses the settings in a pipeline's source action instead of this value.
For source code in a CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the buildspec file (for example, https://git-codecommit.<region-ID>.amazonaws.com/v1/repos/<repo-name>
).
For source code in an Amazon S3 input bucket, one of the following.
The path to the ZIP file that contains the source code (for example, <bucket-name>/<path>/<object-name>.zip
).
The path to the folder that contains the source code (for example, <bucket-name>/<path-to-source-code>/<folder>/
).
For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the buildspec file. You must connect your account to your GitHub account. Use the CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub Authorize application page, for Organization access, choose Request access next to each repository you want to allow CodeBuild to have access to, and then choose Authorize application. (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the CodeBuild console.) To instruct CodeBuild to use this connection, in the source
object, set the auth
object's type
value to OAUTH
.
For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the buildspec file. You must connect your Amazon Web Services account to your Bitbucket account. Use the CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket Confirm access to your account page, choose Grant access. (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the CodeBuild console.) To instruct CodeBuild to use this connection, in the source
object, set the auth
object's type
value to OAUTH
.
If you specify CODEPIPELINE
for the Type
property, don't specify this property. For all of the other types, you must specify Location
.
Information about the location of the source code to be built. Valid values include:
For source code settings that are specified in the source action of a pipeline in AWS CodePipeline, location
should not be specified. If it is specified, AWS CodePipeline ignores it. This is because AWS CodePipeline uses the settings in a pipeline's source action instead of this value.
For source code in an AWS CodeCommit repository, the HTTPS clone URL to the repository that contains the source code and the buildspec file (for example, https://git-codecommit.<region-ID>.amazonaws.com/v1/repos/<repo-name>
).
For source code in an Amazon S3 input bucket, one of the following.
The path to the ZIP file that contains the source code (for example, <bucket-name>/<path>/<object-name>.zip
).
The path to the folder that contains the source code (for example, <bucket-name>/<path-to-source-code>/<folder>/
).
For source code in a GitHub repository, the HTTPS clone URL to the repository that contains the source and the buildspec file. You must connect your AWS account to your GitHub account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with GitHub, on the GitHub Authorize application page, for Organization access, choose Request access next to each repository you want to allow AWS CodeBuild to have access to, and then choose Authorize application. (After you have connected to your GitHub account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the source
object, set the auth
object's type
value to OAUTH
.
For source code in a Bitbucket repository, the HTTPS clone URL to the repository that contains the source and the buildspec file. You must connect your AWS account to your Bitbucket account. Use the AWS CodeBuild console to start creating a build project. When you use the console to connect (or reconnect) with Bitbucket, on the Bitbucket Confirm access to your account page, choose Grant access. (After you have connected to your Bitbucket account, you do not need to finish creating the build project. You can leave the AWS CodeBuild console.) To instruct AWS CodeBuild to use this connection, in the source
object, set the auth
object's type
value to OAUTH
.
The buildspec file declaration to use for the builds in this build project.
If this value is set, it can be either an inline buildspec definition, the path to an alternate buildspec file relative to the value of the built-in CODEBUILD_SRC_DIR
environment variable, or the path to an S3 bucket. The bucket must be in the same Region as the build project. Specify the buildspec file using its ARN (for example, arn:aws:s3:::my-codebuild-sample2/buildspec.yml
). If this value is not provided or is set to an empty string, the source code must contain a buildspec file in its root directory. For more information, see Buildspec File Name and Storage Location.
The buildspec file declaration to use for the builds in this build project.
If this value is set, it can be either an inline buildspec definition, the path to an alternate buildspec file relative to the value of the built-in CODEBUILD_SRC_DIR
environment variable, or the path to an S3 bucket. The bucket must be in the same AWS Region as the build project. Specify the buildspec file using its ARN (for example, arn:aws:s3:::my-codebuild-sample2/buildspec.yml
). If this value is not provided or is set to an empty string, the source code must contain a buildspec file in its root directory. For more information, see Buildspec File Name and Storage Location.
Information about the authorization settings for CodeBuild to access the source code to be built.
This information is for the CodeBuild console's use only. Your code should not get or set this information directly.
" + "documentation":"Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
" }, "reportBuildStatus":{ "shape":"WrapperBoolean", - "documentation":" Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException
is thrown.
To be able to report the build status to the source provider, the user associated with the source provider must have write access to the repo. If the user does not have write access, the build status cannot be updated. For more information, see Source provider access in the CodeBuild User Guide.
The status of a build triggered by a webhook is always reported to your source provider.
Set to true to report the status of a build's start and finish to your source provider. This option is valid only when your source provider is GitHub, GitHub Enterprise, or Bitbucket. If this is set and you use a different source provider, an invalidInputException
is thrown.
To be able to report the build status to the source provider, the user associated with the source provider must have write access to the repo. If the user does not have write access, the build status cannot be updated. For more information, see Source provider access in the AWS CodeBuild User Guide.
The status of a build triggered by a webhook is always reported to your source provider.
The source version for the corresponding source identifier. If specified, must be one of:
For CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example, pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
For more information, see Source Version Sample with CodeBuild in the CodeBuild User Guide.
" + "documentation":"The source version for the corresponding source identifier. If specified, must be one of:
For AWS CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example, pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
For more information, see Source Version Sample with CodeBuild in the AWS CodeBuild User Guide.
" } }, "documentation":"A source identifier and its corresponding version.
" @@ -3023,7 +3034,7 @@ "members":{ "policy":{ "shape":"NonEmptyString", - "documentation":"A JSON-formatted resource policy. For more information, see Sharing a Project and Sharing a Report Group in the CodeBuild User Guide.
" + "documentation":"A JSON-formatted resource policy. For more information, see Sharing a Project and Sharing a Report Group in the AWS CodeBuild User Guide.
" }, "resourceArn":{ "shape":"NonEmptyString", @@ -3049,14 +3060,14 @@ "members":{ "credential":{ "shape":"NonEmptyString", - "documentation":"The Amazon Resource Name (ARN) or name of credentials created using Secrets Manager.
The credential
can use the name of the credentials only if they exist in your current Region.
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
The credential
can use the name of the credentials only if they exist in your current AWS Region.
The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for Secrets Manager.
" + "documentation":"The service that created the credentials to access a private Docker registry. The valid value, SECRETS_MANAGER, is for AWS Secrets Manager.
" } }, - "documentation":"Information about credentials that provide access to a private Docker registry. When this is set:
imagePullCredentialsType
must be set to SERVICE_ROLE
.
images cannot be curated or an Amazon ECR image.
For more information, see Private Registry with Secrets Manager Sample for CodeBuild.
" + "documentation":"Information about credentials that provide access to a private Docker registry. When this is set:
imagePullCredentialsType
must be set to SERVICE_ROLE
.
images cannot be curated or an Amazon ECR image.
For more information, see Private Registry with AWS Secrets Manager Sample for AWS CodeBuild.
" }, "Report":{ "type":"structure", @@ -3185,7 +3196,7 @@ }, "tags":{ "shape":"TagList", - "documentation":"A list of tag key and value pairs associated with this report group.
These tags are available for use by Amazon Web Services services that support CodeBuild report group tags.
" + "documentation":"A list of tag key and value pairs associated with this report group.
These tags are available for use by AWS services that support AWS CodeBuild report group tags.
" }, "status":{ "shape":"ReportGroupStatusType", @@ -3337,14 +3348,14 @@ "type":"structure", "members":{ }, - "documentation":"The specified Amazon Web Services resource cannot be created, because an Amazon Web Services resource with the same settings already exists.
", + "documentation":"The specified AWS resource cannot be created, because an AWS resource with the same settings already exists.
", "exception":true }, "ResourceNotFoundException":{ "type":"structure", "members":{ }, - "documentation":"The specified Amazon Web Services resource cannot be found.
", + "documentation":"The specified AWS resource cannot be found.
", "exception":true }, "RetryBuildBatchInput":{ @@ -3356,7 +3367,7 @@ }, "idempotencyToken":{ "shape":"String", - "documentation":"A unique, case sensitive identifier you provide to ensure the idempotency of the RetryBuildBatch
request. The token is included in the RetryBuildBatch
request and is valid for five minutes. If you repeat the RetryBuildBatch
request with the same token, but change a parameter, CodeBuild returns a parameter mismatch error.
A unique, case sensitive identifier you provide to ensure the idempotency of the RetryBuildBatch
request. The token is included in the RetryBuildBatch
request and is valid for five minutes. If you repeat the RetryBuildBatch
request with the same token, but change a parameter, AWS CodeBuild returns a parameter mismatch error.
A unique, case sensitive identifier you provide to ensure the idempotency of the RetryBuild
request. The token is included in the RetryBuild
request and is valid for five minutes. If you repeat the RetryBuild
request with the same token, but change a parameter, CodeBuild returns a parameter mismatch error.
A unique, case sensitive identifier you provide to ensure the idempotency of the RetryBuild
request. The token is included in the RetryBuild
request and is valid for five minutes. If you repeat the RetryBuild
request with the same token, but change a parameter, AWS CodeBuild returns a parameter mismatch error.
Set to true if you do not want your S3 build log output encrypted. By default S3 build logs are encrypted.
" - } + }, + "bucketOwnerAccess":{"shape":"BucketOwnerAccess"} }, "documentation":"Information about S3 logs for a build project.
" }, @@ -3424,7 +3436,7 @@ }, "bucketOwner":{ "shape":"String", - "documentation":"The Amazon Web Services account identifier of the owner of the Amazon S3 bucket. This allows report data to be exported to an Amazon S3 bucket that is owned by an account other than the account running the build.
" + "documentation":"The AWS account identifier of the owner of the Amazon S3 bucket. This allows report data to be exported to an Amazon S3 bucket that is owned by an account other than the account running the build.
" }, "path":{ "shape":"String", @@ -3432,7 +3444,7 @@ }, "packaging":{ "shape":"ReportPackagingType", - "documentation":"The type of build output artifact to create. Valid values include:
NONE
: CodeBuild creates the raw data in the output bucket. This is the default if packaging is not specified.
ZIP
: CodeBuild creates a ZIP file with the raw data in the output bucket.
The type of build output artifact to create. Valid values include:
NONE
: AWS CodeBuild creates the raw data in the output bucket. This is the default if packaging is not specified.
ZIP
: AWS CodeBuild creates a ZIP file with the raw data in the output bucket.
The resource value that applies to the specified authorization type.
" } }, - "documentation":"Information about the authorization settings for CodeBuild to access the source code to be built.
This information is for the CodeBuild console's use only. Your code should not get or set this information directly.
" + "documentation":"Information about the authorization settings for AWS CodeBuild to access the source code to be built.
This information is for the AWS CodeBuild console's use only. Your code should not get or set this information directly.
" }, "SourceAuthType":{ "type":"string", @@ -3548,7 +3560,7 @@ }, "sourceVersion":{ "shape":"String", - "documentation":"The version of the batch build input to be built, for this build only. If not specified, the latest version is used. If specified, the contents depends on the source provider:
The commit ID, branch, or Git tag to use.
The commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the project level, then this sourceVersion
(at the build level) takes precedence.
For more information, see Source Version Sample with CodeBuild in the CodeBuild User Guide.
" + "documentation":"The version of the batch build input to be built, for this build only. If not specified, the latest version is used. If specified, the contents depends on the source provider:
The commit ID, branch, or Git tag to use.
The commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the project level, then this sourceVersion
(at the build level) takes precedence.
For more information, see Source Version Sample with CodeBuild in the AWS CodeBuild User Guide.
" }, "artifactsOverride":{ "shape":"ProjectArtifacts", @@ -3584,7 +3596,7 @@ }, "buildspecOverride":{ "shape":"String", - "documentation":"A buildspec file declaration that overrides, for this build only, the latest one already defined in the build project.
If this value is set, it can be either an inline buildspec definition, the path to an alternate buildspec file relative to the value of the built-in CODEBUILD_SRC_DIR
environment variable, or the path to an S3 bucket. The bucket must be in the same Region as the build project. Specify the buildspec file using its ARN (for example, arn:aws:s3:::my-codebuild-sample2/buildspec.yml
). If this value is not provided or is set to an empty string, the source code must contain a buildspec file in its root directory. For more information, see Buildspec File Name and Storage Location.
A buildspec file declaration that overrides, for this build only, the latest one already defined in the build project.
If this value is set, it can be either an inline buildspec definition, the path to an alternate buildspec file relative to the value of the built-in CODEBUILD_SRC_DIR
environment variable, or the path to an S3 bucket. The bucket must be in the same AWS Region as the build project. Specify the buildspec file using its ARN (for example, arn:aws:s3:::my-codebuild-sample2/buildspec.yml
). If this value is not provided or is set to an empty string, the source code must contain a buildspec file in its root directory. For more information, see Buildspec File Name and Storage Location.
The Key Management Service customer master key (CMK) that overrides the one specified in the batch build project. The CMK key encrypts the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
The AWS Key Management Service (AWS KMS) customer master key (CMK) that overrides the one specified in the batch build project. The CMK key encrypts the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
A unique, case sensitive identifier you provide to ensure the idempotency of the StartBuildBatch
request. The token is included in the StartBuildBatch
request and is valid for five minutes. If you repeat the StartBuildBatch
request with the same token, but change a parameter, CodeBuild returns a parameter mismatch error.
A unique, case sensitive identifier you provide to ensure the idempotency of the StartBuildBatch
request. The token is included in the StartBuildBatch
request and is valid for five minutes. If you repeat the StartBuildBatch
request with the same token, but change a parameter, AWS CodeBuild returns a parameter mismatch error.
The type of credentials CodeBuild uses to pull images in your batch build. There are two valid values:
Specifies that CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust CodeBuild's service principal.
Specifies that CodeBuild uses your build project's service role.
When using a cross-account or private registry image, you must use SERVICE_ROLE
credentials. When using a CodeBuild curated image, you must use CODEBUILD
credentials.
The type of credentials AWS CodeBuild uses to pull images in your batch build. There are two valid values:
Specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
Specifies that AWS CodeBuild uses your build project's service role.
When using a cross-account or private registry image, you must use SERVICE_ROLE
credentials. When using an AWS CodeBuild curated image, you must use CODEBUILD
credentials.
The name of the CodeBuild build project to start running a build.
" + "documentation":"The name of the AWS CodeBuild build project to start running a build.
" }, "secondarySourcesOverride":{ "shape":"ProjectSources", @@ -3687,7 +3699,7 @@ }, "sourceVersion":{ "shape":"String", - "documentation":"The version of the build input to be built, for this build only. If not specified, the latest version is used. If specified, the contents depends on the source provider:
The commit ID, branch, or Git tag to use.
The commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the project level, then this sourceVersion
(at the build level) takes precedence.
For more information, see Source Version Sample with CodeBuild in the CodeBuild User Guide.
" + "documentation":"The version of the build input to be built, for this build only. If not specified, the latest version is used. If specified, the contents depends on the source provider:
The commit ID, branch, or Git tag to use.
The commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
The version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the project level, then this sourceVersion
(at the build level) takes precedence.
For more information, see Source Version Sample with CodeBuild in the AWS CodeBuild User Guide.
" }, "artifactsOverride":{ "shape":"ProjectArtifacts", @@ -3719,11 +3731,11 @@ }, "gitSubmodulesConfigOverride":{ "shape":"GitSubmodulesConfig", - "documentation":"Information about the Git submodules configuration for this build of an CodeBuild build project.
" + "documentation":"Information about the Git submodules configuration for this build of an AWS CodeBuild build project.
" }, "buildspecOverride":{ "shape":"String", - "documentation":"A buildspec file declaration that overrides, for this build only, the latest one already defined in the build project.
If this value is set, it can be either an inline buildspec definition, the path to an alternate buildspec file relative to the value of the built-in CODEBUILD_SRC_DIR
environment variable, or the path to an S3 bucket. The bucket must be in the same Region as the build project. Specify the buildspec file using its ARN (for example, arn:aws:s3:::my-codebuild-sample2/buildspec.yml
). If this value is not provided or is set to an empty string, the source code must contain a buildspec file in its root directory. For more information, see Buildspec File Name and Storage Location.
A buildspec file declaration that overrides, for this build only, the latest one already defined in the build project.
If this value is set, it can be either an inline buildspec definition, the path to an alternate buildspec file relative to the value of the built-in CODEBUILD_SRC_DIR
environment variable, or the path to an S3 bucket. The bucket must be in the same AWS Region as the build project. Specify the buildspec file using its ARN (for example, arn:aws:s3:::my-codebuild-sample2/buildspec.yml
). If this value is not provided or is set to an empty string, the source code must contain a buildspec file in its root directory. For more information, see Buildspec File Name and Storage Location.
Set to true to report to your source provider the status of a build's start and completion. If you use this option with a source provider other than GitHub, GitHub Enterprise, or Bitbucket, an invalidInputException
is thrown.
To be able to report the build status to the source provider, the user associated with the source provider must have write access to the repo. If the user does not have write access, the build status cannot be updated. For more information, see Source provider access in the CodeBuild User Guide.
The status of a build triggered by a webhook is always reported to your source provider.
Set to true to report to your source provider the status of a build's start and completion. If you use this option with a source provider other than GitHub, GitHub Enterprise, or Bitbucket, an invalidInputException
is thrown.
To be able to report the build status to the source provider, the user associated with the source provider must have write access to the repo. If the user does not have write access, the build status cannot be updated. For more information, see Source provider access in the AWS CodeBuild User Guide.
The status of a build triggered by a webhook is always reported to your source provider.
The Key Management Service customer master key (CMK) that overrides the one specified in the build project. The CMK key encrypts the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
The AWS Key Management Service (AWS KMS) customer master key (CMK) that overrides the one specified in the build project. The CMK key encrypts the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
A unique, case sensitive identifier you provide to ensure the idempotency of the StartBuild request. The token is included in the StartBuild request and is valid for 5 minutes. If you repeat the StartBuild request with the same token, but change a parameter, CodeBuild returns a parameter mismatch error.
" + "documentation":"A unique, case sensitive identifier you provide to ensure the idempotency of the StartBuild request. The token is included in the StartBuild request and is valid for 5 minutes. If you repeat the StartBuild request with the same token, but change a parameter, AWS CodeBuild returns a parameter mismatch error.
" }, "logsConfigOverride":{ "shape":"LogsConfig", @@ -3791,7 +3803,7 @@ }, "imagePullCredentialsTypeOverride":{ "shape":"ImagePullCredentialsType", - "documentation":"The type of credentials CodeBuild uses to pull images in your build. There are two valid values:
Specifies that CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust CodeBuild's service principal.
Specifies that CodeBuild uses your build project's service role.
When using a cross-account or private registry image, you must use SERVICE_ROLE
credentials. When using a CodeBuild curated image, you must use CODEBUILD
credentials.
The type of credentials AWS CodeBuild uses to pull images in your build. There are two valid values:
Specifies that AWS CodeBuild uses its own credentials. This requires that you modify your ECR repository policy to trust AWS CodeBuild's service principal.
Specifies that AWS CodeBuild uses your build project's service role.
When using a cross-account or private registry image, you must use SERVICE_ROLE
credentials. When using an AWS CodeBuild curated image, you must use CODEBUILD
credentials.
The tag's value.
" } }, - "documentation":"A tag, consisting of a key and a value.
This tag is available for use by Amazon Web Services services that support tags in CodeBuild.
" + "documentation":"A tag, consisting of a key and a value.
This tag is available for use by AWS services that support tags in AWS CodeBuild.
" }, "TagList":{ "type":"list", @@ -3987,7 +3999,7 @@ }, "sourceVersion":{ "shape":"String", - "documentation":"A version of the build input to be built for this project. If not specified, the latest version is used. If specified, it must be one of:
For CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the build level, then that version takes precedence over this sourceVersion
(at the project level).
For more information, see Source Version Sample with CodeBuild in the CodeBuild User Guide.
" + "documentation":"A version of the build input to be built for this project. If not specified, the latest version is used. If specified, it must be one of:
For AWS CodeCommit: the commit ID, branch, or Git tag to use.
For GitHub: the commit ID, pull request ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a pull request ID is specified, it must use the format pr/pull-request-ID
(for example pr/25
). If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Bitbucket: the commit ID, branch name, or tag name that corresponds to the version of the source code you want to build. If a branch name is specified, the branch's HEAD commit ID is used. If not specified, the default branch's HEAD commit ID is used.
For Amazon S3: the version ID of the object that represents the build input ZIP file to use.
If sourceVersion
is specified at the build level, then that version takes precedence over this sourceVersion
(at the project level).
For more information, see Source Version Sample with CodeBuild in the AWS CodeBuild User Guide.
" }, "secondarySourceVersions":{ "shape":"ProjectSecondarySourceVersions", @@ -4011,11 +4023,11 @@ }, "serviceRole":{ "shape":"NonEmptyString", - "documentation":"The replacement ARN of the Identity and Access Management role that enables CodeBuild to interact with dependent Amazon Web Services services on behalf of the Amazon Web Services account.
" + "documentation":"The replacement ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
" }, "timeoutInMinutes":{ "shape":"TimeOut", - "documentation":"The replacement value in minutes, from 5 to 480 (8 hours), for CodeBuild to wait before timing out any related build that did not get marked as completed.
" + "documentation":"The replacement value in minutes, from 5 to 480 (8 hours), for AWS CodeBuild to wait before timing out any related build that did not get marked as completed.
" }, "queuedTimeoutInMinutes":{ "shape":"TimeOut", @@ -4023,15 +4035,15 @@ }, "encryptionKey":{ "shape":"NonEmptyString", - "documentation":"The Key Management Service customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/<alias-name>
).
An updated list of tag key and value pairs associated with this build project.
These tags are available for use by Amazon Web Services services that support CodeBuild build project tags.
" + "documentation":"An updated list of tag key and value pairs associated with this build project.
These tags are available for use by AWS services that support AWS CodeBuild build project tags.
" }, "vpcConfig":{ "shape":"VpcConfig", - "documentation":"VpcConfig enables CodeBuild to access resources in an Amazon VPC.
" + "documentation":"VpcConfig enables AWS CodeBuild to access resources in an Amazon VPC.
" }, "badgeEnabled":{ "shape":"WrapperBoolean", @@ -4039,7 +4051,7 @@ }, "logsConfig":{ "shape":"LogsConfig", - "documentation":"Information about logs for the build project. A project can create logs in CloudWatch Logs, logs in an S3 bucket, or both.
" + "documentation":"Information about logs for the build project. A project can create logs in Amazon CloudWatch Logs, logs in an S3 bucket, or both.
" }, "fileSystemLocations":{ "shape":"ProjectFileSystemLocations", @@ -4075,7 +4087,7 @@ }, "tags":{ "shape":"TagList", - "documentation":"An updated list of tag key and value pairs associated with this report group.
These tags are available for use by Amazon Web Services services that support CodeBuild report group tags.
" + "documentation":"An updated list of tag key and value pairs associated with this report group.
These tags are available for use by AWS services that support AWS CodeBuild report group tags.
" } } }, @@ -4094,7 +4106,7 @@ "members":{ "projectName":{ "shape":"ProjectName", - "documentation":"The name of the CodeBuild project.
" + "documentation":"The name of the AWS CodeBuild project.
" }, "branchFilter":{ "shape":"String", @@ -4119,7 +4131,7 @@ "members":{ "webhook":{ "shape":"Webhook", - "documentation":"Information about a repository's webhook that is associated with a project in CodeBuild.
" + "documentation":"Information about a repository's webhook that is associated with a project in AWS CodeBuild.
" } } }, @@ -4145,7 +4157,7 @@ "documentation":"A list of one or more security groups IDs in your Amazon VPC.
" } }, - "documentation":"Information about the VPC configuration that CodeBuild accesses.
" + "documentation":"Information about the VPC configuration that AWS CodeBuild accesses.
" }, "Webhook":{ "type":"structure", @@ -4156,7 +4168,7 @@ }, "payloadUrl":{ "shape":"NonEmptyString", - "documentation":"The CodeBuild endpoint where webhook events are sent.
" + "documentation":"The AWS CodeBuild endpoint where webhook events are sent.
" }, "secret":{ "shape":"NonEmptyString", @@ -4179,7 +4191,7 @@ "documentation":"A timestamp that indicates the last time a repository's secret token was modified.
" } }, - "documentation":"Information about a webhook that connects repository events to a build project in CodeBuild.
" + "documentation":"Information about a webhook that connects repository events to a build project in AWS CodeBuild.
" }, "WebhookBuildType":{ "type":"string", @@ -4225,5 +4237,5 @@ "WrapperInt":{"type":"integer"}, "WrapperLong":{"type":"long"} }, - "documentation":"CodeBuild is a fully managed build service in the cloud. CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools, such as Apache Maven, Gradle, and more. You can also fully customize build environments in CodeBuild to use your own build tools. CodeBuild scales automatically to meet peak build requests. You pay only for the build time you consume. For more information about CodeBuild, see the CodeBuild User Guide.
" + "documentation":"AWS CodeBuild is a fully managed build service in the cloud. AWS CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. AWS CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools, such as Apache Maven, Gradle, and more. You can also fully customize build environments in AWS CodeBuild to use your own build tools. AWS CodeBuild scales automatically to meet peak build requests. You pay only for the build time you consume. For more information about AWS CodeBuild, see the AWS CodeBuild User Guide.
" } From d9d46f8944205a42b3958e4470bfb064d6bc5bed Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 21 Jul 2021 18:08:58 +0000 Subject: [PATCH 06/13] AWSKendraFrontendService Update: Amazon Kendra now provides a data source connector for Amazon WorkDocs. For more information, see https://docs.aws.amazon.com/kendra/latest/dg/data-source-workdocs.html --- ...ture-AWSKendraFrontendService-79e6338.json | 6 ++ .../codegen-resources/service-2.json | 60 ++++++++++++++++--- 2 files changed, 57 insertions(+), 9 deletions(-) create mode 100644 .changes/next-release/feature-AWSKendraFrontendService-79e6338.json diff --git a/.changes/next-release/feature-AWSKendraFrontendService-79e6338.json b/.changes/next-release/feature-AWSKendraFrontendService-79e6338.json new file mode 100644 index 000000000000..a99d4572afe9 --- /dev/null +++ b/.changes/next-release/feature-AWSKendraFrontendService-79e6338.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWSKendraFrontendService", + "contributor": "", + "description": "Amazon Kendra now provides a data source connector for Amazon WorkDocs. For more information, see https://docs.aws.amazon.com/kendra/latest/dg/data-source-workdocs.html" +} diff --git a/services/kendra/src/main/resources/codegen-resources/service-2.json b/services/kendra/src/main/resources/codegen-resources/service-2.json index 5429a3a9b3ec..0e390427ec90 100644 --- a/services/kendra/src/main/resources/codegen-resources/service-2.json +++ b/services/kendra/src/main/resources/codegen-resources/service-2.json @@ -646,7 +646,7 @@ {"shape":"AccessDeniedException"}, {"shape":"InternalServerException"} ], - "documentation":"Enables you to provide feedback to Amazon Kendra to improve the performance of the service.
" + "documentation":"Enables you to provide feedback to Amazon Kendra to improve the performance of your index.
" }, "TagResource":{ "name":"TagResource", @@ -889,7 +889,7 @@ "documentation":"Performs a less than or equals operation on two document attributes. Use with a document attribute of type Integer
or Long
.
Provides filtering the query results based on document attributes.
When you use the AndAllFilters
or OrAllFilters
, filters you can use 2 layers under the first attribute filter. For example, you can use:
<AndAllFilters>
<OrAllFilters>
<EqualTo>
If you use more than 2 layers, you receive a ValidationException
exception with the message \"AttributeFilter
cannot have a depth of more than 2.\"
Provides filtering the query results based on document attributes.
When you use the AndAllFilters
or OrAllFilters
, filters you can use 2 layers under the first attribute filter. For example, you can use:
<AndAllFilters>
<OrAllFilters>
<EqualTo>
If you use more than 2 layers, you receive a ValidationException
exception with the message \"AttributeFilter
cannot have a depth of more than 2.\"
If you use more than 10 attribute filters, you receive a ValidationException
exception with the message \"AttributeFilter
cannot have a length of more than 10\".
The amount of extra storage capacity for an index. A single capacity unit for an index provides 150 GB of storage space or 500,000 documents, whichever is reached first.
" + "documentation":"The amount of extra storage capacity for an index. A single capacity unit provides 30 GB of storage space or 100,000 documents, whichever is reached first.
" }, "QueryCapacityUnits":{ "shape":"QueryCapacityUnit", - "documentation":"The amount of extra query capacity for an index and GetQuerySuggestions capacity.
A single extra capacity unit for an index provides 0.5 queries per second or approximately 40,000 queries per day.
GetQuerySuggestions
capacity is 5 times the provisioned query capacity for an index. For example, the base capacity for an index is 0.5 queries per second, so GetQuerySuggestions capacity is 2.5 calls per second. If adding another 0.5 queries per second to total 1 queries per second for an index, the GetQuerySuggestions
capacity is 5 calls per second.
The amount of extra query capacity for an index and GetQuerySuggestions capacity.
A single extra capacity unit for an index provides 0.1 queries per second or approximately 8,000 queries per day.
GetQuerySuggestions
capacity is five times the provisioned query capacity for an index, or the base capacity of 2.5 calls per second, whichever is higher. For example, the base capacity for an index is 0.1 queries per second, and GetQuerySuggestions
capacity has a base of 2.5 calls per second. If you add another 0.1 queries per second to total 0.2 queries per second for an index, the GetQuerySuggestions
capacity is 2.5 calls per second (higher than five times 0.2 queries per second).
Specifies capacity units configured for your enterprise edition index. You can add and remove capacity units to tune an index to your requirements.
" @@ -1510,7 +1510,7 @@ }, "SecretArn":{ "shape":"SecretArn", - "documentation":"The Amazon Resource Name (ARN) of credentials stored in AWS Secrets Manager. The credentials should be a user/password pair. For more information, see Using a Database Data Source. For more information about AWS Secrets Manager, see What Is AWS Secrets Manager in the Secrets Manager user guide.
" + "documentation":"The Amazon Resource Name (ARN) of credentials stored in Secrets Manager. The credentials should be a user/password pair. For more information, see Using a Database Data Source. For more information about Secrets Manager, see What Is Secrets Manager in the Secrets Manager user guide.
" } }, "documentation":"Provides the information necessary to connect to a database.
" @@ -1653,7 +1653,7 @@ }, "Edition":{ "shape":"IndexEdition", - "documentation":"The Amazon Kendra edition to use for the index. Choose DEVELOPER_EDITION
for indexes intended for development, testing, or proof of concept. Use ENTERPRISE_EDITION
for your production databases. Once you set the edition for an index, it can't be changed.
The Edition
parameter is optional. If you don't supply a value, the default is ENTERPRISE_EDITION
.
The Amazon Kendra edition to use for the index. Choose DEVELOPER_EDITION
for indexes intended for development, testing, or proof of concept. Use ENTERPRISE_EDITION
for your production databases. Once you set the edition for an index, it can't be changed.
The Edition
parameter is optional. If you don't supply a value, the default is ENTERPRISE_EDITION
.
For more information on quota limits for enterprise and developer editions, see Quotas.
" }, "RoleArn":{ "shape":"RoleArn", @@ -1828,7 +1828,11 @@ "shape":"GoogleDriveConfiguration", "documentation":"Provides configuration for data sources that connect to Google Drive.
" }, - "WebCrawlerConfiguration":{"shape":"WebCrawlerConfiguration"} + "WebCrawlerConfiguration":{"shape":"WebCrawlerConfiguration"}, + "WorkDocsConfiguration":{ + "shape":"WorkDocsConfiguration", + "documentation":"Provides the configuration information to connect to WorkDocs as your data source.
" + } }, "documentation":"Configuration information for a Amazon Kendra data source.
" }, @@ -2076,7 +2080,8 @@ "CUSTOM", "CONFLUENCE", "GOOGLEDRIVE", - "WEBCRAWLER" + "WEBCRAWLER", + "WORKDOCS" ] }, "DataSourceVpcConfiguration":{ @@ -3973,6 +3978,12 @@ "DESCENDING" ] }, + "OrganizationId":{ + "type":"string", + "max":12, + "min":12, + "pattern":"d-[0-9a-fA-F]{10}" + }, "Port":{ "type":"integer", "max":65535, @@ -4968,7 +4979,7 @@ }, "SecretArn":{ "shape":"SecretArn", - "documentation":"The Amazon Resource Name (ARN) of credentials stored in AWS Secrets Manager. The credentials should be a user/password pair. If you use SharePoint Sever, you also need to provide the sever domain name as part of the credentials. For more information, see Using a Microsoft SharePoint Data Source. For more information about AWS Secrets Manager, see What Is AWS Secrets Manager in the Secrets Manager user guide.
" + "documentation":"The Amazon Resource Name (ARN) of credentials stored in Secrets Manager. The credentials should be a user/password pair. If you use SharePoint Server, you also need to provide the sever domain name as part of the credentials. For more information, see Using a Microsoft SharePoint Data Source. For more information about Secrets Manager, see What Is Secrets Manager in the Secrets Manager user guide.
" }, "CrawlAttachments":{ "shape":"Boolean", @@ -5798,6 +5809,37 @@ "SUBDOMAINS", "EVERYTHING" ] + }, + "WorkDocsConfiguration":{ + "type":"structure", + "required":["OrganizationId"], + "members":{ + "OrganizationId":{ + "shape":"OrganizationId", + "documentation":"The identifier of the directory corresponding to your Amazon WorkDocs site repository.
You can find the organization ID in the AWS Directory Service by going to Active Directory, then Directories. Your Amazon WorkDocs site directory has an ID, which is the organization ID. You can also set up a new Amazon WorkDocs directory in the AWS Directory Service console and enable an Amazon WorkDocs site for the directory in the Amazon WorkDocs console.
" + }, + "CrawlComments":{ + "shape":"Boolean", + "documentation":" TRUE
to include comments on documents in your index. Including comments in your index means each comment is a document that can be searched on.
The default is set to FALSE
.
TRUE
to use the change logs to update documents in your index instead of scanning all documents.
If you are syncing your Amazon WorkDocs data source with your index for the first time, all documents are scanned. After your first sync, you can use the change logs to update your documents in your index for future syncs.
The default is set to FALSE
.
A list of regular expression patterns to include certain files in your Amazon WorkDocs site repository. Files that match the patterns are included in the index. Files that don't match the patterns are excluded from the index. If a file matches both an inclusion pattern and an exclusion pattern, the exclusion pattern takes precedence and the file isn’t included in the index.
" + }, + "ExclusionPatterns":{ + "shape":"DataSourceInclusionsExclusionsStrings", + "documentation":"A list of regular expression patterns to exclude certain files in your Amazon WorkDocs site repository. Files that match the patterns are excluded from the index. Files that don’t match the patterns are included in the index. If a file matches both an inclusion pattern and an exclusion pattern, the exclusion pattern takes precedence and the file isn’t included in the index.
" + }, + "FieldMappings":{ + "shape":"DataSourceToIndexFieldMappingList", + "documentation":"A list of DataSourceToIndexFieldMapping
objects that map Amazon WorkDocs field names to custom index field names in Amazon Kendra. You must first create the custom index fields using the UpdateIndex
operation before you map to Amazon WorkDocs fields. For more information, see Mapping Data Source Fields. The Amazon WorkDocs data source field names need to exist in your Amazon WorkDocs custom metadata.
Provides the configuration information to connect to Amazon WorkDocs as your data source.
The Amazon WorkDocs connector is available in the Oregon, Northern Virginia, Sydney, Singapore, and Ireland Regions.
" } }, "documentation":"Amazon Kendra is a service for indexing large document sets.
" From e0ecc5662355c62b2cd152374dce3c8577606156 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 21 Jul 2021 18:08:57 +0000 Subject: [PATCH 07/13] AWS Proton Update: Documentation updates for AWS Proton --- .../feature-AWSProton-757f399.json | 6 ++ .../codegen-resources/service-2.json | 78 +++++++++---------- 2 files changed, 45 insertions(+), 39 deletions(-) create mode 100644 .changes/next-release/feature-AWSProton-757f399.json diff --git a/.changes/next-release/feature-AWSProton-757f399.json b/.changes/next-release/feature-AWSProton-757f399.json new file mode 100644 index 000000000000..873e896e0749 --- /dev/null +++ b/.changes/next-release/feature-AWSProton-757f399.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Proton", + "contributor": "", + "description": "Documentation updates for AWS Proton" +} diff --git a/services/proton/src/main/resources/codegen-resources/service-2.json b/services/proton/src/main/resources/codegen-resources/service-2.json index 10bc9a4cf099..644df7057ccc 100644 --- a/services/proton/src/main/resources/codegen-resources/service-2.json +++ b/services/proton/src/main/resources/codegen-resources/service-2.json @@ -29,7 +29,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"In a management account, an environment account connection request is accepted. When the environment account connection request is accepted, AWS Proton can use the associated IAM role to provision environment infrastructure resources in the associated environment account.
For more information, see Environment account connections in the AWS Proton Administration guide.
", + "documentation":"In a management account, an environment account connection request is accepted. When the environment account connection request is accepted, AWS Proton can use the associated IAM role to provision environment infrastructure resources in the associated environment account.
For more information, see Environment account connections in the AWS Proton Administrator guide.
", "idempotent":true }, "CancelEnvironmentDeployment":{ @@ -48,7 +48,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"Attempts to cancel an environment deployment on an UpdateEnvironment action, if the deployment is IN_PROGRESS
. For more information, see Update an environment in the AWS Proton Administration guide.
The following list includes potential cancellation scenarios.
If the cancellation attempt succeeds, the resulting deployment state is CANCELLED
.
If the cancellation attempt fails, the resulting deployment state is FAILED
.
If the current UpdateEnvironment action succeeds before the cancellation attempt starts, the resulting deployment state is SUCCEEDED
and the cancellation attempt has no effect.
Attempts to cancel an environment deployment on an UpdateEnvironment action, if the deployment is IN_PROGRESS
. For more information, see Update an environment in the AWS Proton Administrator guide.
The following list includes potential cancellation scenarios.
If the cancellation attempt succeeds, the resulting deployment state is CANCELLED
.
If the cancellation attempt fails, the resulting deployment state is FAILED
.
If the current UpdateEnvironment action succeeds before the cancellation attempt starts, the resulting deployment state is SUCCEEDED
and the cancellation attempt has no effect.
Attempts to cancel a service instance deployment on an UpdateServiceInstance action, if the deployment is IN_PROGRESS
. For more information, see Update a service instance in the AWS Proton Administration guide or the AWS Proton User guide.
The following list includes potential cancellation scenarios.
If the cancellation attempt succeeds, the resulting deployment state is CANCELLED
.
If the cancellation attempt fails, the resulting deployment state is FAILED
.
If the current UpdateServiceInstance action succeeds before the cancellation attempt starts, the resulting deployment state is SUCCEEDED
and the cancellation attempt has no effect.
Attempts to cancel a service instance deployment on an UpdateServiceInstance action, if the deployment is IN_PROGRESS
. For more information, see Update a service instance in the AWS Proton Administrator guide or the AWS Proton User guide.
The following list includes potential cancellation scenarios.
If the cancellation attempt succeeds, the resulting deployment state is CANCELLED
.
If the cancellation attempt fails, the resulting deployment state is FAILED
.
If the current UpdateServiceInstance action succeeds before the cancellation attempt starts, the resulting deployment state is SUCCEEDED
and the cancellation attempt has no effect.
Attempts to cancel a service pipeline deployment on an UpdateServicePipeline action, if the deployment is IN_PROGRESS
. For more information, see Update a service pipeline in the AWS Proton Administration guide or the AWS Proton User guide.
The following list includes potential cancellation scenarios.
If the cancellation attempt succeeds, the resulting deployment state is CANCELLED
.
If the cancellation attempt fails, the resulting deployment state is FAILED
.
If the current UpdateServicePipeline action succeeds before the cancellation attempt starts, the resulting deployment state is SUCCEEDED
and the cancellation attempt has no effect.
Attempts to cancel a service pipeline deployment on an UpdateServicePipeline action, if the deployment is IN_PROGRESS
. For more information, see Update a service pipeline in the AWS Proton Administrator guide or the AWS Proton User guide.
The following list includes potential cancellation scenarios.
If the cancellation attempt succeeds, the resulting deployment state is CANCELLED
.
If the cancellation attempt fails, the resulting deployment state is FAILED
.
If the current UpdateServicePipeline action succeeds before the cancellation attempt starts, the resulting deployment state is SUCCEEDED
and the cancellation attempt has no effect.
Deploy a new environment. An AWS Proton environment is created from an environment template that defines infrastructure and resources that can be shared across services. For more information, see the Environments in the AWS Proton Administration Guide.
", + "documentation":"Deploy a new environment. An AWS Proton environment is created from an environment template that defines infrastructure and resources that can be shared across services. For more information, see the Environments in the AWS Proton Administrator Guide.
", "idempotent":true }, "CreateEnvironmentAccountConnection":{ @@ -122,7 +122,7 @@ {"shape":"ConflictException"}, {"shape":"InternalServerException"} ], - "documentation":"Create an environment account connection in an environment account so that environment infrastructure resources can be provisioned in the environment account from the management account.
An environment account connection is a secure bi-directional connection between a management account and an environment account that maintains authorization and permissions. For more information, see Environment account connections in the AWS Proton Administration guide.
", + "documentation":"Create an environment account connection in an environment account so that environment infrastructure resources can be provisioned in the environment account from a management account.
An environment account connection is a secure bi-directional connection between a management account and an environment account that maintains authorization and permissions. For more information, see Environment account connections in the AWS Proton Administrator guide.
", "idempotent":true }, "CreateEnvironmentTemplate":{ @@ -141,7 +141,7 @@ {"shape":"ConflictException"}, {"shape":"InternalServerException"} ], - "documentation":"Create an environment template for AWS Proton. For more information, see Environment Templates in the AWS Proton Administration Guide.
You can create an environment template in one of the two following ways:
Register and publish a standard environment template that instructs AWS Proton to deploy and manage environment infrastructure.
Register and publish a customer managed environment template that connects AWS Proton to your existing provisioned infrastructure that you manage. AWS Proton doesn't manage your existing provisioned infrastructure. To create an environment template for customer provisioned and managed infrastructure, include the provisioning
parameter and set the value to CUSTOMER_MANAGED
. For more information, see Register and publish an environment template in the AWS Proton Administration Guide.
Create an environment template for AWS Proton. For more information, see Environment Templates in the AWS Proton Administrator Guide.
You can create an environment template in one of the two following ways:
Register and publish a standard environment template that instructs AWS Proton to deploy and manage environment infrastructure.
Register and publish a customer managed environment template that connects AWS Proton to your existing provisioned infrastructure that you manage. AWS Proton doesn't manage your existing provisioned infrastructure. To create an environment template for customer provisioned and managed infrastructure, include the provisioning
parameter and set the value to CUSTOMER_MANAGED
. For more information, see Register and publish an environment template in the AWS Proton Administrator Guide.
Create an AWS Proton service. An AWS Proton service is an instantiation of a service template and often includes several service instances and pipeline. For more information, see Services in the AWS Proton Administration Guide and Services in the AWS Proton User Guide.
", + "documentation":"Create an AWS Proton service. An AWS Proton service is an instantiation of a service template and often includes several service instances and pipeline. For more information, see Services in the AWS Proton Administrator Guide and Services in the AWS Proton User Guide.
", "idempotent":true }, "CreateServiceTemplate":{ @@ -200,7 +200,7 @@ {"shape":"ConflictException"}, {"shape":"InternalServerException"} ], - "documentation":"Create a service template. The administrator creates a service template to define standardized infrastructure and an optional CICD service pipeline. Developers, in turn, select the service template from AWS Proton. If the selected service template includes a service pipeline definition, they provide a link to their source code repository. AWS Proton then deploys and manages the infrastructure defined by the selected service template. For more information, see Service Templates in the AWS Proton Administration Guide.
", + "documentation":"Create a service template. The administrator creates a service template to define standardized infrastructure and an optional CICD service pipeline. Developers, in turn, select the service template from AWS Proton. If the selected service template includes a service pipeline definition, they provide a link to their source code repository. AWS Proton then deploys and manages the infrastructure defined by the selected service template. For more information, see Service Templates in the AWS Proton Administrator Guide.
", "idempotent":true }, "CreateServiceTemplateVersion":{ @@ -258,7 +258,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"In an environment account, delete an environment account connection.
After you delete an environment account connection that’s in use by an AWS Proton environment, AWS Proton can’t manage the environment infrastructure resources until a new environment account connection is accepted for the environment account and associated environment. You're responsible for cleaning up provisioned resources that remain without an environment connection.
For more information, see Environment account connections in the AWS Proton Administration guide.
", + "documentation":"In an environment account, delete an environment account connection.
After you delete an environment account connection that’s in use by an AWS Proton environment, AWS Proton can’t manage the environment infrastructure resources until a new environment account connection is accepted for the environment account and associated environment. You're responsible for cleaning up provisioned resources that remain without an environment connection.
For more information, see Environment account connections in the AWS Proton Administrator guide.
", "idempotent":true }, "DeleteEnvironmentTemplate":{ @@ -405,7 +405,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"In an environment account, view the detail data for an environment account connection.
For more information, see Environment account connections in the AWS Proton Administration guide.
" + "documentation":"In an environment account, view the detail data for an environment account connection.
For more information, see Environment account connections in the AWS Proton Administrator guide.
" }, "GetEnvironmentTemplate":{ "name":"GetEnvironmentTemplate", @@ -523,7 +523,7 @@ {"shape":"ThrottlingException"}, {"shape":"InternalServerException"} ], - "documentation":"View a list of environment account connections.
For more information, see Environment account connections in the AWS Proton Administration guide.
" + "documentation":"View a list of environment account connections.
For more information, see Environment account connections in the AWS Proton Administrator guide.
" }, "ListEnvironmentTemplateVersions":{ "name":"ListEnvironmentTemplateVersions", @@ -656,7 +656,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"List tags for a resource. For more information, see AWS Proton resources and tagging in the AWS Proton Administration Guide or AWS Proton User Guide.
" + "documentation":"List tags for a resource. For more information, see AWS Proton resources and tagging in the AWS Proton Administrator Guide or AWS Proton User Guide.
" }, "RejectEnvironmentAccountConnection":{ "name":"RejectEnvironmentAccountConnection", @@ -674,7 +674,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"In a management account, reject an environment account connection from another environment account.
After you reject an environment account connection request, you won’t be able to accept or use the rejected environment account connection.
You can’t reject an environment account connection that is connected to an environment.
For more information, see Environment account connections in the AWS Proton Administration guide.
", + "documentation":"In a management account, reject an environment account connection from another environment account.
After you reject an environment account connection request, you won’t be able to accept or use the rejected environment account connection.
You can’t reject an environment account connection that is connected to an environment.
For more information, see Environment account connections in the AWS Proton Administrator guide.
", "idempotent":true }, "TagResource":{ @@ -693,7 +693,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"Tag a resource. For more information, see AWS Proton resources and tagging in the AWS Proton Administration Guide or AWS Proton User Guide.
", + "documentation":"Tag a resource. For more information, see AWS Proton resources and tagging in the AWS Proton Administrator Guide or AWS Proton User Guide.
", "idempotent":true }, "UntagResource":{ @@ -712,7 +712,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"Remove a tag from a resource. For more information, see AWS Proton resources and tagging in the AWS Proton Administration Guide or AWS Proton User Guide.
", + "documentation":"Remove a tag from a resource. For more information, see AWS Proton resources and tagging in the AWS Proton Administrator Guide or AWS Proton User Guide.
", "idempotent":true }, "UpdateAccountSettings":{ @@ -748,7 +748,7 @@ {"shape":"ResourceNotFoundException"}, {"shape":"InternalServerException"} ], - "documentation":"Update an environment.
If the environment is associated with an environment account connection, don't update or include the protonServiceRoleArn
parameter to update or connect to an environment account connection.
You can only update to a new environment account connection if it was created in the same environment account that the current environment account connection was created in and associated with the current environment.
If the environment isn't associated with an environment account connection, don't update or include the environmentAccountConnectionId
parameter to update or connect to an environment account connection.
You can update either the environmentAccountConnectionId
or protonServiceRoleArn
parameter and value. You can’t update both.
There are four modes for updating an environment as described in the following. The deploymentType
field defines the mode.
NONE
In this mode, a deployment doesn't occur. Only the requested metadata parameters are updated.
CURRENT_VERSION
In this mode, the environment is deployed and updated with the new spec that you provide. Only requested parameters are updated. Don’t include minor or major version parameters when you use this deployment-type
.
MINOR_VERSION
In this mode, the environment is deployed and updated with the published, recommended (latest) minor version of the current major version in use, by default. You can also specify a different minor version of the current major version in use.
MAJOR_VERSION
In this mode, the environment is deployed and updated with the published, recommended (latest) major and minor version of the current template, by default. You can also specify a different major version that's higher than the major version in use and a minor version (optional).
Update an environment.
If the environment is associated with an environment account connection, don't update or include the protonServiceRoleArn
parameter to update or connect to an environment account connection.
You can only update to a new environment account connection if it was created in the same environment account that the current environment account connection was created in and is associated with the current environment.
If the environment isn't associated with an environment account connection, don't update or include the environmentAccountConnectionId
parameter to update or connect to an environment account connection.
You can update either the environmentAccountConnectionId
or protonServiceRoleArn
parameter and value. You can’t update both.
There are four modes for updating an environment as described in the following. The deploymentType
field defines the mode.
NONE
In this mode, a deployment doesn't occur. Only the requested metadata parameters are updated.
CURRENT_VERSION
In this mode, the environment is deployed and updated with the new spec that you provide. Only requested parameters are updated. Don’t include minor or major version parameters when you use this deployment-type
.
MINOR_VERSION
In this mode, the environment is deployed and updated with the published, recommended (latest) minor version of the current major version in use, by default. You can also specify a different minor version of the current major version in use.
MAJOR_VERSION
In this mode, the environment is deployed and updated with the published, recommended (latest) major and minor version of the current template, by default. You can also specify a different major version that's higher than the major version in use and a minor version (optional).
In an environment account, update an environment account connection to use a new IAM role.
For more information, see Environment account connections in the AWS Proton Administration guide.
", + "documentation":"In an environment account, update an environment account connection to use a new IAM role.
For more information, see Environment account connections in the AWS Proton Administrator guide.
", "idempotent":true }, "UpdateEnvironmentTemplate":{ @@ -1126,7 +1126,7 @@ }, "environmentAccountConnectionId":{ "shape":"EnvironmentAccountConnectionId", - "documentation":"The ID of the environment account connection that you provide if you're provisioning your environment infrastructure resources to an environment account. You must include either the environmentAccountConnectionId
or protonServiceRoleArn
parameter and value. For more information, see Environment account connections in the AWS Proton Administration guide.
The ID of the environment account connection that you provide if you're provisioning your environment infrastructure resources to an environment account. You must include either the environmentAccountConnectionId
or protonServiceRoleArn
parameter and value. For more information, see Environment account connections in the AWS Proton Administrator guide.
A link to a YAML formatted spec file that provides inputs as defined in the environment template bundle schema file. For more information, see Environments in the AWS Proton Administration Guide.
" + "documentation":"A link to a YAML formatted spec file that provides inputs as defined in the environment template bundle schema file. For more information, see Environments in the AWS Proton Administrator Guide.
" }, "tags":{ "shape":"TagList", - "documentation":"Create tags for your environment. For more information, see AWS Proton resources and tagging in the AWS Proton Administration Guide or AWS Proton User Guide.
" + "documentation":"Create tags for your environment. For more information, see AWS Proton resources and tagging in the AWS Proton Administrator Guide or AWS Proton User Guide.
" }, "templateMajorVersion":{ "shape":"TemplateVersionPart", @@ -1154,7 +1154,7 @@ }, "templateName":{ "shape":"ResourceName", - "documentation":"The name of the environment template. For more information, see Environment Templates in the AWS Proton Administration Guide.
" + "documentation":"The name of the environment template. For more information, see Environment Templates in the AWS Proton Administrator Guide.
" } } }, @@ -1194,7 +1194,7 @@ }, "tags":{ "shape":"TagList", - "documentation":"Create tags for your environment template. For more information, see AWS Proton resources and tagging in the AWS Proton Administration Guide or AWS Proton User Guide.
" + "documentation":"Create tags for your environment template. For more information, see AWS Proton resources and tagging in the AWS Proton Administrator Guide or AWS Proton User Guide.
" } } }, @@ -1226,7 +1226,7 @@ }, "majorVersion":{ "shape":"TemplateVersionPart", - "documentation":"To create a new minor version of the environment template, include a majorVersion
.
To create a new major and minor version of the environment template, exclude majorVersion
.
To create a new minor version of the environment template, include a majorVersion
.
To create a new major and minor version of the environment template, exclude majorVersion
.
The ARN of the repository connection. For more information, see Set up repository connection in the AWS Proton Administration Guide and Getting started in the AWS Proton User Guide. Don't include this parameter if your service template doesn't include a service pipeline.
" + "documentation":"The Amazon Resource Name (ARN) of the repository connection. For more information, see Set up repository connection in the AWS Proton Administrator Guide and Setting up with AWS Proton in the AWS Proton User Guide. Don't include this parameter if your service template doesn't include a service pipeline.
" }, "repositoryId":{ "shape":"RepositoryId", @@ -1283,11 +1283,11 @@ }, "spec":{ "shape":"SpecContents", - "documentation":"A link to a spec file that provides inputs as defined in the service template bundle schema file. The spec file is in YAML format. Don’t include pipeline inputs in the spec if your service template doesn’t include a service pipeline. For more information, see Create a service in the AWS Proton Administration Guide and Create a service in the AWS Proton User Guide.
" + "documentation":"A link to a spec file that provides inputs as defined in the service template bundle schema file. The spec file is in YAML format. Don’t include pipeline inputs in the spec if your service template doesn’t include a service pipeline. For more information, see Create a service in the AWS Proton Administrator Guide and Create a service in the AWS Proton User Guide.
" }, "tags":{ "shape":"TagList", - "documentation":"Create tags for your service. For more information, see AWS Proton resources and tagging in the AWS Proton Administration Guide or AWS Proton User Guide.
" + "documentation":"Create tags for your service. For more information, see AWS Proton resources and tagging in the AWS Proton Administrator Guide or AWS Proton User Guide.
" }, "templateMajorVersion":{ "shape":"TemplateVersionPart", @@ -1335,11 +1335,11 @@ }, "pipelineProvisioning":{ "shape":"Provisioning", - "documentation":"AWS Proton includes a service pipeline for your service by default. When included, this parameter indicates that an AWS Proton service pipeline won't be included for your service. Once specified, this parameter can't be changed. For more information, see Service template bundles in the AWS Proton Administration Guide.
" + "documentation":"AWS Proton includes a service pipeline for your service by default. When included, this parameter indicates that an AWS Proton service pipeline won't be included for your service. Once specified, this parameter can't be changed. For more information, see Service template bundles in the AWS Proton Administrator Guide.
" }, "tags":{ "shape":"TagList", - "documentation":"Create tags for your service template. For more information, see AWS Proton resources and tagging in the AWS Proton Administration Guide or AWS Proton User Guide.
" + "documentation":"Create tags for your service template. For more information, see AWS Proton resources and tagging in the AWS Proton Administrator Guide or AWS Proton User Guide.
" } } }, @@ -1649,7 +1649,7 @@ }, "protonServiceRoleArn":{ "shape":"Arn", - "documentation":"The ARN of the AWS Proton service role that allows AWS Proton to make calls to other services on your behalf.
" + "documentation":"The Amazon Resource Name (ARN) of the AWS Proton service role that allows AWS Proton to make calls to other services on your behalf.
" }, "provisioning":{ "shape":"Provisioning", @@ -1669,7 +1669,7 @@ }, "templateName":{ "shape":"ResourceName", - "documentation":"The ARN of the environment template.
" + "documentation":"The Amazon Resource Name (ARN) of the environment template.
" } }, "documentation":"The environment detail data. An AWS Proton environment is a set resources shared across an AWS Proton service.
" @@ -1865,7 +1865,7 @@ }, "protonServiceRoleArn":{ "shape":"Arn", - "documentation":"The ARN of the AWS Proton service role that allows AWS Proton to make calls to other services on your behalf.
" + "documentation":"The Amazon Resource Name (ARN) of the AWS Proton service role that allows AWS Proton to make calls to other services on your behalf.
" }, "provisioning":{ "shape":"Provisioning", @@ -2627,7 +2627,7 @@ }, "resourceArn":{ "shape":"Arn", - "documentation":"The ARN of the resource for the listed tags.
" + "documentation":"The Amazon Resource Name (ARN) of the resource for the listed tags.
" } } }, @@ -2770,7 +2770,7 @@ }, "repositoryConnectionArn":{ "shape":"Arn", - "documentation":"The ARN of the repository connection. For more information, see Set up a repository connection in the AWS Proton Administration Guide and Getting started in the AWS Proton User Guide.
" + "documentation":"The Amazon Resource Name (ARN) of the repository connection. For more information, see Set up a repository connection in the AWS Proton Administrator Guide and Setting up with AWS Proton in the AWS Proton User Guide.
" }, "repositoryId":{ "shape":"RepositoryId", @@ -3001,7 +3001,7 @@ "members":{ "message":{"shape":"ErrorMessage"} }, - "documentation":"A quota was exceeded. For more information, see AWS Proton Quotas in the AWS Proton Administration Guide.
", + "documentation":"A quota was exceeded. For more information, see AWS Proton Quotas in the AWS Proton Administrator Guide.
", "exception":true }, "ServiceStatus":{ @@ -3495,7 +3495,7 @@ }, "environmentAccountConnectionId":{ "shape":"EnvironmentAccountConnectionId", - "documentation":"The ID of the environment account connection.
You can only update to a new environment account connection if it was created in the same environment account that the current environment account connection was created in and associated with the current environment.
" + "documentation":"The ID of the environment account connection.
You can only update to a new environment account connection if it was created in the same environment account that the current environment account connection was created in and is associated with the current environment.
" }, "name":{ "shape":"ResourceName", @@ -3503,7 +3503,7 @@ }, "protonServiceRoleArn":{ "shape":"Arn", - "documentation":"The ARN of the AWS Proton service role that allows AWS Proton to make API calls to other services your behalf.
" + "documentation":"The Amazon Resource Name (ARN) of the AWS Proton service role that allows AWS Proton to make API calls to other services your behalf.
" }, "spec":{ "shape":"SpecContents", @@ -3611,7 +3611,7 @@ }, "spec":{ "shape":"SpecContents", - "documentation":"Lists the service instances to add and the existing service instances to remain. Omit the existing service instances to delete from the list. Don't include edits to the existing service instances or pipeline. For more information, see Edit a service in the AWS Proton Administration Guide or the AWS Proton User Guide.
" + "documentation":"Lists the service instances to add and the existing service instances to remain. Omit the existing service instances to delete from the list. Don't include edits to the existing service instances or pipeline. For more information, see Edit a service in the AWS Proton Administrator Guide or the AWS Proton User Guide.
" } } }, @@ -3791,5 +3791,5 @@ "exception":true } }, - "documentation":"This is the AWS Proton Service API Reference. It provides descriptions, syntax and usage examples for each of the actions and data types for the AWS Proton service.
The documentation for each action shows the Query API request parameters and the XML response.
Alternatively, you can use the AWS CLI to access an API. For more information, see the AWS Command Line Interface User Guide.
The AWS Proton service is a two-pronged automation framework. Administrators create service templates to provide standardized infrastructure and deployment tooling for serverless and container based applications. Developers, in turn, select from the available service templates to automate their application or service deployments.
Because administrators define the infrastructure and tooling that AWS Proton deploys and manages, they need permissions to use all of the listed API operations.
When developers select a specific infrastructure and tooling set, AWS Proton deploys their applications. To monitor their applications that are running on AWS Proton, developers need permissions to the service create, list, update and delete API operations and the service instance list and update API operations.
To learn more about AWS Proton administration, see the AWS Proton Administration Guide.
To learn more about deploying serverless and containerized applications on AWS Proton, see the AWS Proton User Guide.
Ensuring Idempotency
When you make a mutating API request, the request typically returns a result before the asynchronous workflows of the operation are complete. Operations might also time out or encounter other server issues before they're complete, even if the request already returned a result. This might make it difficult to determine whether the request succeeded. Moreover, you might need to retry the request multiple times to ensure that the operation completes successfully. However, if the original request and the subsequent retries are successful, the operation occurs multiple times. This means that you might create more resources than you intended.
Idempotency ensures that an API request action completes no more than one time. With an idempotent request, if the original request action completes successfully, any subsequent retries complete successfully without performing any further actions. However, the result might contain updated information, such as the current creation status.
The following lists of APIs are grouped according to methods that ensure idempotency.
Idempotent create APIs with a client token
The API actions in this list support idempotency with the use of a client token. The corresponding AWS CLI commands also support idempotency using a client token. A client token is a unique, case-sensitive string of up to 64 ASCII characters. To make an idempotent API request using one of these actions, specify a client token in the request. We recommend that you don't reuse the same client token for other API requests. If you don’t provide a client token for these APIs, a default client token is automatically provided by SDKs.
Given a request action that has succeeded:
If you retry the request using the same client token and the same parameters, the retry succeeds without performing any further actions other than returning the original resource detail data in the response.
If you retry the request using the same client token, but one or more of the parameters are different, the retry throws a ValidationException
with an IdempotentParameterMismatch
error.
Client tokens expire eight hours after a request is made. If you retry the request with the expired token, a new resource is created.
If the original resource is deleted and you retry the request, a new resource is created.
Idempotent create APIs with a client token:
CreateEnvironmentTemplateVersion
CreateServiceTemplateVersion
CreateEnvironmentAccountConnection
<p> <b>Idempotent delete APIs</b> </p> <p>Given a request action that has succeeded:</p> <p>When you retry the request with an API from this group and the resource was deleted, its metadata is returned in the response.</p> <p>If you retry and the resource doesn't exist, the response is empty.</p> <p>In both cases, the retry succeeds.</p> <p>Idempotent delete APIs:</p> <ul> <li> <p>DeleteEnvironmentTemplate</p> </li> <li> <p>DeleteEnvironmentTemplateVersion</p> </li> <li> <p>DeleteServiceTemplate</p> </li> <li> <p>DeleteServiceTemplateVersion</p> </li> <li> <p>DeleteEnvironmentAccountConnection</p> </li> </ul> <p> <b>Asynchronous idempotent delete APIs</b> </p> <p>Given a request action that has succeeded:</p> <p>If you retry the request with an API from this group, if the original request delete operation status is <code>DELETE_IN_PROGRESS</code>, the retry returns the resource detail data in the response without performing any further actions.</p> <p>If the original request delete operation is complete, a retry returns an empty response.</p> <p>Asynchronous idempotent delete APIs:</p> <ul> <li> <p>DeleteEnvironment</p> </li> <li> <p>DeleteService</p> </li> </ul>
"
+ "documentation":"This is the AWS Proton Service API Reference. It provides descriptions, syntax and usage examples for each of the actions and data types for the AWS Proton service.
The documentation for each action shows the Query API request parameters and the XML response.
Alternatively, you can use the AWS CLI to access an API. For more information, see the AWS Command Line Interface User Guide.
The AWS Proton service is a two-pronged automation framework. Administrators create service templates to provide standardized infrastructure and deployment tooling for serverless and container based applications. Developers, in turn, select from the available service templates to automate their application or service deployments.
Because administrators define the infrastructure and tooling that AWS Proton deploys and manages, they need permissions to use all of the listed API operations.
When developers select a specific infrastructure and tooling set, AWS Proton deploys their applications. To monitor their applications that are running on AWS Proton, developers need permissions to the service create, list, update and delete API operations and the service instance list and update API operations.
To learn more about AWS Proton administration, see the AWS Proton Administrator Guide.
To learn more about deploying serverless and containerized applications on AWS Proton, see the AWS Proton User Guide.
Ensuring Idempotency
When you make a mutating API request, the request typically returns a result before the asynchronous workflows of the operation are complete. Operations might also time out or encounter other server issues before they're complete, even if the request already returned a result. This might make it difficult to determine whether the request succeeded. Moreover, you might need to retry the request multiple times to ensure that the operation completes successfully. However, if the original request and the subsequent retries are successful, the operation occurs multiple times. This means that you might create more resources than you intended.
Idempotency ensures that an API request action completes no more than one time. With an idempotent request, if the original request action completes successfully, any subsequent retries complete successfully without performing any further actions. However, the result might contain updated information, such as the current creation status.
The following lists of APIs are grouped according to methods that ensure idempotency.
Idempotent create APIs with a client token
The API actions in this list support idempotency with the use of a client token. The corresponding AWS CLI commands also support idempotency using a client token. A client token is a unique, case-sensitive string of up to 64 ASCII characters. To make an idempotent API request using one of these actions, specify a client token in the request. We recommend that you don't reuse the same client token for other API requests. If you don’t provide a client token for these APIs, a default client token is automatically provided by SDKs.
Given a request action that has succeeded:
If you retry the request using the same client token and the same parameters, the retry succeeds without performing any further actions other than returning the original resource detail data in the response.
If you retry the request using the same client token, but one or more of the parameters are different, the retry throws a ValidationException
with an IdempotentParameterMismatch
error.
Client tokens expire eight hours after a request is made. If you retry the request with the expired token, a new resource is created.
If the original resource is deleted and you retry the request, a new resource is created.
Idempotent create APIs with a client token:
CreateEnvironmentTemplateVersion
CreateServiceTemplateVersion
CreateEnvironmentAccountConnection
Idempotent delete APIs
Given a request action that has succeeded:
When you retry the request with an API from this group and the resource was deleted, its metadata is returned in the response.
If you retry and the resource doesn't exist, the response is empty.
In both cases, the retry succeeds.
Idempotent delete APIs:
DeleteEnvironmentTemplate
DeleteEnvironmentTemplateVersion
DeleteServiceTemplate
DeleteServiceTemplateVersion
DeleteEnvironmentAccountConnection
Asynchronous idempotent delete APIs
Given a request action that has succeeded:
If you retry the request with an API from this group, if the original request delete operation status is DELETE_IN_PROGRESS
, the retry returns the resource detail data in the response without performing any further actions.
If the original request delete operation is complete, a retry returns an empty response.
Asynchronous idempotent delete APIs:
DeleteEnvironment
DeleteService
Deletes the configuration for asynchronous invocation for a function, version, or alias.
To configure options for asynchronous invocation, use PutFunctionEventInvokeConfig.
" }, @@ -835,7 +836,8 @@ {"shape":"ServiceException"}, {"shape":"ResourceNotFoundException"}, {"shape":"InvalidParameterValueException"}, - {"shape":"TooManyRequestsException"} + {"shape":"TooManyRequestsException"}, + {"shape":"ResourceConflictException"} ], "documentation":"Configures options for asynchronous invocation on a function, version, or alias. If a configuration already exists for a function, version, or alias, this operation overwrites it. If you exclude any settings, they are removed. To set one option without affecting existing settings for other options, use UpdateFunctionEventInvokeConfig.
By default, Lambda retries an asynchronous invocation twice if the function returns an error. It retains events in a queue for up to six hours. When an event fails all processing attempts or stays in the asynchronous invocation queue for too long, Lambda discards it. To retain discarded events, configure a dead-letter queue with UpdateFunctionConfiguration.
To send an invocation record to a queue, topic, function, or event bus, specify a destination. You can configure separate destinations for successful invocations (on-success) and events that fail all processing attempts (on-failure). You can configure destinations in addition to or instead of a dead-letter queue.
" }, @@ -1037,7 +1039,8 @@ {"shape":"ServiceException"}, {"shape":"ResourceNotFoundException"}, {"shape":"InvalidParameterValueException"}, - {"shape":"TooManyRequestsException"} + {"shape":"TooManyRequestsException"}, + {"shape":"ResourceConflictException"} ], "documentation":"Updates the configuration for asynchronous invocation for a function, version, or alias.
To configure options for asynchronous invocation, use PutFunctionEventInvokeConfig.
" } @@ -1537,7 +1540,7 @@ }, "SourceAccessConfigurations":{ "shape":"SourceAccessConfigurations", - "documentation":"An array of the authentication protocol, or the VPC components to secure your event source.
" + "documentation":"An array of authentication protocols or VPC components required to secure your event source.
" }, "SelfManagedEventSource":{ "shape":"SelfManagedEventSource", @@ -1994,7 +1997,7 @@ }, "StartingPosition":{ "shape":"EventSourcePosition", - "documentation":"The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK Streams sources. AT_TIMESTAMP
is only supported for Amazon Kinesis streams.
The position in a stream from which to start reading. Required for Amazon Kinesis, Amazon DynamoDB, and Amazon MSK stream sources. AT_TIMESTAMP
is supported only for Amazon Kinesis streams.
(Streams and SQS standard queues) The maximum amount of time to gather records before invoking the function, in seconds. The default value is zero.
" + "documentation":"(Streams and Amazon SQS standard queues) The maximum amount of time to gather records before invoking the function, in seconds. The default value is zero.
" }, "ParallelizationFactor":{ "shape":"ParallelizationFactor", - "documentation":"(Streams only) The number of batches to process from each shard concurrently. The default value is 1.
" + "documentation":"(Streams only) The number of batches to process concurrently from each shard. The default value is 1.
" }, "EventSourceArn":{ "shape":"Arn", @@ -2022,11 +2025,11 @@ }, "LastModified":{ "shape":"Date", - "documentation":"The date that the event source mapping was last updated, or its state changed.
" + "documentation":"The date that the event source mapping was last updated or that its state changed.
" }, "LastProcessingResult":{ "shape":"String", - "documentation":"The result of the last Lambda invocation of your Lambda function.
" + "documentation":"The result of the last Lambda invocation of your function.
" }, "State":{ "shape":"String", @@ -2034,7 +2037,7 @@ }, "StateTransitionReason":{ "shape":"String", - "documentation":"Indicates whether the last change to the event source mapping was made by a user, or by the Lambda service.
" + "documentation":"Indicates whether a user or Lambda made the last change to the event source mapping.
" }, "DestinationConfig":{ "shape":"DestinationConfig", @@ -2046,15 +2049,15 @@ }, "Queues":{ "shape":"Queues", - "documentation":"(MQ) The name of the Amazon MQ broker destination queue to consume.
" + "documentation":"(Amazon MQ) The name of the Amazon MQ broker destination queue to consume.
" }, "SourceAccessConfigurations":{ "shape":"SourceAccessConfigurations", - "documentation":"An array of the authentication protocol, or the VPC components to secure your event source.
" + "documentation":"An array of the authentication protocol, VPC components, or virtual host to secure and define your event source.
" }, "SelfManagedEventSource":{ "shape":"SelfManagedEventSource", - "documentation":"The Self-Managed Apache Kafka cluster for your event source.
" + "documentation":"The self-managed Apache Kafka cluster for your event source.
" }, "MaximumRecordAgeInSeconds":{ "shape":"MaximumRecordAgeInSeconds", @@ -2070,14 +2073,14 @@ }, "TumblingWindowInSeconds":{ "shape":"TumblingWindowInSeconds", - "documentation":"(Streams only) The duration in seconds of a processing window. The range is between 1 second up to 900 seconds.
" + "documentation":"(Streams only) The duration in seconds of a processing window. The range is 1–900 seconds.
" }, "FunctionResponseTypes":{ "shape":"FunctionResponseTypeList", "documentation":"(Streams only) A list of current response type enums applied to the event source mapping.
" } }, - "documentation":"A mapping between an Amazon Web Services resource and an Lambda function. See CreateEventSourceMapping for details.
" + "documentation":"A mapping between an Amazon Web Services resource and a Lambda function. For details, see CreateEventSourceMapping.
" }, "EventSourceMappingsList":{ "type":"list", @@ -3327,7 +3330,7 @@ }, "MaxItems":{ "shape":"MaxListItems", - "documentation":"The maximum number of event source mappings to return.
", + "documentation":"The maximum number of event source mappings to return. Note that ListEventSourceMappings returns a maximum of 100 items in each response, even if you set the number higher.
", "location":"querystring", "locationName":"MaxItems" } @@ -4301,7 +4304,7 @@ "documentation":"The list of bootstrap servers for your Kafka brokers in the following format: \"KAFKA_BOOTSTRAP_SERVERS\": [\"abc.xyz.com:xxxx\",\"abc2.xyz.com:xxxx\"]
.
The Self-Managed Apache Kafka cluster for your event source.
" + "documentation":"The self-managed Apache Kafka cluster for your event source.
" }, "SensitiveString":{ "type":"string", @@ -4328,14 +4331,14 @@ "members":{ "Type":{ "shape":"SourceAccessType", - "documentation":"The type of authentication protocol or the VPC components for your event source. For example: \"Type\":\"SASL_SCRAM_512_AUTH\"
.
BASIC_AUTH
- (MQ) The Secrets Manager secret that stores your broker credentials.
VPC_SUBNET
- The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your Self-Managed Apache Kafka cluster.
VPC_SECURITY_GROUP
- The VPC security group used to manage access to your Self-Managed Apache Kafka brokers.
SASL_SCRAM_256_AUTH
- The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your Self-Managed Apache Kafka brokers.
SASL_SCRAM_512_AUTH
- The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your Self-Managed Apache Kafka brokers.
VIRTUAL_HOST
- The name of the virtual host in your RabbitMQ broker. Lambda will use this host as the event source.
The type of authentication protocol, VPC components, or virtual host for your event source. For example: \"Type\":\"SASL_SCRAM_512_AUTH\"
.
BASIC_AUTH
- (Amazon MQ) The Secrets Manager secret that stores your broker credentials.
BASIC_AUTH
- (Self-managed Apache Kafka) The Secrets Manager ARN of your secret key used for SASL/PLAIN authentication of your Apache Kafka brokers.
VPC_SUBNET
- The subnets associated with your VPC. Lambda connects to these subnets to fetch data from your self-managed Apache Kafka cluster.
VPC_SECURITY_GROUP
- The VPC security group used to manage access to your self-managed Apache Kafka brokers.
SASL_SCRAM_256_AUTH
- The Secrets Manager ARN of your secret key used for SASL SCRAM-256 authentication of your self-managed Apache Kafka brokers.
SASL_SCRAM_512_AUTH
- The Secrets Manager ARN of your secret key used for SASL SCRAM-512 authentication of your self-managed Apache Kafka brokers.
VIRTUAL_HOST
- (Amazon MQ) The name of the virtual host in your RabbitMQ broker. Lambda uses this RabbitMQ host as the event source.
The value for your chosen configuration in Type
. For example: \"URI\": \"arn:aws:secretsmanager:us-east-1:01234567890:secret:MyBrokerSecretName\"
.
You can specify the authentication protocol, or the VPC components to secure access to your event source.
" + "documentation":"To secure and define access to your event source, you can specify the authentication protocol, VPC components, or virtual host.
" }, "SourceAccessConfigurations":{ "type":"list", @@ -4681,7 +4684,7 @@ }, "SourceAccessConfigurations":{ "shape":"SourceAccessConfigurations", - "documentation":"An array of the authentication protocol, or the VPC components to secure your event source.
" + "documentation":"An array of authentication protocols or VPC components required to secure your event source.
" }, "TumblingWindowInSeconds":{ "shape":"TumblingWindowInSeconds", From 5d7ff1515e9ce109b191284cf2b7366e1b0046eb Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 21 Jul 2021 18:08:58 +0000 Subject: [PATCH 09/13] AWS Identity and Access Management Update: Documentation updates for AWS Identity and Access Management (IAM). --- .../feature-AWSIdentityandAccessManagement-ad451d8.json | 6 ++++++ .../iam/src/main/resources/codegen-resources/service-2.json | 4 ++-- 2 files changed, 8 insertions(+), 2 deletions(-) create mode 100644 .changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json diff --git a/.changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json b/.changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json new file mode 100644 index 000000000000..4abe1cca03fc --- /dev/null +++ b/.changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "AWS Identity and Access Management", + "contributor": "", + "description": "Documentation updates for AWS Identity and Access Management (IAM)." +} diff --git a/services/iam/src/main/resources/codegen-resources/service-2.json b/services/iam/src/main/resources/codegen-resources/service-2.json index b54fae4af90c..fa18c51596ea 100644 --- a/services/iam/src/main/resources/codegen-resources/service-2.json +++ b/services/iam/src/main/resources/codegen-resources/service-2.json @@ -233,7 +233,7 @@ {"shape":"ConcurrentModificationException"}, {"shape":"ServiceFailureException"} ], - "documentation":"Creates an IAM entity to describe an identity provider (IdP) that supports OpenID Connect (OIDC).
The OIDC provider that you create with this operation can be used as a principal in a role's trust policy. Such a policy establishes a trust relationship between Amazon Web Services and the OIDC provider.
If you are using an OIDC identity provider from Google, Facebook, or Amazon Cognito, you don't need to create a separate IAM identity provider. These OIDC identity providers are already built-in to Amazon Web Services and are available for your use. Instead, you can move directly to creating new roles using your identity provider. To learn more, see Creating a role for web identity or OpenID connect federation in the IAM User Guide.
When you create the IAM OIDC provider, you specify the following:
The URL of the OIDC identity provider (IdP) to trust
A list of client IDs (also known as audiences) that identify the application or applications allowed to authenticate using the OIDC provider
A list of thumbprints of one or more server certificates that the IdP uses
You get all of this information from the OIDC IdP that you want to use to access Amazon Web Services.
The trust for the OIDC provider is derived from the IAM provider that this operation creates. Therefore, it is best to limit access to the CreateOpenIDConnectProvider operation to highly privileged users.
Creates an IAM entity to describe an identity provider (IdP) that supports OpenID Connect (OIDC).
The OIDC provider that you create with this operation can be used as a principal in a role's trust policy. Such a policy establishes a trust relationship between Amazon Web Services and the OIDC provider.
If you are using an OIDC identity provider from Google, Facebook, or Amazon Cognito, you don't need to create a separate IAM identity provider. These OIDC identity providers are already built-in to Amazon Web Services and are available for your use. Instead, you can move directly to creating new roles using your identity provider. To learn more, see Creating a role for web identity or OpenID connect federation in the IAM User Guide.
When you create the IAM OIDC provider, you specify the following:
The URL of the OIDC identity provider (IdP) to trust
A list of client IDs (also known as audiences) that identify the application or applications allowed to authenticate using the OIDC provider
A list of thumbprints of one or more server certificates that the IdP uses
You get all of this information from the OIDC IdP that you want to use to access Amazon Web Services.
Amazon Web Services secures communication with some OIDC identity providers (IdPs) through our library of trusted certificate authorities (CAs) instead of using a certificate thumbprint to verify your IdP server certificate. These OIDC IdPs include Google, and those that use an Amazon S3 bucket to host a JSON Web Key Set (JWKS) endpoint. In these cases, your legacy thumbprint remains in your configuration, but is no longer used for validation.
The trust for the OIDC provider is derived from the IAM provider that this operation creates. Therefore, it is best to limit access to the CreateOpenIDConnectProvider operation to highly privileged users.
Replaces the existing list of server certificate thumbprints associated with an OpenID Connect (OIDC) provider resource object with a new list of thumbprints.
The list that you pass with this operation completely replaces the existing list of thumbprints. (The lists are not merged.)
Typically, you need to update a thumbprint only when the identity provider's certificate changes, which occurs rarely. However, if the provider's certificate does change, any attempt to assume an IAM role that specifies the OIDC provider as a principal fails until the certificate thumbprint is updated.
Trust for the OIDC provider is derived from the provider's certificate and is validated by the thumbprint. Therefore, it is best to limit access to the UpdateOpenIDConnectProviderThumbprint
operation to highly privileged users.
Replaces the existing list of server certificate thumbprints associated with an OpenID Connect (OIDC) provider resource object with a new list of thumbprints.
The list that you pass with this operation completely replaces the existing list of thumbprints. (The lists are not merged.)
Typically, you need to update a thumbprint only when the identity provider certificate changes, which occurs rarely. However, if the provider's certificate does change, any attempt to assume an IAM role that specifies the OIDC provider as a principal fails until the certificate thumbprint is updated.
Amazon Web Services secures communication with some OIDC identity providers (IdPs) through our library of trusted certificate authorities (CAs) instead of using a certificate thumbprint to verify your IdP server certificate. These OIDC IdPs include Google, and those that use an Amazon S3 bucket to host a JSON Web Key Set (JWKS) endpoint. In these cases, your legacy thumbprint remains in your configuration, but is no longer used for validation.
Trust for the OIDC provider is derived from the provider certificate and is validated by the thumbprint. Therefore, it is best to limit access to the UpdateOpenIDConnectProviderThumbprint
operation to highly privileged users.
The name of the attribute.
The following attribute is supported by all load balancers:
deletion_protection.enabled
- Indicates whether deletion protection is enabled. The value is true
or false
. The default is false
.
The following attributes are supported by both Application Load Balancers and Network Load Balancers:
access_logs.s3.enabled
- Indicates whether access logs are enabled. The value is true
or false
. The default is false
.
access_logs.s3.bucket
- The name of the S3 bucket for the access logs. This attribute is required if access logs are enabled. The bucket must exist in the same region as the load balancer and have a bucket policy that grants Elastic Load Balancing permissions to write to the bucket.
access_logs.s3.prefix
- The prefix for the location in the S3 bucket for the access logs.
The following attributes are supported by only Application Load Balancers:
idle_timeout.timeout_seconds
- The idle timeout value, in seconds. The valid range is 1-4000 seconds. The default is 60 seconds.
routing.http.desync_mitigation_mode
- Determines how the load balancer handles requests that might pose a security risk to your application. The possible values are monitor
, defensive
, and strictest
. The default is defensive
.
routing.http.drop_invalid_header_fields.enabled
- Indicates whether HTTP headers with invalid header fields are removed by the load balancer (true
) or routed to targets (false
). The default is false
.
routing.http2.enabled
- Indicates whether HTTP/2 is enabled. The value is true
or false
. The default is true
. Elastic Load Balancing requires that message header names contain only alphanumeric characters and hyphens.
waf.fail_open.enabled
- Indicates whether to allow a WAF-enabled load balancer to route requests to targets if it is unable to forward the request to Amazon Web Services WAF. The value is true
or false
. The default is false
.
The following attribute is supported by Network Load Balancers and Gateway Load Balancers:
load_balancing.cross_zone.enabled
- Indicates whether cross-zone load balancing is enabled. The value is true
or false
. The default is false
.
The name of the attribute.
The following attribute is supported by all load balancers:
deletion_protection.enabled
- Indicates whether deletion protection is enabled. The value is true
or false
. The default is false
.
The following attributes are supported by both Application Load Balancers and Network Load Balancers:
access_logs.s3.enabled
- Indicates whether access logs are enabled. The value is true
or false
. The default is false
.
access_logs.s3.bucket
- The name of the S3 bucket for the access logs. This attribute is required if access logs are enabled. The bucket must exist in the same region as the load balancer and have a bucket policy that grants Elastic Load Balancing permissions to write to the bucket.
access_logs.s3.prefix
- The prefix for the location in the S3 bucket for the access logs.
The following attributes are supported by only Application Load Balancers:
idle_timeout.timeout_seconds
- The idle timeout value, in seconds. The valid range is 1-4000 seconds. The default is 60 seconds.
routing.http.desync_mitigation_mode
- Determines how the load balancer handles requests that might pose a security risk to your application. The possible values are monitor
, defensive
, and strictest
. The default is defensive
.
routing.http.drop_invalid_header_fields.enabled
- Indicates whether HTTP headers with invalid header fields are removed by the load balancer (true
) or routed to targets (false
). The default is false
.
Indicates whether the two headers (x-amzn-tls-version
and x-amzn-tls-cipher-suite
), which contain information about the negotiated TLS version and cipher suite, are added to the client request before sending it to the target. The x-amzn-tls-version
header has information about the TLS protocol version negotiated with the client, and the x-amzn-tls-cipher-suite
header has information about the cipher suite negotiated with the client. Both headers are in OpenSSL format. The possible values for the attribute are true
and false
. The default is false
.
routing.http2.enabled
- Indicates whether HTTP/2 is enabled. The value is true
or false
. The default is true
. Elastic Load Balancing requires that message header names contain only alphanumeric characters and hyphens.
waf.fail_open.enabled
- Indicates whether to allow a WAF-enabled load balancer to route requests to targets if it is unable to forward the request to Amazon Web Services WAF. The value is true
or false
. The default is false
.
The following attribute is supported by Network Load Balancers and Gateway Load Balancers:
load_balancing.cross_zone.enabled
- Indicates whether cross-zone load balancing is enabled. The value is true
or false
. The default is false
.
The protocol the load balancer uses when performing health checks on targets. For Application Load Balancers, the default is HTTP. For Network Load Balancers and Gateway Load Balancers, the default is TCP. The TCP protocol is not supported for health checks if the protocol of the target group is HTTP or HTTPS. It is supported for health checks only if the protocol of the target group is TCP, TLS, UDP, or TCP_UDP. The GENEVE, TLS, UDP, and TCP_UDP protocols are not supported for health checks.
With Network Load Balancers, you can't modify this setting.
" + "documentation":"The protocol the load balancer uses when performing health checks on targets. The TCP protocol is supported for health checks only if the protocol of the target group is TCP, TLS, UDP, or TCP_UDP. The GENEVE, TLS, UDP, and TCP_UDP protocols are not supported for health checks.
With Network Load Balancers, you can't modify this setting.
" }, "HealthCheckPort":{ "shape":"HealthCheckPort", From 3ed8b80aaa59402a8982b5f99539f8b49687b1c9 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 21 Jul 2021 18:09:04 +0000 Subject: [PATCH 11/13] Amazon Personalize Update: My AWS Service (placeholder) - Making minProvisionedTPS an optional parameter when creating a campaign. If not provided, it defaults to 1. --- .../feature-AmazonPersonalize-8f23858.json | 6 +++++ .../codegen-resources/service-2.json | 27 +++++++++---------- 2 files changed, 19 insertions(+), 14 deletions(-) create mode 100644 .changes/next-release/feature-AmazonPersonalize-8f23858.json diff --git a/.changes/next-release/feature-AmazonPersonalize-8f23858.json b/.changes/next-release/feature-AmazonPersonalize-8f23858.json new file mode 100644 index 000000000000..0f66e62ce55c --- /dev/null +++ b/.changes/next-release/feature-AmazonPersonalize-8f23858.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Personalize", + "contributor": "", + "description": "My AWS Service (placeholder) - Making minProvisionedTPS an optional parameter when creating a campaign. If not provided, it defaults to 1." +} diff --git a/services/personalize/src/main/resources/codegen-resources/service-2.json b/services/personalize/src/main/resources/codegen-resources/service-2.json index 69802bd99937..bdfd34eff7ed 100644 --- a/services/personalize/src/main/resources/codegen-resources/service-2.json +++ b/services/personalize/src/main/resources/codegen-resources/service-2.json @@ -81,7 +81,7 @@ {"shape":"LimitExceededException"}, {"shape":"ResourceInUseException"} ], - "documentation":" Creates a job that exports data from your dataset to an Amazon S3 bucket. To allow Amazon Personalize to export the training data, you must specify an service-linked AWS Identity and Access Management (IAM) role that gives Amazon Personalize PutObject
permissions for your Amazon S3 bucket. For information, see Exporting a dataset in the Amazon Personalize developer guide.
Status
A dataset export job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the export job, call DescribeDatasetExportJob, and specify the Amazon Resource Name (ARN) of the dataset export job. The dataset export is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the job failed.
Creates a job that exports data from your dataset to an Amazon S3 bucket. To allow Amazon Personalize to export the training data, you must specify a service-linked IAM role that gives Amazon Personalize PutObject
permissions for your Amazon S3 bucket. For information, see Exporting a dataset in the Amazon Personalize developer guide.
Status
A dataset export job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the export job, call DescribeDatasetExportJob, and specify the Amazon Resource Name (ARN) of the dataset export job. The dataset export is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the job failed.
Creates an empty dataset group. A dataset group contains related datasets that supply data for training a model. A dataset group can contain at most three datasets, one for each type of dataset:
Interactions
Items
Users
To train a model (create a solution), a dataset group that contains an Interactions
dataset is required. Call CreateDataset to add a dataset to the group.
A dataset group can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING
To get the status of the dataset group, call DescribeDatasetGroup. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the creation failed.
You must wait until the status
of the dataset group is ACTIVE
before adding a dataset to the group.
You can specify an AWS Key Management Service (KMS) key to encrypt the datasets in the group. If you specify a KMS key, you must also include an AWS Identity and Access Management (IAM) role that has permission to access the key.
APIs that require a dataset group ARN in the request
Related APIs
" + "documentation":"Creates an empty dataset group. A dataset group contains related datasets that supply data for training a model. A dataset group can contain at most three datasets, one for each type of dataset:
Interactions
Items
Users
To train a model (create a solution), a dataset group that contains an Interactions
dataset is required. Call CreateDataset to add a dataset to the group.
A dataset group can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
DELETE PENDING
To get the status of the dataset group, call DescribeDatasetGroup. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the creation failed.
You must wait until the status
of the dataset group is ACTIVE
before adding a dataset to the group.
You can specify an Key Management Service (KMS) key to encrypt the datasets in the group. If you specify a KMS key, you must also include an Identity and Access Management (IAM) role that has permission to access the key.
APIs that require a dataset group ARN in the request
Related APIs
" }, "CreateDatasetImportJob":{ "name":"CreateDatasetImportJob", @@ -114,7 +114,7 @@ {"shape":"LimitExceededException"}, {"shape":"ResourceInUseException"} ], - "documentation":"Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset. To allow Amazon Personalize to import the training data, you must specify an AWS Identity and Access Management (IAM) service role that has permission to read from the data source, as Amazon Personalize makes a copy of your data and processes it in an internal AWS system. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources.
The dataset import job replaces any existing data in the dataset that you imported in bulk.
Status
A dataset import job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the import job, call DescribeDatasetImportJob, providing the Amazon Resource Name (ARN) of the dataset import job. The dataset import is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the job failed.
Importing takes time. You must wait until the status shows as ACTIVE before training a model using the dataset.
Related APIs
" + "documentation":"Creates a job that imports training data from your data source (an Amazon S3 bucket) to an Amazon Personalize dataset. To allow Amazon Personalize to import the training data, you must specify an IAM service role that has permission to read from the data source, as Amazon Personalize makes a copy of your data and processes it internally. For information on granting access to your Amazon S3 bucket, see Giving Amazon Personalize Access to Amazon S3 Resources.
The dataset import job replaces any existing data in the dataset that you imported in bulk.
Status
A dataset import job can be in one of the following states:
CREATE PENDING > CREATE IN_PROGRESS > ACTIVE -or- CREATE FAILED
To get the status of the import job, call DescribeDatasetImportJob, providing the Amazon Resource Name (ARN) of the dataset import job. The dataset import is complete when the status shows as ACTIVE. If the status shows as CREATE FAILED, the response includes a failureReason
key, which describes why the job failed.
Importing takes time. You must wait until the status shows as ACTIVE before training a model using the dataset.
Related APIs
" }, "CreateEventTracker":{ "name":"CreateEventTracker", @@ -1207,8 +1207,7 @@ "type":"structure", "required":[ "name", - "solutionVersionArn", - "minProvisionedTPS" + "solutionVersionArn" ], "members":{ "name":{ @@ -1261,7 +1260,7 @@ }, "roleArn":{ "shape":"RoleArn", - "documentation":"The Amazon Resource Name (ARN) of the AWS Identity and Access Management service role that has permissions to add data to your output Amazon S3 bucket.
" + "documentation":"The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket.
" }, "jobOutput":{ "shape":"DatasetExportJobOutput", @@ -1288,11 +1287,11 @@ }, "roleArn":{ "shape":"RoleArn", - "documentation":"The ARN of the IAM role that has permissions to access the KMS key. Supplying an IAM role is only valid when also specifying a KMS key.
" + "documentation":"The ARN of the Identity and Access Management (IAM) role that has permissions to access the Key Management Service (KMS) key. Supplying an IAM role is only valid when also specifying a KMS key.
" }, "kmsKeyArn":{ "shape":"KmsKeyArn", - "documentation":"The Amazon Resource Name (ARN) of a KMS key used to encrypt the datasets.
" + "documentation":"The Amazon Resource Name (ARN) of a Key Management Service (KMS) key used to encrypt the datasets.
" } } }, @@ -1602,7 +1601,7 @@ }, "roleArn":{ "shape":"Arn", - "documentation":"The Amazon Resource Name (ARN) of the AWS Identity and Access Management service role that has permissions to add data to your output Amazon S3 bucket.
" + "documentation":"The Amazon Resource Name (ARN) of the IAM service role that has permissions to add data to your output Amazon S3 bucket.
" }, "status":{ "shape":"Status", @@ -1691,7 +1690,7 @@ }, "kmsKeyArn":{ "shape":"KmsKeyArn", - "documentation":"The Amazon Resource Name (ARN) of the KMS key used to encrypt the datasets.
" + "documentation":"The Amazon Resource Name (ARN) of the Key Management Service (KMS) key used to encrypt the datasets.
" }, "creationDateTime":{ "shape":"Date", @@ -1706,7 +1705,7 @@ "documentation":"If creating a dataset group fails, provides the reason why.
" } }, - "documentation":"A dataset group is a collection of related datasets (Interactions, User, and Item). You create a dataset group by calling CreateDatasetGroup. You then create a dataset and add it to a dataset group by calling CreateDataset. The dataset group is used to create and train a solution by calling CreateSolution. A dataset group can contain only one of each type of dataset.
You can specify an AWS Key Management Service (KMS) key to encrypt the datasets in the group.
" + "documentation":"A dataset group is a collection of related datasets (Interactions, User, and Item). You create a dataset group by calling CreateDatasetGroup. You then create a dataset and add it to a dataset group by calling CreateDataset. The dataset group is used to create and train a solution by calling CreateSolution. A dataset group can contain only one of each type of dataset.
You can specify an Key Management Service (KMS) key to encrypt the datasets in the group.
" }, "DatasetGroupSummary":{ "type":"structure", @@ -1764,7 +1763,7 @@ }, "roleArn":{ "shape":"Arn", - "documentation":"The ARN of the AWS Identity and Access Management (IAM) role that has permissions to read from the Amazon S3 data source.
" + "documentation":"The ARN of the IAM role that has permissions to read from the Amazon S3 data source.
" }, "status":{ "shape":"Status", @@ -2358,7 +2357,7 @@ }, "accountId":{ "shape":"AccountId", - "documentation":"The Amazon AWS account that owns the event tracker.
" + "documentation":"The Amazon Web Services account that owns the event tracker.
" }, "trackingId":{ "shape":"TrackingId", @@ -3267,7 +3266,7 @@ }, "kmsKeyArn":{ "shape":"KmsKeyArn", - "documentation":"The Amazon Resource Name (ARN) of the Amazon Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files of a batch inference job.
" + "documentation":"The Amazon Resource Name (ARN) of the Key Management Service (KMS) key that Amazon Personalize uses to encrypt or decrypt the input and output files of a batch inference job.
" } }, "documentation":"The configuration details of an Amazon S3 input or output bucket.
" From 09c9ba7bfc4ecd2133ff585a54641051aa5adef4 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 21 Jul 2021 18:08:59 +0000 Subject: [PATCH 12/13] Amazon Elastic MapReduce Update: EMR now supports new DescribeReleaseLabel and ListReleaseLabel APIs. They can provide Amazon EMR release label details. You can programmatically list available releases and applications for a specific Amazon EMR release label. --- ...eature-AmazonElasticMapReduce-50b796c.json | 6 + .../codegen-resources/paginators-1.json | 5 + .../codegen-resources/service-2.json | 237 ++++++++++++++---- 3 files changed, 194 insertions(+), 54 deletions(-) create mode 100644 .changes/next-release/feature-AmazonElasticMapReduce-50b796c.json diff --git a/.changes/next-release/feature-AmazonElasticMapReduce-50b796c.json b/.changes/next-release/feature-AmazonElasticMapReduce-50b796c.json new file mode 100644 index 000000000000..5161dedd4099 --- /dev/null +++ b/.changes/next-release/feature-AmazonElasticMapReduce-50b796c.json @@ -0,0 +1,6 @@ +{ + "type": "feature", + "category": "Amazon Elastic MapReduce", + "contributor": "", + "description": "EMR now supports new DescribeReleaseLabel and ListReleaseLabel APIs. They can provide Amazon EMR release label details. You can programmatically list available releases and applications for a specific Amazon EMR release label." 
+} diff --git a/services/emr/src/main/resources/codegen-resources/paginators-1.json b/services/emr/src/main/resources/codegen-resources/paginators-1.json index d5b5407b3464..5ea61f92522c 100644 --- a/services/emr/src/main/resources/codegen-resources/paginators-1.json +++ b/services/emr/src/main/resources/codegen-resources/paginators-1.json @@ -33,6 +33,11 @@ "output_token": "Marker", "result_key": "NotebookExecutions" }, + "ListReleaseLabels": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + }, "ListSecurityConfigurations": { "input_token": "Marker", "output_token": "Marker", diff --git a/services/emr/src/main/resources/codegen-resources/service-2.json b/services/emr/src/main/resources/codegen-resources/service-2.json index e905589f1ff3..640b4429dfa1 100644 --- a/services/emr/src/main/resources/codegen-resources/service-2.json +++ b/services/emr/src/main/resources/codegen-resources/service-2.json @@ -79,7 +79,7 @@ {"shape":"InternalServerError"}, {"shape":"InvalidRequestException"} ], - "documentation":"Cancels a pending step or steps in a running cluster. Available only in Amazon EMR versions 4.8.0 and later, excluding version 5.0.0. A maximum of 256 steps are allowed in each CancelSteps request. CancelSteps is idempotent but asynchronous; it does not guarantee that a step will be canceled, even if the request is successfully submitted. You can only cancel steps that are in a PENDING
state.
Cancels a pending step or steps in a running cluster. Available only in Amazon EMR versions 4.8.0 and later, excluding version 5.0.0. A maximum of 256 steps are allowed in each CancelSteps request. CancelSteps is idempotent but asynchronous; it does not guarantee that a step will be canceled, even if the request is successfully submitted. When you use Amazon EMR versions 5.28.0 and later, you can cancel steps that are in a PENDING
or RUNNING
state. In earlier versions of Amazon EMR, you can only cancel steps that are in a PENDING
state.
Provides details of a notebook execution.
" }, + "DescribeReleaseLabel":{ + "name":"DescribeReleaseLabel", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeReleaseLabelInput"}, + "output":{"shape":"DescribeReleaseLabelOutput"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"InvalidRequestException"} + ], + "documentation":"Provides EMR release label details, such as releases available the region where the API request is run, and the available applications for a specific EMR release label. Can also list EMR release versions that support a specified version of Spark.
" + }, "DescribeSecurityConfiguration":{ "name":"DescribeSecurityConfiguration", "http":{ @@ -258,7 +272,7 @@ {"shape":"InternalServerException"}, {"shape":"InvalidRequestException"} ], - "documentation":"Returns the Amazon EMR block public access configuration for your AWS account in the current Region. For more information see Configure Block Public Access for Amazon EMR in the Amazon EMR Management Guide.
" + "documentation":"Returns the Amazon EMR block public access configuration for your account in the current Region. For more information see Configure Block Public Access for Amazon EMR in the Amazon EMR Management Guide.
" }, "GetManagedScalingPolicy":{ "name":"GetManagedScalingPolicy", @@ -310,7 +324,7 @@ {"shape":"InternalServerException"}, {"shape":"InvalidRequestException"} ], - "documentation":"Provides the status of all clusters visible to this AWS account. Allows you to filter the list of clusters based on certain criteria; for example, filtering by cluster creation date and time or by status. This call returns a maximum of 50 clusters per call, but returns a marker to track the paging of the cluster list across multiple ListClusters calls.
" + "documentation":"Provides the status of all clusters visible to this account. Allows you to filter the list of clusters based on certain criteria; for example, filtering by cluster creation date and time or by status. This call returns a maximum of 50 clusters in unsorted order per call, but returns a marker to track the paging of the cluster list across multiple ListClusters calls.
" }, "ListInstanceFleets":{ "name":"ListInstanceFleets", @@ -368,6 +382,20 @@ ], "documentation":"Provides summaries of all notebook executions. You can filter the list based on multiple criteria such as status, time range, and editor id. Returns a maximum of 50 notebook executions and a marker to track the paging of a longer notebook execution list across multiple ListNotebookExecution
calls.
Retrieves release labels of EMR services in the region where the API is called.
" + }, "ListSecurityConfigurations":{ "name":"ListSecurityConfigurations", "http":{ @@ -394,7 +422,7 @@ {"shape":"InternalServerException"}, {"shape":"InvalidRequestException"} ], - "documentation":"Provides a list of steps for the cluster in reverse order unless you specify stepIds
with the request of filter by StepStates
. You can specify a maximum of 10 stepIDs
.
Provides a list of steps for the cluster in reverse order unless you specify stepIds
with the request or filter by StepStates
. You can specify a maximum of 10 stepIDs
. The CLI automatically paginates results to return a list greater than 50 steps. To return more than 50 steps using the CLI, specify a Marker
, which is a pagination token that indicates the next set of steps to retrieve.
Returns a list of all Amazon EMR Studios associated with the AWS account. The list includes details such as ID, Studio Access URL, and creation time for each Studio.
" + "documentation":"Returns a list of all Amazon EMR Studios associated with the account. The list includes details such as ID, Studio Access URL, and creation time for each Studio.
" }, "ModifyCluster":{ "name":"ModifyCluster", @@ -485,7 +513,7 @@ {"shape":"InternalServerException"}, {"shape":"InvalidRequestException"} ], - "documentation":"Creates or updates an Amazon EMR block public access configuration for your AWS account in the current Region. For more information see Configure Block Public Access for Amazon EMR in the Amazon EMR Management Guide.
" + "documentation":"Creates or updates an Amazon EMR block public access configuration for your account in the current Region. For more information see Configure Block Public Access for Amazon EMR in the Amazon EMR Management Guide.
" }, "PutManagedScalingPolicy":{ "name":"PutManagedScalingPolicy", @@ -566,7 +594,7 @@ "errors":[ {"shape":"InternalServerError"} ], - "documentation":"Sets the Cluster$VisibleToAllUsers value, which determines whether the cluster is visible to all IAM users of the AWS account associated with the cluster. Only the IAM user who created the cluster or the AWS account root user can call this action. The default value, true
, indicates that all IAM users in the AWS account can perform cluster actions if they have the proper IAM policy permissions. If set to false
, only the IAM user that created the cluster can perform actions. This action works on running clusters. You can override the default true
setting when you create a cluster by using the VisibleToAllUsers
parameter with RunJobFlow
.
Sets the Cluster$VisibleToAllUsers value for an EMR cluster. When true
, IAM principals in the account can perform EMR cluster actions that their IAM policies allow. When false
, only the IAM principal that created the cluster and the account root user can perform EMR actions on the cluster, regardless of IAM permissions policies attached to other IAM principals.
This action works on running clusters. When you create a cluster, use the RunJobFlowInput$VisibleToAllUsers parameter.
For more information, see Understanding the EMR Cluster VisibleToAllUsers Setting in the Amazon EMR Management Guide.
" }, "StartNotebookExecution":{ "name":"StartNotebookExecution", @@ -926,7 +954,7 @@ "documentation":"The Amazon Resource Name that created or last modified the configuration.
" } }, - "documentation":"Properties that describe the AWS principal that created the BlockPublicAccessConfiguration
using the PutBlockPublicAccessConfiguration
action as well as the date and time that the configuration was created. Each time a configuration for block public access is updated, Amazon EMR updates this metadata.
Properties that describe the Amazon Web Services principal that created the BlockPublicAccessConfiguration
using the PutBlockPublicAccessConfiguration
action as well as the date and time that the configuration was created. Each time a configuration for block public access is updated, Amazon EMR updates this metadata.
The AWS KMS customer master key (CMK) used for encrypting log files. This attribute is only available with EMR version 5.30.0 and later, excluding EMR 6.0.0.
" + "documentation":"The KMS key used for encrypting log files. This attribute is only available with EMR version 5.30.0 and later, excluding EMR 6.0.0.
" }, "RequestedAmiVersion":{ "shape":"String", @@ -1128,7 +1156,7 @@ }, "VisibleToAllUsers":{ "shape":"Boolean", - "documentation":"Indicates whether the cluster is visible to all IAM users of the AWS account associated with the cluster. The default value, true
, indicates that all IAM users in the AWS account can perform cluster actions if they have the proper IAM policy permissions. If this value is false
, only the IAM user that created the cluster can perform actions. This value can be changed on a running cluster by using the SetVisibleToAllUsers action. You can override the default value of true
when you create a cluster by using the VisibleToAllUsers
parameter of the RunJobFlow
action.
Indicates whether the cluster is visible to IAM principals in the account associated with the cluster. When true
, IAM principals in the account can perform EMR cluster actions on the cluster that their IAM policies allow. When false
, only the IAM principal that created the cluster and the account root user can perform EMR actions, regardless of IAM permissions policies attached to other IAM principals.
The default value is false
if a value is not provided when creating a cluster using the EMR API RunJobFlow command or the CLI create-cluster command. The default value is true
when a cluster is created using the Management Console. IAM principals that are allowed to perform actions on the cluster can use the SetVisibleToAllUsers action to change the value on a running cluster. For more information, see Understanding the EMR Cluster VisibleToAllUsers Setting in the Amazon EMR Management Guide.
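Reviewer aid: the default described above depends on how the cluster was created (API/CLI versus Management Console). A minimal sketch of that rule, using hypothetical helper and argument names rather than anything from the SDK:

```python
# Sketch of the VisibleToAllUsers default documented above: false when no value
# is provided via the RunJobFlow API or the CLI create-cluster command, true
# when the cluster is created from the Management Console. The function name
# and the 'created_via' labels are illustrative assumptions, not SDK code.

def default_visible_to_all_users(created_via, explicit_value=None):
    """created_via is one of 'api', 'cli', or 'console' (illustrative names)."""
    if explicit_value is not None:
        # An explicitly supplied value always wins over the default.
        return explicit_value
    return created_via == "console"
```

Per the documentation, principals allowed to act on the cluster can later flip the effective value with SetVisibleToAllUsers.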
The IAM role that will be assumed by the Amazon EMR service to access AWS resources on your behalf.
" + "documentation":"The IAM role that will be assumed by the Amazon EMR service to access Amazon Web Services resources on your behalf.
" }, "NormalizedInstanceHours":{ "shape":"Integer", @@ -1477,7 +1505,7 @@ }, "ServiceRole":{ "shape":"XmlString", - "documentation":"The IAM role that will be assumed by the Amazon EMR Studio. The service role provides a way for Amazon EMR Studio to interoperate with other AWS services.
" + "documentation":"The IAM role that will be assumed by the Amazon EMR Studio. The service role provides a way for Amazon EMR Studio to interoperate with other Amazon Web Services services.
" }, "UserRole":{ "shape":"XmlString", @@ -1528,11 +1556,11 @@ }, "IdentityId":{ "shape":"XmlStringMaxLen256", - "documentation":"The globally unique identifier (GUID) of the user or group from the AWS SSO Identity Store. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The globally unique identifier (GUID) of the user or group from the Amazon Web Services SSO Identity Store. For more information, see UserId and GroupId in the Amazon Web Services SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group. For more information, see UserName and DisplayName in the Amazon Web Services SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The Amazon Resource Name (ARN) for the session policy that will be applied to the user or group. Session policies refine Studio user permissions without the need to use multiple IAM user roles.
" + "documentation":"The Amazon Resource Name (ARN) for the session policy that will be applied to the user or group. You should specify the ARN for the session policy that you want to apply, not the ARN of your user role. For more information, see Create an EMR Studio User Role with Session Policies.
" } } }, @@ -1583,11 +1611,11 @@ }, "IdentityId":{ "shape":"XmlStringMaxLen256", - "documentation":"The globally unique identifier (GUID) of the user or group to remove from the Amazon EMR Studio. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The globally unique identifier (GUID) of the user or group to remove from the Amazon EMR Studio. For more information, see UserId and GroupId in the Amazon Web Services SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user name or group to remove from the Amazon EMR Studio. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user name or group to remove from the Amazon EMR Studio. For more information, see UserName and DisplayName in the Amazon Web Services SSO Store API Reference. Either IdentityName
or IdentityId
must be specified.
The target release label to be described.
" + }, + "NextToken":{ + "shape":"String", + "documentation":"The pagination token. Reserved for future use. Currently set to null.
" + }, + "MaxResults":{ + "shape":"MaxResultsNumber", + "documentation":"Reserved for future use. Currently set to null.
" + } + } + }, + "DescribeReleaseLabelOutput":{ + "type":"structure", + "members":{ + "ReleaseLabel":{ + "shape":"String", + "documentation":"The target release label described in the response.
" + }, + "Applications":{ + "shape":"SimplifiedApplicationList", + "documentation":"The list of applications available for the target release label. Name
is the name of the application. Version
is the concise version of the application.
The pagination token. Reserved for future use. Currently set to null.
" + } + } + }, "DescribeSecurityConfigurationInput":{ "type":"structure", "required":["Name"], @@ -1933,7 +1995,7 @@ }, "BlockPublicAccessConfigurationMetadata":{ "shape":"BlockPublicAccessConfigurationMetadata", - "documentation":"Properties that describe the AWS principal that created the BlockPublicAccessConfiguration
using the PutBlockPublicAccessConfiguration
action as well as the date and time that the configuration was created. Each time a configuration for block public access is updated, Amazon EMR updates this metadata.
Properties that describe the Amazon Web Services principal that created the BlockPublicAccessConfiguration
using the PutBlockPublicAccessConfiguration
action as well as the date and time that the configuration was created. Each time a configuration for block public access is updated, Amazon EMR updates this metadata.
The globally unique identifier (GUID) of the user or group. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The globally unique identifier (GUID) of the user or group. For more information, see UserId and GroupId in the Amazon Web Services SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to fetch. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to fetch. For more information, see UserName and DisplayName in the Amazon Web Services SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The list of EBS volumes that are attached to this instance.
" + "documentation":"The list of Amazon EBS volumes that are attached to this instance.
" } }, "documentation":"Represents an EC2 instance provisioned as part of cluster.
" @@ -2140,7 +2202,7 @@ }, "InstanceTypeSpecifications":{ "shape":"InstanceTypeSpecificationList", - "documentation":"The specification for the instance types that comprise an instance fleet. Up to five unique instance specifications may be defined for each instance fleet.
" + "documentation":"An array of specifications for the instance types that comprise an instance fleet.
" }, "LaunchSpecifications":{ "shape":"InstanceFleetProvisioningSpecifications", @@ -2342,7 +2404,7 @@ }, "Configurations":{ "shape":"ConfigurationList", - "documentation":"Amazon EMR releases 4.x or later.
The list of configurations supplied for an EMR cluster instance group. You can specify a separate configuration for each instance group (master, core, and task).
" + "documentation":"Amazon EMR releases 4.x or later.
The list of configurations supplied for an Amazon EMR cluster instance group. You can specify a separate configuration for each instance group (master, core, and task).
" }, "ConfigurationsVersion":{ "shape":"Long", @@ -2761,14 +2823,14 @@ }, "EbsConfiguration":{ "shape":"EbsConfiguration", - "documentation":"The configuration of Amazon Elastic Block Storage (Amazon EBS) attached to each instance as defined by InstanceType
.
The configuration of Amazon Elastic Block Store (Amazon EBS) attached to each instance as defined by InstanceType
.
A configuration classification that applies when provisioning cluster instances, which can include configurations for applications and software that run on the cluster.
" } }, - "documentation":"An instance type configuration for each instance type in an instance fleet, which determines the EC2 instances Amazon EMR attempts to provision to fulfill On-Demand and Spot target capacities. There can be a maximum of five instance type configurations in a fleet.
The instance fleet configuration is available only in Amazon EMR versions 4.8.0 and later, excluding 5.0.x versions.
An instance type configuration for each instance type in an instance fleet, which determines the EC2 instances Amazon EMR attempts to provision to fulfill On-Demand and Spot target capacities. When you use an allocation strategy, you can include a maximum of 30 instance type configurations for a fleet. For more information about how to use an allocation strategy, see Configure Instance Fleets. Without an allocation strategy, you may specify a maximum of five instance type configurations for a fleet.
The instance fleet configuration is available only in Amazon EMR versions 4.8.0 and later, excluding 5.0.x versions.
The configuration of Amazon Elastic Block Storage (Amazon EBS) attached to each instance as defined by InstanceType
.
The configuration of Amazon Elastic Block Store (Amazon EBS) attached to each instance as defined by InstanceType
.
The AWS KMS customer master key (CMK) used for encrypting log files. This attribute is only available with EMR version 5.30.0 and later, excluding EMR 6.0.0.
" + "documentation":"The KMS key used for encrypting log files. This attribute is only available with EMR version 5.30.0 and later, excluding EMR 6.0.0.
" }, "AmiVersion":{ "shape":"XmlStringMaxLen256", @@ -2898,7 +2960,7 @@ }, "VisibleToAllUsers":{ "shape":"Boolean", - "documentation":"Indicates whether the cluster is visible to all IAM users of the AWS account associated with the cluster. The default value, true
, indicates that all IAM users in the AWS account can perform cluster actions if they have the proper IAM policy permissions. If this value is false
, only the IAM user that created the cluster can perform actions. This value can be changed on a running cluster by using the SetVisibleToAllUsers action. You can override the default value of true
when you create a cluster by using the VisibleToAllUsers
parameter of the RunJobFlow
action.
Indicates whether the cluster is visible to IAM principals in the account associated with the cluster. When true
, IAM principals in the account can perform EMR cluster actions that their IAM policies allow. When false
, only the IAM principal that created the cluster and the account root user can perform EMR actions, regardless of IAM permissions policies attached to other IAM principals.
The default value is false
if a value is not provided when creating a cluster using the EMR API RunJobFlow command or the CLI create-cluster command. The default value is true
when a cluster is created using the Management Console. IAM principals that are authorized to perform actions on the cluster can use the SetVisibleToAllUsers action to change the value on a running cluster. For more information, see Understanding the EMR Cluster VisibleToAllUsers Setting in the Amazon EMR Management Guide.
The IAM role that is assumed by the Amazon EMR service to access AWS resources on your behalf.
" + "documentation":"The IAM role that is assumed by the Amazon EMR service to access Amazon Web Services resources on your behalf.
" }, "AutoScalingRole":{ "shape":"XmlString", @@ -3008,7 +3070,7 @@ }, "KeepJobFlowAliveWhenNoSteps":{ "shape":"Boolean", - "documentation":"Specifies whether the cluster should remain available after completing all steps.
" + "documentation":"Specifies whether the cluster should remain available after completing all steps. Defaults to true
. For more information about configuring cluster termination, see Control Cluster Termination in the EMR Management Guide.
The identifier of the Amazon EC2 security group for the master node.
" + "documentation":"The identifier of the Amazon EC2 security group for the master node. If you specify EmrManagedMasterSecurityGroup
, you must also specify EmrManagedSlaveSecurityGroup
.
The identifier of the Amazon EC2 security group for the core and task nodes.
" + "documentation":"The identifier of the Amazon EC2 security group for the core and task nodes. If you specify EmrManagedSlaveSecurityGroup
, you must also specify EmrManagedMasterSecurityGroup
.
The cluster state filters to apply when listing clusters.
" + "documentation":"The cluster state filters to apply when listing clusters. Clusters that change state while this action runs may be not be returned as expected in the list of clusters.
" }, "Marker":{ "shape":"Marker", @@ -3368,6 +3430,36 @@ } } }, + "ListReleaseLabelsInput":{ + "type":"structure", + "members":{ + "Filters":{ + "shape":"ReleaseLabelFilter", + "documentation":"Filters the results of the request. Prefix
specifies the prefix of release labels to return. Application
specifies the application (with/without version) of release labels to return.
Specifies the next page of results. If NextToken
is not specified, which is usually the case for the first request of ListReleaseLabels, the first page of results is determined by other filtering parameters or by the latest version. The ListReleaseLabels
request fails if the identity (AWS AccountID) and all filtering parameters are different from the original request, or if the NextToken
is expired or tampered with.
Defines the maximum number of release labels to return in a single response. The default is 100
.
The returned release labels.
" + }, + "NextToken":{ + "shape":"String", + "documentation":"Used to paginate the next page of results if specified in the next ListReleaseLabels
request.
The pagination token that indicates the next set of results to retrieve.
" + "documentation":"The maximum number of steps that a single ListSteps
action returns is 50. To return a longer list of steps, use multiple ListSteps
actions along with the Marker
parameter, which is a pagination token that indicates the next set of results to retrieve.
This input determines which steps to list.
" @@ -3422,7 +3514,7 @@ }, "Marker":{ "shape":"Marker", - "documentation":"The pagination token that indicates the next set of results to retrieve.
" + "documentation":"The maximum number of steps that a single ListSteps
action returns is 50. To return a longer list of steps, use multiple ListSteps
actions along with the Marker
parameter, which is a pagination token that indicates the next set of results to retrieve.
This output contains the list of steps returned in reverse order. This means that the last step is the first element in the list.
" @@ -3498,6 +3590,11 @@ "SPOT" ] }, + "MaxResultsNumber":{ + "type":"integer", + "max":100, + "min":1 + }, "MetricDimension":{ "type":"structure", "members":{ @@ -3526,7 +3623,7 @@ }, "StepConcurrencyLevel":{ "shape":"Integer", - "documentation":"The number of steps that can be executed concurrently. You can specify a minimum of 1 step and a maximum of 256 steps.
" + "documentation":"The number of steps that can be executed concurrently. You can specify a minimum of 1 step and a maximum of 256 steps. We recommend that you do not change this parameter while steps are running or the ActionOnFailure
setting may not behave as expected. For more information, see Step$ActionOnFailure.
The unique identifier of the instance fleet.
" + "documentation":"The configuration parameters of the instance fleet.
" } } }, @@ -3679,7 +3776,7 @@ "documentation":"The timestamp when notebook execution started.
" } }, - "documentation":"" + "documentation":"Details for a notebook execution. The details include information such as the unique ID and status of the notebook execution.
" }, "NotebookExecutionSummaryList":{ "type":"list", @@ -3690,7 +3787,7 @@ "members":{ "UsageStrategy":{ "shape":"OnDemandCapacityReservationUsageStrategy", - "documentation":"Indicates whether to use unused Capacity Reservations for fulfilling On-Demand capacity.
If you specify use-capacity-reservations-first
, the fleet uses unused Capacity Reservations to fulfill On-Demand capacity up to the target On-Demand capacity. If multiple instance pools have unused Capacity Reservations, the On-Demand allocation strategy (lowest-price
) is applied. If the number of unused Capacity Reservations is less than the On-Demand target capacity, the remaining On-Demand target capacity is launched according to the On-Demand allocation strategy (lowest-price
).
If you do not specify a value, the fleet fulfils the On-Demand capacity according to the chosen On-Demand allocation strategy.
" + "documentation":"Indicates whether to use unused Capacity Reservations for fulfilling On-Demand capacity.
If you specify use-capacity-reservations-first
, the fleet uses unused Capacity Reservations to fulfill On-Demand capacity up to the target On-Demand capacity. If multiple instance pools have unused Capacity Reservations, the On-Demand allocation strategy (lowest-price
) is applied. If the number of unused Capacity Reservations is less than the On-Demand target capacity, the remaining On-Demand target capacity is launched according to the On-Demand allocation strategy (lowest-price
).
If you do not specify a value, the fleet fulfills the On-Demand capacity according to the chosen On-Demand allocation strategy.
" }, "CapacityReservationPreference":{ "shape":"OnDemandCapacityReservationPreference", @@ -3884,6 +3981,20 @@ "members":{ } }, + "ReleaseLabelFilter":{ + "type":"structure", + "members":{ + "Prefix":{ + "shape":"String", + "documentation":"Optional release label version prefix filter. For example, emr-5
.
Optional release label application filter. For example, spark@2.1.0
.
The release label filters by application or version prefix.
" + }, "RemoveAutoScalingPolicyInput":{ "type":"structure", "required":[ @@ -3970,7 +4081,7 @@ }, "LogEncryptionKmsKeyId":{ "shape":"XmlString", - "documentation":"The AWS KMS customer master key (CMK) used for encrypting log files. If a value is not provided, the logs remain encrypted by AES-256. This attribute is only available with Amazon EMR version 5.30.0 and later, excluding Amazon EMR 6.0.0.
" + "documentation":"The KMS key used for encrypting log files. If a value is not provided, the logs remain encrypted by AES-256. This attribute is only available with Amazon EMR version 5.30.0 and later, excluding Amazon EMR 6.0.0.
" }, "AdditionalInfo":{ "shape":"XmlString", @@ -4014,7 +4125,7 @@ }, "VisibleToAllUsers":{ "shape":"Boolean", - "documentation":"A value of true
indicates that all IAM users in the AWS account can perform cluster actions if they have the proper IAM policy permissions. This is the default. A value of false
indicates that only the IAM user who created the cluster can perform actions.
Set this value to true
so that IAM principals in the account associated with the cluster can perform EMR actions on the cluster that their IAM policies allow. This value defaults to false
for clusters created using the EMR API or the CLI create-cluster command.
When set to false
, only the IAM principal that created the cluster and the account root user can perform EMR actions for the cluster, regardless of the IAM permissions policies attached to other IAM principals. For more information, see Understanding the EMR Cluster VisibleToAllUsers Setting in the Amazon EMR Management Guide.
The IAM role that will be assumed by the Amazon EMR service to access AWS resources on your behalf.
" + "documentation":"The IAM role that will be assumed by the Amazon EMR service to access Amazon Web Services resources on your behalf.
" }, "Tags":{ "shape":"TagList", @@ -4173,7 +4284,7 @@ "members":{ "Path":{ "shape":"XmlString", - "documentation":"Location of the script to run during a bootstrap action. Can be either a location in Amazon S3 or on a local file system.
" + "documentation":"Location in Amazon S3 of the script to run during a bootstrap action.
" }, "Args":{ "shape":"XmlStringList", @@ -4217,7 +4328,7 @@ }, "IdentityName":{ "shape":"XmlStringMaxLen256", - "documentation":"The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference.
" + "documentation":"The name of the user or group. For more information, see UserName and DisplayName in the Amazon Web Services SSO Identity Store API Reference.
" }, "IdentityType":{ "shape":"IdentityType", @@ -4247,11 +4358,11 @@ }, "IdentityId":{ "shape":"XmlStringMaxLen256", - "documentation":"The globally unique identifier (GUID) of the user or group from the AWS SSO Identity Store.
" + "documentation":"The globally unique identifier (GUID) of the user or group from the Amazon Web Services SSO Identity Store.
" }, "IdentityName":{ "shape":"XmlStringMaxLen256", - "documentation":"The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference.
" + "documentation":"The name of the user or group. For more information, see UserName and DisplayName in the Amazon Web Services SSO Identity Store API Reference.
" }, "IdentityType":{ "shape":"IdentityType", @@ -4303,7 +4414,7 @@ }, "VisibleToAllUsers":{ "shape":"Boolean", - "documentation":"A value of true
indicates that all IAM users in the AWS account can perform cluster actions if they have the proper IAM policy permissions. This is the default. A value of false
indicates that only the IAM user who created the cluster can perform actions.
A value of true
indicates that an IAM principal in the account can perform EMR actions on the cluster that the IAM policies attached to the principal allow. A value of false
indicates that only the IAM principal that created the cluster and the Amazon Web Services root user can perform EMR actions on the cluster.
The input to the SetVisibleToAllUsers action.
" @@ -4341,6 +4452,24 @@ }, "documentation":"An automatic scaling configuration, which describes how the policy adds or removes instances, the cooldown period, and the number of EC2 instances that will be added each time the CloudWatch metric alarm condition is satisfied.
" }, + "SimplifiedApplication":{ + "type":"structure", + "members":{ + "Name":{ + "shape":"String", + "documentation":"The returned release label application name. For example, hadoop
.
The returned release label application version. For example, 3.2.1
.
The returned release label application names or versions.
" + }, + "SimplifiedApplicationList":{ + "type":"list", + "member":{"shape":"SimplifiedApplication"} + }, "SpotProvisioningAllocationStrategy":{ "type":"string", "enum":["capacity-optimized"] @@ -4457,7 +4586,7 @@ }, "ActionOnFailure":{ "shape":"ActionOnFailure", - "documentation":"The action to take when the cluster step fails. Possible values are TERMINATE_CLUSTER, CANCEL_AND_WAIT, and CONTINUE. TERMINATE_JOB_FLOW is provided for backward compatibility. We recommend using TERMINATE_CLUSTER instead.
" + "documentation":"The action to take when the cluster step fails. Possible values are TERMINATE_CLUSTER
, CANCEL_AND_WAIT
, and CONTINUE
. TERMINATE_JOB_FLOW
is provided for backward compatibility. We recommend using TERMINATE_CLUSTER
instead.
If a cluster's StepConcurrencyLevel
is greater than 1
, do not use AddJobFlowSteps
to submit a step with this parameter set to CANCEL_AND_WAIT
or TERMINATE_CLUSTER
. The step is not submitted and the action fails with a message that the ActionOnFailure
setting is not valid.
If you change a cluster's StepConcurrencyLevel
to be greater than 1 while a step is running, the ActionOnFailure
parameter may not behave as you expect. In this case, for a step that fails with this parameter set to CANCEL_AND_WAIT
, pending steps and the running step are not canceled; for a step that fails with this parameter set to TERMINATE_CLUSTER
, the cluster does not terminate.
The action to take when the cluster step fails. Possible values are TERMINATE_CLUSTER, CANCEL_AND_WAIT, and CONTINUE. TERMINATE_JOB_FLOW is provided for backward compatibility. We recommend using TERMINATE_CLUSTER instead.
" + "documentation":"The action to take when the step fails. Use one of the following values:
TERMINATE_CLUSTER
- Shuts down the cluster.
CANCEL_AND_WAIT
- Cancels any pending steps and returns the cluster to the WAITING
state.
CONTINUE
- Continues to the next step in the queue.
TERMINATE_JOB_FLOW
- Shuts down the cluster. TERMINATE_JOB_FLOW
is provided for backward compatibility. We recommend using TERMINATE_CLUSTER
instead.
If a cluster's StepConcurrencyLevel
is greater than 1
, do not use AddJobFlowSteps
to submit a step with this parameter set to CANCEL_AND_WAIT
or TERMINATE_CLUSTER
. The step is not submitted and the action fails with a message that the ActionOnFailure
setting is not valid.
If you change a cluster's StepConcurrencyLevel
to be greater than 1 while a step is running, the ActionOnFailure
parameter may not behave as you expect. In this case, for a step that fails with this parameter set to CANCEL_AND_WAIT
, pending steps and the running step are not canceled; for a step that fails with this parameter set to TERMINATE_CLUSTER
, the cluster does not terminate.
The JAR file used for the step.
" } }, - "documentation":"Specification of a cluster (job flow) step.
" + "documentation":"Specification for a cluster (job flow) step.
" }, "StepConfigList":{ "type":"list", @@ -4641,7 +4770,7 @@ }, "ActionOnFailure":{ "shape":"ActionOnFailure", - "documentation":"The action to take when the cluster step fails. Possible values are TERMINATE_CLUSTER, CANCEL_AND_WAIT, and CONTINUE. TERMINATE_JOB_FLOW is available for backward compatibility. We recommend using TERMINATE_CLUSTER instead.
" + "documentation":"The action to take when the cluster step fails. Possible values are TERMINATE_CLUSTER, CANCEL_AND_WAIT, and CONTINUE. TERMINATE_JOB_FLOW is available for backward compatibility.
" }, "Status":{ "shape":"StepStatus", @@ -4915,11 +5044,11 @@ }, "IdentityId":{ "shape":"XmlStringMaxLen256", - "documentation":"The globally unique identifier (GUID) of the user or group. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The globally unique identifier (GUID) of the user or group. For more information, see UserId and GroupId in the Amazon Web Services SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to update. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to update. For more information, see UserName and DisplayName in the Amazon Web Services SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
Amazon EMR is a web service that makes it easier to process large amounts of data efficiently. Amazon EMR uses Hadoop processing combined with several AWS services to do tasks such as web indexing, data mining, log file analysis, machine learning, scientific simulation, and data warehouse management.
" + "documentation":"Amazon EMR is a web service that makes it easier to process large amounts of data efficiently. Amazon EMR uses Hadoop processing combined with several Amazon Web Services services to do tasks such as web indexing, data mining, log file analysis, machine learning, scientific simulation, and data warehouse management.
" } From 42f04f5c53a3a09d940818b22da7545c844f2cf3 Mon Sep 17 00:00:00 2001 From: AWS <> Date: Wed, 21 Jul 2021 18:11:45 +0000 Subject: [PATCH 13/13] Release 2.17.2. Updated CHANGELOG.md, README.md and all pom.xml. --- .changes/2.17.2.json | 60 +++++++++++++++++++ .../feature-AWSCodeBuild-cb56e95.json | 6 -- ...WSIdentityandAccessManagement-ad451d8.json | 6 -- ...ture-AWSKendraFrontendService-79e6338.json | 6 -- .../feature-AWSLambda-7a8ff7e.json | 6 -- .../feature-AWSProton-757f399.json | 6 -- ...eature-AmazonElasticMapReduce-50b796c.json | 6 -- .../feature-AmazonPersonalize-8f23858.json | 6 -- ...azonRelationalDatabaseService-8be6be5.json | 6 -- .../feature-ElasticLoadBalancing-9212956.json | 6 -- CHANGELOG.md | 37 ++++++++++++ README.md | 8 +-- archetypes/archetype-app-quickstart/pom.xml | 2 +- archetypes/archetype-lambda/pom.xml | 2 +- archetypes/archetype-tools/pom.xml | 2 +- archetypes/pom.xml | 2 +- aws-sdk-java/pom.xml | 2 +- bom-internal/pom.xml | 2 +- bom/pom.xml | 2 +- bundle/pom.xml | 2 +- codegen-lite-maven-plugin/pom.xml | 2 +- codegen-lite/pom.xml | 2 +- codegen-maven-plugin/pom.xml | 2 +- codegen/pom.xml | 2 +- core/annotations/pom.xml | 2 +- core/arns/pom.xml | 2 +- core/auth/pom.xml | 2 +- core/aws-core/pom.xml | 2 +- core/json-utils/pom.xml | 2 +- core/metrics-spi/pom.xml | 2 +- core/pom.xml | 2 +- core/profiles/pom.xml | 2 +- core/protocols/aws-cbor-protocol/pom.xml | 2 +- core/protocols/aws-json-protocol/pom.xml | 2 +- core/protocols/aws-query-protocol/pom.xml | 2 +- core/protocols/aws-xml-protocol/pom.xml | 2 +- core/protocols/pom.xml | 2 +- core/protocols/protocol-core/pom.xml | 2 +- core/regions/pom.xml | 2 +- core/sdk-core/pom.xml | 2 +- http-client-spi/pom.xml | 2 +- http-clients/apache-client/pom.xml | 2 +- http-clients/aws-crt-client/pom.xml | 2 +- http-clients/netty-nio-client/pom.xml | 2 +- http-clients/pom.xml | 2 +- http-clients/url-connection-client/pom.xml | 2 +- .../cloudwatch-metric-publisher/pom.xml | 2 +- 
metric-publishers/pom.xml | 2 +- pom.xml | 2 +- release-scripts/pom.xml | 2 +- services-custom/dynamodb-enhanced/pom.xml | 2 +- services-custom/pom.xml | 2 +- services/accessanalyzer/pom.xml | 2 +- services/acm/pom.xml | 2 +- services/acmpca/pom.xml | 2 +- services/alexaforbusiness/pom.xml | 2 +- services/amp/pom.xml | 2 +- services/amplify/pom.xml | 2 +- services/amplifybackend/pom.xml | 2 +- services/apigateway/pom.xml | 2 +- services/apigatewaymanagementapi/pom.xml | 2 +- services/apigatewayv2/pom.xml | 2 +- services/appconfig/pom.xml | 2 +- services/appflow/pom.xml | 2 +- services/appintegrations/pom.xml | 2 +- services/applicationautoscaling/pom.xml | 2 +- services/applicationcostprofiler/pom.xml | 2 +- services/applicationdiscovery/pom.xml | 2 +- services/applicationinsights/pom.xml | 2 +- services/appmesh/pom.xml | 2 +- services/apprunner/pom.xml | 2 +- services/appstream/pom.xml | 2 +- services/appsync/pom.xml | 2 +- services/athena/pom.xml | 2 +- services/auditmanager/pom.xml | 2 +- services/autoscaling/pom.xml | 2 +- services/autoscalingplans/pom.xml | 2 +- services/backup/pom.xml | 2 +- services/batch/pom.xml | 2 +- services/braket/pom.xml | 2 +- services/budgets/pom.xml | 2 +- services/chime/pom.xml | 2 +- services/cloud9/pom.xml | 2 +- services/clouddirectory/pom.xml | 2 +- services/cloudformation/pom.xml | 2 +- services/cloudfront/pom.xml | 2 +- services/cloudhsm/pom.xml | 2 +- services/cloudhsmv2/pom.xml | 2 +- services/cloudsearch/pom.xml | 2 +- services/cloudsearchdomain/pom.xml | 2 +- services/cloudtrail/pom.xml | 2 +- services/cloudwatch/pom.xml | 2 +- services/cloudwatchevents/pom.xml | 2 +- services/cloudwatchlogs/pom.xml | 2 +- services/codeartifact/pom.xml | 2 +- services/codebuild/pom.xml | 2 +- services/codecommit/pom.xml | 2 +- services/codedeploy/pom.xml | 2 +- services/codeguruprofiler/pom.xml | 2 +- services/codegurureviewer/pom.xml | 2 +- services/codepipeline/pom.xml | 2 +- services/codestar/pom.xml | 2 +- 
services/codestarconnections/pom.xml | 2 +- services/codestarnotifications/pom.xml | 2 +- services/cognitoidentity/pom.xml | 2 +- services/cognitoidentityprovider/pom.xml | 2 +- services/cognitosync/pom.xml | 2 +- services/comprehend/pom.xml | 2 +- services/comprehendmedical/pom.xml | 2 +- services/computeoptimizer/pom.xml | 2 +- services/config/pom.xml | 2 +- services/connect/pom.xml | 2 +- services/connectcontactlens/pom.xml | 2 +- services/connectparticipant/pom.xml | 2 +- services/costandusagereport/pom.xml | 2 +- services/costexplorer/pom.xml | 2 +- services/customerprofiles/pom.xml | 2 +- services/databasemigration/pom.xml | 2 +- services/databrew/pom.xml | 2 +- services/dataexchange/pom.xml | 2 +- services/datapipeline/pom.xml | 2 +- services/datasync/pom.xml | 2 +- services/dax/pom.xml | 2 +- services/detective/pom.xml | 2 +- services/devicefarm/pom.xml | 2 +- services/devopsguru/pom.xml | 2 +- services/directconnect/pom.xml | 2 +- services/directory/pom.xml | 2 +- services/dlm/pom.xml | 2 +- services/docdb/pom.xml | 2 +- services/dynamodb/pom.xml | 2 +- services/ebs/pom.xml | 2 +- services/ec2/pom.xml | 2 +- services/ec2instanceconnect/pom.xml | 2 +- services/ecr/pom.xml | 2 +- services/ecrpublic/pom.xml | 2 +- services/ecs/pom.xml | 2 +- services/efs/pom.xml | 2 +- services/eks/pom.xml | 2 +- services/elasticache/pom.xml | 2 +- services/elasticbeanstalk/pom.xml | 2 +- services/elasticinference/pom.xml | 2 +- services/elasticloadbalancing/pom.xml | 2 +- services/elasticloadbalancingv2/pom.xml | 2 +- services/elasticsearch/pom.xml | 2 +- services/elastictranscoder/pom.xml | 2 +- services/emr/pom.xml | 2 +- services/emrcontainers/pom.xml | 2 +- services/eventbridge/pom.xml | 2 +- services/finspace/pom.xml | 2 +- services/finspacedata/pom.xml | 2 +- services/firehose/pom.xml | 2 +- services/fis/pom.xml | 2 +- services/fms/pom.xml | 2 +- services/forecast/pom.xml | 2 +- services/forecastquery/pom.xml | 2 +- services/frauddetector/pom.xml | 2 +- 
services/fsx/pom.xml | 2 +- services/gamelift/pom.xml | 2 +- services/glacier/pom.xml | 2 +- services/globalaccelerator/pom.xml | 2 +- services/glue/pom.xml | 2 +- services/greengrass/pom.xml | 2 +- services/greengrassv2/pom.xml | 2 +- services/groundstation/pom.xml | 2 +- services/guardduty/pom.xml | 2 +- services/health/pom.xml | 2 +- services/healthlake/pom.xml | 2 +- services/honeycode/pom.xml | 2 +- services/iam/pom.xml | 2 +- services/identitystore/pom.xml | 2 +- services/imagebuilder/pom.xml | 2 +- services/inspector/pom.xml | 2 +- services/iot/pom.xml | 2 +- services/iot1clickdevices/pom.xml | 2 +- services/iot1clickprojects/pom.xml | 2 +- services/iotanalytics/pom.xml | 2 +- services/iotdataplane/pom.xml | 2 +- services/iotdeviceadvisor/pom.xml | 2 +- services/iotevents/pom.xml | 2 +- services/ioteventsdata/pom.xml | 2 +- services/iotfleethub/pom.xml | 2 +- services/iotjobsdataplane/pom.xml | 2 +- services/iotsecuretunneling/pom.xml | 2 +- services/iotsitewise/pom.xml | 2 +- services/iotthingsgraph/pom.xml | 2 +- services/iotwireless/pom.xml | 2 +- services/ivs/pom.xml | 2 +- services/kafka/pom.xml | 2 +- services/kendra/pom.xml | 2 +- services/kinesis/pom.xml | 2 +- services/kinesisanalytics/pom.xml | 2 +- services/kinesisanalyticsv2/pom.xml | 2 +- services/kinesisvideo/pom.xml | 2 +- services/kinesisvideoarchivedmedia/pom.xml | 2 +- services/kinesisvideomedia/pom.xml | 2 +- services/kinesisvideosignaling/pom.xml | 2 +- services/kms/pom.xml | 2 +- services/lakeformation/pom.xml | 2 +- services/lambda/pom.xml | 2 +- services/lexmodelbuilding/pom.xml | 2 +- services/lexmodelsv2/pom.xml | 2 +- services/lexruntime/pom.xml | 2 +- services/lexruntimev2/pom.xml | 2 +- services/licensemanager/pom.xml | 2 +- services/lightsail/pom.xml | 2 +- services/location/pom.xml | 2 +- services/lookoutequipment/pom.xml | 2 +- services/lookoutmetrics/pom.xml | 2 +- services/lookoutvision/pom.xml | 2 +- services/machinelearning/pom.xml | 2 +- services/macie/pom.xml | 2 +- 
services/macie2/pom.xml | 2 +- services/managedblockchain/pom.xml | 2 +- services/marketplacecatalog/pom.xml | 2 +- services/marketplacecommerceanalytics/pom.xml | 2 +- services/marketplaceentitlement/pom.xml | 2 +- services/marketplacemetering/pom.xml | 2 +- services/mediaconnect/pom.xml | 2 +- services/mediaconvert/pom.xml | 2 +- services/medialive/pom.xml | 2 +- services/mediapackage/pom.xml | 2 +- services/mediapackagevod/pom.xml | 2 +- services/mediastore/pom.xml | 2 +- services/mediastoredata/pom.xml | 2 +- services/mediatailor/pom.xml | 2 +- services/mgn/pom.xml | 2 +- services/migrationhub/pom.xml | 2 +- services/migrationhubconfig/pom.xml | 2 +- services/mobile/pom.xml | 2 +- services/mq/pom.xml | 2 +- services/mturk/pom.xml | 2 +- services/mwaa/pom.xml | 2 +- services/neptune/pom.xml | 2 +- services/networkfirewall/pom.xml | 2 +- services/networkmanager/pom.xml | 2 +- services/nimble/pom.xml | 2 +- services/opsworks/pom.xml | 2 +- services/opsworkscm/pom.xml | 2 +- services/organizations/pom.xml | 2 +- services/outposts/pom.xml | 2 +- services/personalize/pom.xml | 2 +- services/personalizeevents/pom.xml | 2 +- services/personalizeruntime/pom.xml | 2 +- services/pi/pom.xml | 2 +- services/pinpoint/pom.xml | 2 +- services/pinpointemail/pom.xml | 2 +- services/pinpointsmsvoice/pom.xml | 2 +- services/polly/pom.xml | 2 +- services/pom.xml | 2 +- services/pricing/pom.xml | 2 +- services/proton/pom.xml | 2 +- services/qldb/pom.xml | 2 +- services/qldbsession/pom.xml | 2 +- services/quicksight/pom.xml | 2 +- services/ram/pom.xml | 2 +- services/rds/pom.xml | 2 +- services/rdsdata/pom.xml | 2 +- services/redshift/pom.xml | 2 +- services/redshiftdata/pom.xml | 2 +- services/rekognition/pom.xml | 2 +- services/resourcegroups/pom.xml | 2 +- services/resourcegroupstaggingapi/pom.xml | 2 +- services/robomaker/pom.xml | 2 +- services/route53/pom.xml | 2 +- services/route53domains/pom.xml | 2 +- services/route53resolver/pom.xml | 2 +- services/s3/pom.xml | 2 +- 
services/s3control/pom.xml | 2 +- services/s3outposts/pom.xml | 2 +- services/sagemaker/pom.xml | 2 +- services/sagemakera2iruntime/pom.xml | 2 +- services/sagemakeredge/pom.xml | 2 +- services/sagemakerfeaturestoreruntime/pom.xml | 2 +- services/sagemakerruntime/pom.xml | 2 +- services/savingsplans/pom.xml | 2 +- services/schemas/pom.xml | 2 +- services/secretsmanager/pom.xml | 2 +- services/securityhub/pom.xml | 2 +- .../serverlessapplicationrepository/pom.xml | 2 +- services/servicecatalog/pom.xml | 2 +- services/servicecatalogappregistry/pom.xml | 2 +- services/servicediscovery/pom.xml | 2 +- services/servicequotas/pom.xml | 2 +- services/ses/pom.xml | 2 +- services/sesv2/pom.xml | 2 +- services/sfn/pom.xml | 2 +- services/shield/pom.xml | 2 +- services/signer/pom.xml | 2 +- services/sms/pom.xml | 2 +- services/snowball/pom.xml | 2 +- services/sns/pom.xml | 2 +- services/sqs/pom.xml | 2 +- services/ssm/pom.xml | 2 +- services/ssmcontacts/pom.xml | 2 +- services/ssmincidents/pom.xml | 2 +- services/sso/pom.xml | 2 +- services/ssoadmin/pom.xml | 2 +- services/ssooidc/pom.xml | 2 +- services/storagegateway/pom.xml | 2 +- services/sts/pom.xml | 2 +- services/support/pom.xml | 2 +- services/swf/pom.xml | 2 +- services/synthetics/pom.xml | 2 +- services/textract/pom.xml | 2 +- services/timestreamquery/pom.xml | 2 +- services/timestreamwrite/pom.xml | 2 +- services/transcribe/pom.xml | 2 +- services/transcribestreaming/pom.xml | 2 +- services/transfer/pom.xml | 2 +- services/translate/pom.xml | 2 +- services/waf/pom.xml | 2 +- services/wafv2/pom.xml | 2 +- services/wellarchitected/pom.xml | 2 +- services/workdocs/pom.xml | 2 +- services/worklink/pom.xml | 2 +- services/workmail/pom.xml | 2 +- services/workmailmessageflow/pom.xml | 2 +- services/workspaces/pom.xml | 2 +- services/xray/pom.xml | 2 +- test/auth-sts-testing/pom.xml | 2 +- test/codegen-generated-classes-test/pom.xml | 2 +- test/http-client-tests/pom.xml | 2 +- test/module-path-tests/pom.xml | 2 +- 
test/protocol-tests-core/pom.xml | 2 +- test/protocol-tests/pom.xml | 2 +- test/sdk-benchmarks/pom.xml | 2 +- test/sdk-native-image-test/pom.xml | 2 +- test/service-test-utils/pom.xml | 2 +- test/stability-tests/pom.xml | 2 +- test/test-utils/pom.xml | 2 +- test/tests-coverage-reporting/pom.xml | 2 +- third-party/pom.xml | 2 +- third-party/third-party-jackson-core/pom.xml | 2 +- .../pom.xml | 2 +- utils/pom.xml | 2 +- 336 files changed, 425 insertions(+), 382 deletions(-) create mode 100644 .changes/2.17.2.json delete mode 100644 .changes/next-release/feature-AWSCodeBuild-cb56e95.json delete mode 100644 .changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json delete mode 100644 .changes/next-release/feature-AWSKendraFrontendService-79e6338.json delete mode 100644 .changes/next-release/feature-AWSLambda-7a8ff7e.json delete mode 100644 .changes/next-release/feature-AWSProton-757f399.json delete mode 100644 .changes/next-release/feature-AmazonElasticMapReduce-50b796c.json delete mode 100644 .changes/next-release/feature-AmazonPersonalize-8f23858.json delete mode 100644 .changes/next-release/feature-AmazonRelationalDatabaseService-8be6be5.json delete mode 100644 .changes/next-release/feature-ElasticLoadBalancing-9212956.json diff --git a/.changes/2.17.2.json b/.changes/2.17.2.json new file mode 100644 index 000000000000..c7b8a3c72d19 --- /dev/null +++ b/.changes/2.17.2.json @@ -0,0 +1,60 @@ +{ + "version": "2.17.2", + "date": "2021-07-21", + "entries": [ + { + "type": "feature", + "category": "AWS Identity and Access Management", + "contributor": "", + "description": "Documentation updates for AWS Identity and Access Management (IAM)." + }, + { + "type": "feature", + "category": "Amazon Elastic MapReduce", + "contributor": "", + "description": "EMR now supports new DescribeReleaseLabel and ListReleaseLabel APIs. They can provide Amazon EMR release label details. 
You can programmatically list available releases and applications for a specific Amazon EMR release label." + }, + { + "type": "feature", + "category": "AWS CodeBuild", + "contributor": "", + "description": "AWS CodeBuild now allows you to set the access permissions for build artifacts, project artifacts, and log files that are uploaded to an Amazon S3 bucket that is owned by another account." + }, + { + "type": "feature", + "category": "Elastic Load Balancing", + "contributor": "", + "description": "Adds support for enabling TLS protocol version and cipher suite headers to be sent to backend targets for Application Load Balancers." + }, + { + "type": "feature", + "category": "AWS Lambda", + "contributor": "", + "description": "New ResourceConflictException error code for PutFunctionEventInvokeConfig, UpdateFunctionEventInvokeConfig, and DeleteFunctionEventInvokeConfig operations." + }, + { + "type": "feature", + "category": "Amazon Relational Database Service", + "contributor": "", + "description": "Adds the OriginalSnapshotCreateTime field to the DBSnapshot response object. This field timestamps the underlying data of a snapshot and doesn't change when the snapshot is copied." + }, + { + "type": "feature", + "category": "Amazon Personalize", + "contributor": "", + "description": "My AWS Service (placeholder) - Making minProvisionedTPS an optional parameter when creating a campaign. If not provided, it defaults to 1." + }, + { + "type": "feature", + "category": "AWSKendraFrontendService", + "contributor": "", + "description": "Amazon Kendra now provides a data source connector for Amazon WorkDocs. 
For more information, see https://docs.aws.amazon.com/kendra/latest/dg/data-source-workdocs.html" + }, + { + "type": "feature", + "category": "AWS Proton", + "contributor": "", + "description": "Documentation updates for AWS Proton" + } + ] +} \ No newline at end of file diff --git a/.changes/next-release/feature-AWSCodeBuild-cb56e95.json b/.changes/next-release/feature-AWSCodeBuild-cb56e95.json deleted file mode 100644 index da8fc1388dc0..000000000000 --- a/.changes/next-release/feature-AWSCodeBuild-cb56e95.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS CodeBuild", - "contributor": "", - "description": "AWS CodeBuild now allows you to set the access permissions for build artifacts, project artifacts, and log files that are uploaded to an Amazon S3 bucket that is owned by another account." -} diff --git a/.changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json b/.changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json deleted file mode 100644 index 4abe1cca03fc..000000000000 --- a/.changes/next-release/feature-AWSIdentityandAccessManagement-ad451d8.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Identity and Access Management", - "contributor": "", - "description": "Documentation updates for AWS Identity and Access Management (IAM)." -} diff --git a/.changes/next-release/feature-AWSKendraFrontendService-79e6338.json b/.changes/next-release/feature-AWSKendraFrontendService-79e6338.json deleted file mode 100644 index a99d4572afe9..000000000000 --- a/.changes/next-release/feature-AWSKendraFrontendService-79e6338.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWSKendraFrontendService", - "contributor": "", - "description": "Amazon Kendra now provides a data source connector for Amazon WorkDocs. 
For more information, see https://docs.aws.amazon.com/kendra/latest/dg/data-source-workdocs.html" -} diff --git a/.changes/next-release/feature-AWSLambda-7a8ff7e.json b/.changes/next-release/feature-AWSLambda-7a8ff7e.json deleted file mode 100644 index d7d044d59705..000000000000 --- a/.changes/next-release/feature-AWSLambda-7a8ff7e.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Lambda", - "contributor": "", - "description": "New ResourceConflictException error code for PutFunctionEventInvokeConfig, UpdateFunctionEventInvokeConfig, and DeleteFunctionEventInvokeConfig operations." -} diff --git a/.changes/next-release/feature-AWSProton-757f399.json b/.changes/next-release/feature-AWSProton-757f399.json deleted file mode 100644 index 873e896e0749..000000000000 --- a/.changes/next-release/feature-AWSProton-757f399.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "AWS Proton", - "contributor": "", - "description": "Documentation updates for AWS Proton" -} diff --git a/.changes/next-release/feature-AmazonElasticMapReduce-50b796c.json b/.changes/next-release/feature-AmazonElasticMapReduce-50b796c.json deleted file mode 100644 index 5161dedd4099..000000000000 --- a/.changes/next-release/feature-AmazonElasticMapReduce-50b796c.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Elastic MapReduce", - "contributor": "", - "description": "EMR now supports new DescribeReleaseLabel and ListReleaseLabel APIs. They can provide Amazon EMR release label details. You can programmatically list available releases and applications for a specific Amazon EMR release label." 
-} diff --git a/.changes/next-release/feature-AmazonPersonalize-8f23858.json b/.changes/next-release/feature-AmazonPersonalize-8f23858.json deleted file mode 100644 index 0f66e62ce55c..000000000000 --- a/.changes/next-release/feature-AmazonPersonalize-8f23858.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Personalize", - "contributor": "", - "description": "My AWS Service (placeholder) - Making minProvisionedTPS an optional parameter when creating a campaign. If not provided, it defaults to 1." -} diff --git a/.changes/next-release/feature-AmazonRelationalDatabaseService-8be6be5.json b/.changes/next-release/feature-AmazonRelationalDatabaseService-8be6be5.json deleted file mode 100644 index 2f84f0b33fdb..000000000000 --- a/.changes/next-release/feature-AmazonRelationalDatabaseService-8be6be5.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Amazon Relational Database Service", - "contributor": "", - "description": "Adds the OriginalSnapshotCreateTime field to the DBSnapshot response object. This field timestamps the underlying data of a snapshot and doesn't change when the snapshot is copied." -} diff --git a/.changes/next-release/feature-ElasticLoadBalancing-9212956.json b/.changes/next-release/feature-ElasticLoadBalancing-9212956.json deleted file mode 100644 index bf373946155f..000000000000 --- a/.changes/next-release/feature-ElasticLoadBalancing-9212956.json +++ /dev/null @@ -1,6 +0,0 @@ -{ - "type": "feature", - "category": "Elastic Load Balancing", - "contributor": "", - "description": "Adds support for enabling TLS protocol version and cipher suite headers to be sent to backend targets for Application Load Balancers." 
-} diff --git a/CHANGELOG.md b/CHANGELOG.md index f3f6557094d1..8ffa80ad708b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,40 @@ +# __2.17.2__ __2021-07-21__ +## __AWS CodeBuild__ + - ### Features + - AWS CodeBuild now allows you to set the access permissions for build artifacts, project artifacts, and log files that are uploaded to an Amazon S3 bucket that is owned by another account. + +## __AWS Identity and Access Management__ + - ### Features + - Documentation updates for AWS Identity and Access Management (IAM). + +## __AWS Lambda__ + - ### Features + - New ResourceConflictException error code for PutFunctionEventInvokeConfig, UpdateFunctionEventInvokeConfig, and DeleteFunctionEventInvokeConfig operations. + +## __AWS Proton__ + - ### Features + - Documentation updates for AWS Proton + +## __AWSKendraFrontendService__ + - ### Features + - Amazon Kendra now provides a data source connector for Amazon WorkDocs. For more information, see https://docs.aws.amazon.com/kendra/latest/dg/data-source-workdocs.html + +## __Amazon Elastic MapReduce__ + - ### Features + - EMR now supports new DescribeReleaseLabel and ListReleaseLabel APIs. They can provide Amazon EMR release label details. You can programmatically list available releases and applications for a specific Amazon EMR release label. + +## __Amazon Personalize__ + - ### Features + - My AWS Service (placeholder) - Making minProvisionedTPS an optional parameter when creating a campaign. If not provided, it defaults to 1. + +## __Amazon Relational Database Service__ + - ### Features + - Adds the OriginalSnapshotCreateTime field to the DBSnapshot response object. This field timestamps the underlying data of a snapshot and doesn't change when the snapshot is copied. + +## __Elastic Load Balancing__ + - ### Features + - Adds support for enabling TLS protocol version and cipher suite headers to be sent to backend targets for Application Load Balancers. 
+ # __2.17.1__ __2021-07-20__ ## __AWS Compute Optimizer__ - ### Features diff --git a/README.md b/README.md index 5ce64d1f3d4b..ed7517bb00f7 100644 --- a/README.md +++ b/README.md @@ -52,7 +52,7 @@ To automatically manage module versions (currently all modules have the same ver
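The Amazon Elastic MapReduce entry in this release adds the `DescribeReleaseLabel` and `ListReleaseLabels` operations, whose responses use the `SimplifiedApplication` shape introduced in the model diff above. A minimal sketch with the v2 `EmrClient` — the region and credentials come from the default provider chain, and the release label shown is illustrative:

```java
import software.amazon.awssdk.services.emr.EmrClient;
import software.amazon.awssdk.services.emr.model.DescribeReleaseLabelRequest;
import software.amazon.awssdk.services.emr.model.DescribeReleaseLabelResponse;

public class ReleaseLabelSketch {
    public static void main(String[] args) {
        try (EmrClient emr = EmrClient.create()) {
            // List available EMR release labels (new ListReleaseLabels API).
            emr.listReleaseLabels(b -> b.maxResults(10))
               .releaseLabels()
               .forEach(System.out::println);

            // Describe the applications bundled with one release label,
            // returned as SimplifiedApplication name/version pairs.
            DescribeReleaseLabelResponse release = emr.describeReleaseLabel(
                    DescribeReleaseLabelRequest.builder()
                            .releaseLabel("emr-6.3.0") // illustrative label
                            .build());
            release.applications().forEach(app ->
                    System.out.println(app.name() + " " + app.version()));
        }
    }
}
```

Both calls require valid AWS credentials at runtime; only the request-building portion can be exercised offline.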