diff --git a/CHANGELOG.md b/CHANGELOG.md
index f451c497f5c..fe355b762d8 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,3 +1,50 @@
+Release v0.19.0 (2020-01-30)
+===
+
+Breaking Change
+---
+* `service`: Add generated service for wafregional and dynamodbstreams ([#463](https://github.com/aws/aws-sdk-go-v2/pull/463))
+  * Updates the wafregional and dynamodbstreams API clients to include all API operations and types that were previously shared with the waf and dynamodb API clients respectively. This update ensures that each API client includes all operations and types needed for that client, and shares no types with another client package.
+  * To migrate your applications to the updated wafregional and dynamodbstreams clients, update the package each impacted type is imported from to match the client the type is used with.
+* `aws`: Context has been added to EC2Metadata operations. ([#461](https://github.com/aws/aws-sdk-go-v2/pull/461))
+  * Also updates utilities that directly or indirectly depend on the EC2Metadata client. Signer utilities and credential providers now take a context.
+* `private/model`: Add utility for validating shape names for structs and enums for the service packages ([#471](https://github.com/aws/aws-sdk-go-v2/pull/471))
+  * Fixes a bug that allowed service package structs and enums to start with a non-alphabetic character.
+  * Fixes the incorrect enum types in the mediapackage service package, changing the enum types __AdTriggersElement and __PeriodTriggersElement to AdTriggersElement and PeriodTriggersElement respectively.
+* `aws`: Client, Metadata, and Request structures have been refactored to simplify the usage of resolved endpoints ([#473](https://github.com/aws/aws-sdk-go-v2/pull/473))
+  * The `aws.Client.Endpoint` struct member has been removed, and an `aws.Request.Endpoint` struct member of type `aws.Endpoint` has been added.
+  * The `aws.Client.Region` struct member has been removed.
+
+Services
+---
+* Synced the V2 SDK with the latest AWS service API definitions.
+
+SDK Features
+---
+* `aws`: `PartitionID` has been added to the `aws.Endpoint` structure, and is used by the endpoint resolver to indicate which AWS partition an endpoint was resolved for ([#473](https://github.com/aws/aws-sdk-go-v2/pull/473))
+* `aws/endpoints`: Updated resolvers to populate `PartitionID` for a resolved `aws.Endpoint` ([#473](https://github.com/aws/aws-sdk-go-v2/pull/473))
+* `service/s3`: Add support for Access Point resources
+  * Adds support for using Access Point resources with Amazon S3 API operation calls. An Access Point resource is identified by an Amazon Resource Name (ARN).
+  * To make operation calls to an S3 Access Point instead of an S3 bucket, provide the Access Point ARN string as the value of the Bucket parameter. You can create an Access Point for your bucket, and obtain its ARN, with the Amazon S3 Control API; you should avoid building the ARN directly.
+
+SDK Enhancements
+---
+* `internal/sdkio`: Adds RingBuffer data structure to the SDK ([#417](https://github.com/aws/aws-sdk-go-v2/pull/417))
+  * Adds an implementation of a RingBuffer data structure that acts as a revolving buffer of a predefined length. The RingBuffer implements the io.ReadWriter interface; see the sketch below.
+  * Adds unit tests verifying the behavior of the ring buffer.
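To illustrate the `internal/sdkio` entry above: a ring buffer is a fixed-capacity buffer whose writes wrap around and overwrite the oldest unread bytes, exposed here through `io.ReadWriter`. The following is a minimal, self-contained sketch of such a structure; it is illustrative only and is not the SDK's internal implementation:

```go
package main

import (
	"fmt"
	"io"
)

// ringBuffer is a fixed-capacity FIFO byte buffer. Writes that exceed the
// remaining capacity wrap around and overwrite the oldest unread bytes.
type ringBuffer struct {
	buf   []byte
	start int // index of the oldest unread byte
	size  int // number of unread bytes
}

// Compile-time check that ringBuffer satisfies io.ReadWriter.
var _ io.ReadWriter = (*ringBuffer)(nil)

func newRingBuffer(capacity int) *ringBuffer {
	return &ringBuffer{buf: make([]byte, capacity)}
}

// Write copies p into the buffer, overwriting the oldest bytes once full.
func (b *ringBuffer) Write(p []byte) (int, error) {
	for _, c := range p {
		b.buf[(b.start+b.size)%len(b.buf)] = c
		if b.size == len(b.buf) {
			b.start = (b.start + 1) % len(b.buf) // overwrote the oldest byte
		} else {
			b.size++
		}
	}
	return len(p), nil
}

// Read copies up to len(p) of the oldest unread bytes into p.
func (b *ringBuffer) Read(p []byte) (int, error) {
	n := 0
	for n < len(p) && b.size > 0 {
		p[n] = b.buf[b.start]
		b.start = (b.start + 1) % len(b.buf)
		b.size--
		n++
	}
	return n, nil
}

func main() {
	rb := newRingBuffer(4)
	rb.Write([]byte("abcdef")) // capacity is 4, so "ab" is overwritten
	out := make([]byte, 4)
	n, _ := rb.Read(out)
	fmt.Printf("%s\n", out[:n]) // prints "cdef"
}
```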
+* `aws/ec2metadata`: Adds support for the EC2Metadata client to use secure tokens provided by the IMDS ([#453](https://github.com/aws/aws-sdk-go-v2/pull/453))
+  * Modifies the EC2Metadata client to use request context within its operations ([#462](https://github.com/aws/aws-sdk-go-v2/pull/462))
+  * Reduces the default dialer timeout and response header timeout to help reduce latency for known issues with the EC2Metadata client running inside a container
+  * Modifies and adds tests to verify the behavior of the EC2Metadata client.
+* `service/dynamodb/dynamodbattribute`: Adds clarifying docs on dynamodbattribute.UnixTime ([#464](https://github.com/aws/aws-sdk-go-v2/pull/464))
+* `example/service/sts/assumeRole`: Added an STS assume role example ([#224](https://github.com/aws/aws-sdk-go-v2/pull/224))
+  * Fixes [#157](https://github.com/aws/aws-sdk-go-v2/issues/157) by adding an example of using Amazon STS assume role to retrieve credentials.
+
+SDK Bugs
+---
+* `service/dynamodb/dynamodbattribute`: Fixes a panic when decoding into a map keyed by a string type alias ([#465](https://github.com/aws/aws-sdk-go-v2/pull/465))
+  * Fixes [#410](https://github.com/aws/aws-sdk-go-v2/issues/410) by adding support for keys that are string aliases; see the sketch after the CHANGELOG_PENDING diff below.
+
 Release v0.18.0 (2019-12-12)
 ===
diff --git a/CHANGELOG_PENDING.md b/CHANGELOG_PENDING.md
index 366f1a9fac3..4edd1a7d1b5 100644
--- a/CHANGELOG_PENDING.md
+++ b/CHANGELOG_PENDING.md
@@ -1,42 +1,14 @@
 Breaking Change
 ---
-* `service`: Add generated service for wafregional and dynamodbstreams #463
-  * Updates the wafregional and dynamodbstreams API clients to include all API operations, and types that were previously shared between waf and dynamodb API clients respectively. This update ensures that all API clients include all operations and types needed for that client, and shares no types with another client package.
-  * To migrate your applications to use the updated wafregional and dynamodbstreams you'll need to update the package the impacted type is imported from to match the client the type is being used with.
-* `aws`: Context has been added to EC2Metadata operations.([#461](https://github.com/aws/aws-sdk-go-v2/pull/461))
-  * Also updates utilities that directly or indirectly depend on EC2Metadata client. Signer utilities, credential providers now take in context.
-* `private/model`: Add utility for validating shape names for structs and enums for the service packages ([#471](https://github.com/aws/aws-sdk-go-v2/pull/471))
-  * Fixes bug which allowed service package structs, enums to start with non alphabetic character
-  * Fixes the incorrect enum types in mediapackage service package, changing enum types __AdTriggersElement, __PeriodTriggersElement to AdTriggersElement, PeriodTriggersElement respectively.
-* `aws`: Client, Metadata, and Request structures have been refactored to simplify the usage of resolved endpoints ([#473](https://github.com/aws/aws-sdk-go-v2/pull/473))
-  * `aws.Client.Endpoint` struct member has been removed, and `aws.Request.Endpoint` struct member has been added of type `aws.Endpoint`
-  * `aws.Client.Region` structure member has been removed
 
 Services
 ---
 
 SDK Features
 ---
-* `aws`: `PartitionID` has been added to `aws.Endpoint` structure, and is used by the endpoint resolver to indicate which AWS partition an endpoint was resolved for ([#473](https://github.com/aws/aws-sdk-go-v2/pull/473))
-* `aws/endpoints`: Updated resolvers to populate `PartitionID` for a resolved `aws.Endpoint` ([#473](https://github.com/aws/aws-sdk-go-v2/pull/473))
-* `service/s3`: Add support for Access Point resources
-  * Adds support for using Access Point resource with Amazon S3 API operation calls. The Access Point resource are identified by an Amazon Resource Name (ARN).
-  * To make operation calls to an S3 Access Point instead of a S3 Bucket, provide the Access Point ARN string as the value of the Bucket parameter. You can create an Access Point for your bucket with the Amazon S3 Control API. The Access Point ARN can be obtained from the S3 Control API. You should avoid building the ARN directly.
 
 SDK Enhancements
 ---
-* `internal/sdkio`: Adds RingBuffer data structure to the sdk [#417](https://github.com/aws/aws-sdk-go-v2/pull/417)
-  * Adds an implementation of RingBuffer data structure which acts as a revolving buffer of a predefined length. The RingBuffer implements io.ReadWriter interface.
-  * Adds unit tests to test the behavior of the ring buffer.
-* `aws/ec2metadata`: Adds support for EC2Metadata client to use secure tokens provided by the IMDS ([#453](https://github.com/aws/aws-sdk-go-v2/pull/453))
-  * Modifies EC2Metadata client to use request context within its operations ([#462](https://github.com/aws/aws-sdk-go-v2/pull/462))
-  * Reduces the default dialer timeout and response header timeout to help reduce latency for known issues with EC2Metadata client running inside a container
-  * Modifies and adds tests to verify the behavior of the EC2Metadata client.
-* `service/dynamodb/dynamodbattribute`: Adds clarifying docs on dynamodbattribute.UnixTime ([#464](https://github.com/aws/aws-sdk-go-v2/pull/464))
-* `example/service/sts/assumeRole`: added sts assume role example ([#224](https://github.com/aws/aws-sdk-go-v2/pull/224))
-  * Fixes [#157](https://github.com/aws/aws-sdk-go-v2/issues/157) by adding an example for Amazon STS assume role to retrieve credentials.
 
 SDK Bugs
 ---
-* `service/dynamodb/dynamodbattribute`: Fixes a panic when decoding into a map with a key string type alias. ([#465](https://github.com/aws/aws-sdk-go/pull/465))
-  * Fixes [#410](https://github.com/aws/aws-sdk-go-v2/issues/410), by adding support for keys that are string aliases.
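Regarding the `service/dynamodb/dynamodbattribute` fix in the SDK Bugs entries above ([#465], issue [#410]): decoding into a map keyed by a named string type no longer panics. A minimal sketch of the now-supported pattern, assuming the v0.19.0 package layout (exact signatures may differ slightly between v0.x releases):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/dynamodb/dynamodbattribute"
)

// Rating is a named string type. Using it as a map key previously caused
// a panic inside the attribute decoder (issue #410).
type Rating string

func main() {
	in := map[Rating]int{"easy": 1, "hard": 5}

	// Round-trip the map through a DynamoDB AttributeValue.
	av, err := dynamodbattribute.Marshal(in)
	if err != nil {
		panic(err)
	}

	out := map[Rating]int{}
	// Before the fix, this call panicked because the map key type was not
	// plain string; as of v0.19.0 it decodes correctly.
	if err := dynamodbattribute.Unmarshal(av, &out); err != nil {
		panic(err)
	}
	fmt.Println(out) // map[easy:1 hard:5]
}
```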
diff --git a/aws/endpoints/defaults.go b/aws/endpoints/defaults.go index 7c24ada1a80..04359a62378 100644 --- a/aws/endpoints/defaults.go +++ b/aws/endpoints/defaults.go @@ -2219,6 +2219,7 @@ var awsPartition = partition{ "eu-west-1": endpoint{}, "eu-west-2": endpoint{}, "eu-west-3": endpoint{}, + "me-south-1": endpoint{}, "sa-east-1": endpoint{}, "us-east-1": endpoint{}, "us-east-2": endpoint{}, @@ -4540,6 +4541,13 @@ var awscnPartition = partition{ "cn-northwest-1": endpoint{}, }, }, + "elasticfilesystem": service{ + + Endpoints: endpoints{ + "cn-north-1": endpoint{}, + "cn-northwest-1": endpoint{}, + }, + }, "elasticloadbalancing": service{ Defaults: endpoint{ Protocols: []string{"https"}, @@ -5027,6 +5035,15 @@ var awsusgovPartition = partition{ }, }, }, + "autoscaling-plans": service{ + Defaults: endpoint{ + Protocols: []string{"http", "https"}, + }, + Endpoints: endpoints{ + "us-gov-east-1": endpoint{}, + "us-gov-west-1": endpoint{}, + }, + }, "clouddirectory": service{ Endpoints: endpoints{ diff --git a/aws/version.go b/aws/version.go index 15eb7972e0a..b98fb10bab6 100644 --- a/aws/version.go +++ b/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "0.18.0" +const SDKVersion = "0.19.0" diff --git a/internal/awstesting/cmd/op_crawler/create_service.go b/internal/awstesting/cmd/op_crawler/create_service.go index cec6f7f4add..b0b5745e9f5 100644 --- a/internal/awstesting/cmd/op_crawler/create_service.go +++ b/internal/awstesting/cmd/op_crawler/create_service.go @@ -48,6 +48,7 @@ import ( "github.com/aws/aws-sdk-go-v2/service/codegurureviewer" "github.com/aws/aws-sdk-go-v2/service/codepipeline" "github.com/aws/aws-sdk-go-v2/service/codestar" + "github.com/aws/aws-sdk-go-v2/service/codestarconnections" "github.com/aws/aws-sdk-go-v2/service/codestarnotifications" "github.com/aws/aws-sdk-go-v2/service/cognitoidentity" "github.com/aws/aws-sdk-go-v2/service/cognitoidentityprovider" @@ -65,6 +66,7 @@ import ( "github.com/aws/aws-sdk-go-v2/service/datapipeline" "github.com/aws/aws-sdk-go-v2/service/datasync" "github.com/aws/aws-sdk-go-v2/service/dax" + "github.com/aws/aws-sdk-go-v2/service/detective" "github.com/aws/aws-sdk-go-v2/service/devicefarm" "github.com/aws/aws-sdk-go-v2/service/directconnect" "github.com/aws/aws-sdk-go-v2/service/directoryservice" @@ -279,6 +281,7 @@ func createServices(cfg aws.Config) []service { {name: "codegurureviewer", value: reflect.ValueOf(codegurureviewer.New(cfg))}, {name: "codepipeline", value: reflect.ValueOf(codepipeline.New(cfg))}, {name: "codestar", value: reflect.ValueOf(codestar.New(cfg))}, + {name: "codestarconnections", value: reflect.ValueOf(codestarconnections.New(cfg))}, {name: "codestarnotifications", value: reflect.ValueOf(codestarnotifications.New(cfg))}, {name: "cognitoidentity", value: reflect.ValueOf(cognitoidentity.New(cfg))}, {name: "cognitoidentityprovider", value: reflect.ValueOf(cognitoidentityprovider.New(cfg))}, @@ -296,6 +299,7 @@ func createServices(cfg aws.Config) []service { {name: "datapipeline", value: reflect.ValueOf(datapipeline.New(cfg))}, {name: "datasync", value: reflect.ValueOf(datasync.New(cfg))}, {name: "dax", value: reflect.ValueOf(dax.New(cfg))}, + {name: "detective", value: reflect.ValueOf(detective.New(cfg))}, {name: "devicefarm", value: reflect.ValueOf(devicefarm.New(cfg))}, {name: "directconnect", value: 
reflect.ValueOf(directconnect.New(cfg))}, {name: "directoryservice", value: reflect.ValueOf(directoryservice.New(cfg))}, diff --git a/models/apis/AWSMigrationHub/2017-05-31/api-2.json b/models/apis/AWSMigrationHub/2017-05-31/api-2.json index 3eb3a228df7..4c7f29c5f19 100644 --- a/models/apis/AWSMigrationHub/2017-05-31/api-2.json +++ b/models/apis/AWSMigrationHub/2017-05-31/api-2.json @@ -180,6 +180,22 @@ {"shape":"HomeRegionNotSetException"} ] }, + "ListApplicationStates":{ + "name":"ListApplicationStates", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListApplicationStatesRequest"}, + "output":{"shape":"ListApplicationStatesResult"}, + "errors":[ + {"shape":"AccessDeniedException"}, + {"shape":"InternalServerError"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"InvalidInputException"}, + {"shape":"HomeRegionNotSetException"} + ] + }, "ListCreatedArtifacts":{ "name":"ListCreatedArtifacts", "http":{ @@ -320,6 +336,26 @@ "max":1600, "min":1 }, + "ApplicationIds":{ + "type":"list", + "member":{"shape":"ApplicationId"}, + "max":100, + "min":1 + }, + "ApplicationState":{ + "type":"structure", + "members":{ + "ApplicationId":{"shape":"ApplicationId"}, + "ApplicationStatus":{"shape":"ApplicationStatus"}, + "LastUpdatedTime":{"shape":"UpdateDateTime"} + } + }, + "ApplicationStateList":{ + "type":"list", + "member":{"shape":"ApplicationState"}, + "max":1000, + "min":0 + }, "ApplicationStatus":{ "type":"string", "enum":[ @@ -559,6 +595,21 @@ "max":100, "min":0 }, + "ListApplicationStatesRequest":{ + "type":"structure", + "members":{ + "ApplicationIds":{"shape":"ApplicationIds"}, + "NextToken":{"shape":"Token"}, + "MaxResults":{"shape":"MaxResults"} + } + }, + "ListApplicationStatesResult":{ + "type":"structure", + "members":{ + "ApplicationStateList":{"shape":"ApplicationStateList"}, + "NextToken":{"shape":"Token"} + } + }, "ListCreatedArtifactsRequest":{ "type":"structure", "required":[ diff --git a/models/apis/AWSMigrationHub/2017-05-31/docs-2.json b/models/apis/AWSMigrationHub/2017-05-31/docs-2.json index a085c8ea343..88d8b4a4860 100644 --- a/models/apis/AWSMigrationHub/2017-05-31/docs-2.json +++ b/models/apis/AWSMigrationHub/2017-05-31/docs-2.json @@ -11,6 +11,7 @@ "DisassociateCreatedArtifact": "
Disassociates a created artifact of an AWS resource with a migration task performed by a migration tool that was previously associated. This API has the following traits: A migration user can call the DisassociateCreatedArtifacts operation to disassociate a created AWS Artifact from a migration task. The created artifact name must be provided in ARN (Amazon Resource Name) format which will contain information about type and region; for example: arn:aws:ec2:us-east-1:488216288981:image/ami-6d0ba87b. Examples of the AWS resource behind the created artifact are, AMI's, EC2 instance, or RDS instance, etc. Disassociate an Application Discovery Service discovered resource from a migration task.", "ImportMigrationTask": "Registers a new migration task which represents a server, database, etc., being migrated to AWS by a migration tool. This API is a prerequisite to calling the NotifyMigrationTaskState API as the migration tool must first register the migration task with Migration Hub.", + "ListApplicationStates": "Lists all the migration statuses for your applications. If you use the optional ApplicationIds parameter, only the migration statuses for those applications will be returned.", "ListCreatedArtifacts": "Lists the created artifacts attached to a given migration task in an update stream. This API has the following traits: Gets the list of the created artifacts while migration is taking place. Shows the artifacts created by the migration tool that was associated by the AssociateCreatedArtifact API. Lists created artifacts in a paginated interface.", "ListDiscoveredResources": "Lists discovered resources associated with the given MigrationTask.", "ListMigrationTasks": "Lists all, or filtered by resource name, migration tasks associated with the user account making this call. This API has the following traits: Can show a summary list of the most recent migration tasks. Can show a summary list of migration tasks associated with a given discovered resource. Lists migration tasks in a paginated interface.
", + "ApplicationState$ApplicationId": "The configurationId from the Application Discovery Service that uniquely identifies an application.
", "DescribeApplicationStateRequest$ApplicationId": "The configurationId in Application Discovery Service that uniquely identifies the grouped application.
", "NotifyApplicationStateRequest$ApplicationId": "The configurationId in Application Discovery Service that uniquely identifies the grouped application.
" } }, + "ApplicationIds": { + "base": null, + "refs": { + "ListApplicationStatesRequest$ApplicationIds": "The configurationIds from the Application Discovery Service that uniquely identifies your applications.
" + } + }, + "ApplicationState": { + "base": "The state of an application discovered through Migration Hub import, the AWS Agentless Discovery Connector, or the AWS Application Discovery Agent.
", + "refs": { + "ApplicationStateList$member": null + } + }, + "ApplicationStateList": { + "base": null, + "refs": { + "ListApplicationStatesResult$ApplicationStateList": "A list of Applications that exist in Application Discovery Service.
" + } + }, "ApplicationStatus": { "base": null, "refs": { + "ApplicationState$ApplicationStatus": "The current status of an application.
", "DescribeApplicationStateResult$ApplicationStatus": "Status of the application - Not Started, In-Progress, Complete.
", "NotifyApplicationStateRequest$Status": "Status of the application - Not Started, In-Progress, Complete.
" } @@ -236,6 +258,16 @@ "MigrationTask$ResourceAttributeList": "Information about the resource that is being migrated. This data will be used to map the task to a resource in the Application Discovery Service repository.
" } }, + "ListApplicationStatesRequest": { + "base": null, + "refs": { + } + }, + "ListApplicationStatesResult": { + "base": null, + "refs": { + } + }, "ListCreatedArtifactsRequest": { "base": null, "refs": { @@ -279,6 +311,7 @@ "MaxResults": { "base": null, "refs": { + "ListApplicationStatesRequest$MaxResults": "Maximum number of results to be returned per page.
", "ListMigrationTasksRequest$MaxResults": "Value to specify how many results are returned per page.
", "ListProgressUpdateStreamsRequest$MaxResults": "Filter to limit the maximum number of results to list per page.
" } @@ -475,6 +508,8 @@ "Token": { "base": null, "refs": { + "ListApplicationStatesRequest$NextToken": "If a NextToken
was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in NextToken.", + "ListApplicationStatesResult$NextToken": "If a NextToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in NextToken.", "ListCreatedArtifactsRequest$NextToken": "If a NextToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in NextToken.", "ListCreatedArtifactsResult$NextToken": "If there are more created artifacts than the max result, return the next token to be passed to the next call as a bookmark of where to start from.", "ListDiscoveredResourcesRequest$NextToken": "If a NextToken was returned by a previous call, there are more results available. To retrieve the next page of results, make the call again using the returned token in NextToken." } }, "UpdateDateTime": { "base": null, "refs": { + "ApplicationState$LastUpdatedTime": "The timestamp when the application status was last updated.
", "DescribeApplicationStateResult$LastUpdatedTime": "The timestamp when the application status was last updated.
", "MigrationTask$UpdateDateTime": "The timestamp when the task was gathered.
", "MigrationTaskSummary$UpdateDateTime": "The timestamp when the task was gathered.
", diff --git a/models/apis/AWSMigrationHub/2017-05-31/paginators-1.json b/models/apis/AWSMigrationHub/2017-05-31/paginators-1.json index 2d785c8af28..3f2d290acf4 100644 --- a/models/apis/AWSMigrationHub/2017-05-31/paginators-1.json +++ b/models/apis/AWSMigrationHub/2017-05-31/paginators-1.json @@ -1,5 +1,11 @@ { "pagination": { + "ListApplicationStates": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken", + "result_key": "ApplicationStateList" + }, "ListCreatedArtifacts": { "input_token": "NextToken", "limit_key": "MaxResults", diff --git a/models/apis/alexaforbusiness/2017-11-09/api-2.json b/models/apis/alexaforbusiness/2017-11-09/api-2.json index f9acdfcf927..23fa38ea02d 100644 --- a/models/apis/alexaforbusiness/2017-11-09/api-2.json +++ b/models/apis/alexaforbusiness/2017-11-09/api-2.json @@ -1508,6 +1508,7 @@ "OFFLINE" ] }, + "ConnectionStatusUpdatedTime":{"type":"timestamp"}, "Contact":{ "type":"structure", "members":{ @@ -2063,9 +2064,11 @@ "NetworkProfileName":{"shape":"NetworkProfileName"}, "RoomArn":{"shape":"Arn"}, "RoomName":{"shape":"RoomName"}, - "DeviceStatusInfo":{"shape":"DeviceStatusInfo"} + "DeviceStatusInfo":{"shape":"DeviceStatusInfo"}, + "CreatedTime":{"shape":"DeviceDataCreatedTime"} } }, + "DeviceDataCreatedTime":{"type":"timestamp"}, "DeviceDataList":{ "type":"list", "member":{"shape":"DeviceData"} @@ -2170,7 +2173,8 @@ "type":"structure", "members":{ "DeviceStatusDetails":{"shape":"DeviceStatusDetails"}, - "ConnectionStatus":{"shape":"ConnectionStatus"} + "ConnectionStatus":{"shape":"ConnectionStatus"}, + "ConnectionStatusUpdatedTime":{"shape":"ConnectionStatusUpdatedTime"} } }, "DeviceType":{ diff --git a/models/apis/alexaforbusiness/2017-11-09/docs-2.json b/models/apis/alexaforbusiness/2017-11-09/docs-2.json index 407d697bca0..20aa3069a1e 100644 --- a/models/apis/alexaforbusiness/2017-11-09/docs-2.json +++ b/models/apis/alexaforbusiness/2017-11-09/docs-2.json @@ -623,6 +623,12 @@ "DeviceStatusInfo$ConnectionStatus": "The latest available information about the connection status of a device.
" } }, + "ConnectionStatusUpdatedTime": { + "base": null, + "refs": { + "DeviceStatusInfo$ConnectionStatusUpdatedTime": "The time (in epoch) when the device connection status changed.
" + } + }, "Contact": { "base": "A contact with attributes.
", "refs": { @@ -981,6 +987,12 @@ "DeviceDataList$member": null } }, + "DeviceDataCreatedTime": { + "base": null, + "refs": { + "DeviceData$CreatedTime": "The time (in epoch) when the device data was created.
" + } + }, "DeviceDataList": { "base": null, "refs": { diff --git a/models/apis/application-insights/2018-11-25/api-2.json b/models/apis/application-insights/2018-11-25/api-2.json index 27ef9a7b5c0..3807a830618 100644 --- a/models/apis/application-insights/2018-11-25/api-2.json +++ b/models/apis/application-insights/2018-11-25/api-2.json @@ -243,6 +243,20 @@ {"shape":"InternalServerException"} ] }, + "ListConfigurationHistory":{ + "name":"ListConfigurationHistory", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListConfigurationHistoryRequest"}, + "output":{"shape":"ListConfigurationHistoryResponse"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ] + }, "ListLogPatternSets":{ "name":"ListLogPatternSets", "http":{ @@ -431,6 +445,41 @@ "min":1 }, "ComponentName":{"type":"string"}, + "ConfigurationEvent":{ + "type":"structure", + "members":{ + "MonitoredResourceARN":{"shape":"ConfigurationEventMonitoredResourceARN"}, + "EventStatus":{"shape":"ConfigurationEventStatus"}, + "EventResourceType":{"shape":"ConfigurationEventResourceType"}, + "EventTime":{"shape":"ConfigurationEventTime"}, + "EventDetail":{"shape":"ConfigurationEventDetail"}, + "EventResourceName":{"shape":"ConfigurationEventResourceName"} + } + }, + "ConfigurationEventDetail":{"type":"string"}, + "ConfigurationEventList":{ + "type":"list", + "member":{"shape":"ConfigurationEvent"} + }, + "ConfigurationEventMonitoredResourceARN":{"type":"string"}, + "ConfigurationEventResourceName":{"type":"string"}, + "ConfigurationEventResourceType":{ + "type":"string", + "enum":[ + "CLOUDWATCH_ALARM", + "CLOUDFORMATION", + "SSM_ASSOCIATION" + ] + }, + "ConfigurationEventStatus":{ + "type":"string", + "enum":[ + "INFO", + "WARN", + "ERROR" + ] + }, + "ConfigurationEventTime":{"type":"timestamp"}, "CreateApplicationRequest":{ "type":"structure", "required":["ResourceGroupName"], @@ -724,6 +773,24 @@ "NextToken":{"shape":"PaginationToken"} } }, + "ListConfigurationHistoryRequest":{ + "type":"structure", + "members":{ + "ResourceGroupName":{"shape":"ResourceGroupName"}, + "StartTime":{"shape":"StartTime"}, + "EndTime":{"shape":"EndTime"}, + "EventStatus":{"shape":"ConfigurationEventStatus"}, + "MaxResults":{"shape":"MaxEntities"}, + "NextToken":{"shape":"PaginationToken"} + } + }, + "ListConfigurationHistoryResponse":{ + "type":"structure", + "members":{ + "EventList":{"shape":"ConfigurationEventList"}, + "NextToken":{"shape":"PaginationToken"} + } + }, "ListLogPatternSetsRequest":{ "type":"structure", "required":["ResourceGroupName"], diff --git a/models/apis/application-insights/2018-11-25/docs-2.json b/models/apis/application-insights/2018-11-25/docs-2.json index 3194148023e..aa834bae0bc 100644 --- a/models/apis/application-insights/2018-11-25/docs-2.json +++ b/models/apis/application-insights/2018-11-25/docs-2.json @@ -18,6 +18,7 @@ "DescribeProblemObservations": "Describes the anomalies or errors associated with the problem.
", "ListApplications": "Lists the IDs of the applications that you are monitoring.
", "ListComponents": "Lists the auto-grouped, standalone, and custom components of the application.
", + "ListConfigurationHistory": "Lists the INFO, WARN, and ERROR events for periodic configuration updates performed by Application Insights. Examples of events represented are:
INFO: creating a new alarm or updating an alarm threshold. WARN: alarm not created due to insufficient data points used to predict thresholds. ERROR: alarm not created due to permission errors or exceeding quotas.", "ListLogPatternSets": "Lists the log pattern sets in the specific application.", "ListLogPatterns": "Lists the log patterns in the specific log LogPatternSet.", "ListProblems": "Lists the problems with your application.
", @@ -99,6 +100,55 @@ "UpdateComponentRequest$ComponentName": "The name of the component.
" } }, + "ConfigurationEvent": { + "base": "The event information.
", + "refs": { + "ConfigurationEventList$member": null + } + }, + "ConfigurationEventDetail": { + "base": null, + "refs": { + "ConfigurationEvent$EventDetail": "The details of the event in plain text.
" + } + }, + "ConfigurationEventList": { + "base": null, + "refs": { + "ListConfigurationHistoryResponse$EventList": "The list of configuration events and their corresponding details.
" + } + }, + "ConfigurationEventMonitoredResourceARN": { + "base": null, + "refs": { + "ConfigurationEvent$MonitoredResourceARN": "The resource monitored by Application Insights.
" + } + }, + "ConfigurationEventResourceName": { + "base": null, + "refs": { + "ConfigurationEvent$EventResourceName": "The name of the resource Application Insights attempted to configure.
" + } + }, + "ConfigurationEventResourceType": { + "base": null, + "refs": { + "ConfigurationEvent$EventResourceType": "The resource type that Application Insights attempted to configure, for example, CLOUDWATCH_ALARM.
" + } + }, + "ConfigurationEventStatus": { + "base": null, + "refs": { + "ConfigurationEvent$EventStatus": "The status of the configuration update event. Possible values include INFO, WARN, and ERROR.
", + "ListConfigurationHistoryRequest$EventStatus": "The status of the configuration update event. Possible values include INFO, WARN, and ERROR.
" + } + }, + "ConfigurationEventTime": { + "base": null, + "refs": { + "ConfigurationEvent$EventTime": "The timestamp of the event.
" + } + }, "CreateApplicationRequest": { "base": null, "refs": { @@ -242,6 +292,7 @@ "EndTime": { "base": null, "refs": { + "ListConfigurationHistoryRequest$EndTime": "The end time of the event.
", "ListProblemsRequest$EndTime": "The time when the problem ended, in epoch seconds. If not specified, problems within the past seven days are returned.
", "Observation$EndTime": "The time when the observation ended, in epoch seconds.
", "Problem$EndTime": "The time when the problem ended, in epoch seconds.
" @@ -325,6 +376,16 @@ "refs": { } }, + "ListConfigurationHistoryRequest": { + "base": null, + "refs": { + } + }, + "ListConfigurationHistoryResponse": { + "base": null, + "refs": { + } + }, "ListLogPatternSetsRequest": { "base": null, "refs": { @@ -447,6 +508,7 @@ "refs": { "ListApplicationsRequest$MaxResults": "The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned NextToken
value.", "ListComponentsRequest$MaxResults": "The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned NextToken value.", + "ListConfigurationHistoryRequest$MaxResults": "The maximum number of results returned by ListConfigurationHistory in paginated output. When this parameter is used, ListConfigurationHistory returns only MaxResults in a single page along with a NextToken response element. The remaining results of the initial request can be seen by sending another ListConfigurationHistory request with the returned NextToken value. If this parameter is not used, then ListConfigurationHistory returns all results.", "ListLogPatternSetsRequest$MaxResults": "The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned NextToken value.", "ListLogPatternsRequest$MaxResults": "The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned NextToken value.", "ListProblemsRequest$MaxResults": "The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned NextToken value." } }, "PaginationToken": { "base": null, "refs": { "ListApplicationsResponse$NextToken": "The token used to retrieve the next page of results. This value is null when there are no more results to return.", "ListComponentsRequest$NextToken": "The token to request the next page of results.
", "ListComponentsResponse$NextToken": "The token to request the next page of results.
", + "ListConfigurationHistoryRequest$NextToken": "The NextToken
value returned from a previous paginated ListConfigurationHistory request where MaxResults was used and the results exceeded the value of that parameter. Pagination continues from the end of the previous results that returned the NextToken value. This value is null when there are no more results to return.", + "ListConfigurationHistoryResponse$NextToken": "The NextToken value to include in a future ListConfigurationHistory request. When the results of a ListConfigurationHistory request exceed MaxResults, this value can be used to retrieve the next page of results. This value is null when there are no more results to return.", "ListLogPatternSetsRequest$NextToken": "The token to request the next page of results.
", "ListLogPatternSetsResponse$NextToken": "The token used to retrieve the next page of results. This value is null
when there are no more results to return.", "ListLogPatternsRequest$NextToken": "The token to request the next page of results.
", @@ -559,7 +623,7 @@ "Remarks": { "base": null, "refs": { - "ApplicationInfo$Remarks": "The issues on the user side that block Application Insights from successfully monitoring an application.
" + "ApplicationInfo$Remarks": "The issues on the user side that block Application Insights from successfully monitoring an application. Example remarks include:
“Configuring application, detected 1 Errors, 3 Warnings” “Configuring application, detected 1 Unconfigured Components”" } }, "ResourceGroupName": { "base": null, "refs": { "DescribeLogPatternRequest$ResourceGroupName": "The name of the resource group.
", "DescribeLogPatternResponse$ResourceGroupName": "The name of the resource group.
", "ListComponentsRequest$ResourceGroupName": "The name of the resource group.
", + "ListConfigurationHistoryRequest$ResourceGroupName": "Resource group to which the application belongs.
", "ListLogPatternSetsRequest$ResourceGroupName": "The name of the resource group.
", "ListLogPatternSetsResponse$ResourceGroupName": "The name of the resource group.
", "ListLogPatternsRequest$ResourceGroupName": "The name of the resource group.
", @@ -650,6 +715,7 @@ "StartTime": { "base": null, "refs": { + "ListConfigurationHistoryRequest$StartTime": "The start time of the event.
", "ListProblemsRequest$StartTime": "The time when the problem was detected, in epoch seconds. If you don't specify a time frame for the request, problems within the past seven days are returned.
", "Observation$StartTime": "The time when the observation was first detected, in epoch seconds.
", "Problem$StartTime": "The time when the problem started, in epoch seconds.
" diff --git a/models/apis/application-insights/2018-11-25/paginators-1.json b/models/apis/application-insights/2018-11-25/paginators-1.json index 7a952b6eee3..2f237e4c958 100644 --- a/models/apis/application-insights/2018-11-25/paginators-1.json +++ b/models/apis/application-insights/2018-11-25/paginators-1.json @@ -10,6 +10,11 @@ "output_token": "NextToken", "limit_key": "MaxResults" }, + "ListConfigurationHistory": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "ListLogPatternSets": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/apis/backup/2018-11-15/api-2.json b/models/apis/backup/2018-11-15/api-2.json index 1ed605ecd0c..e7fc2bf53ef 100644 --- a/models/apis/backup/2018-11-15/api-2.json +++ b/models/apis/backup/2018-11-15/api-2.json @@ -186,6 +186,22 @@ ], "idempotent":true }, + "DescribeCopyJob":{ + "name":"DescribeCopyJob", + "http":{ + "method":"GET", + "requestUri":"/copy-jobs/{copyJobId}" + }, + "input":{"shape":"DescribeCopyJobInput"}, + "output":{"shape":"DescribeCopyJobOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParameterValueException"}, + {"shape":"MissingParameterValueException"}, + {"shape":"ServiceUnavailableException"} + ], + "idempotent":true + }, "DescribeProtectedResource":{ "name":"DescribeProtectedResource", "http":{ @@ -382,6 +398,7 @@ "output":{"shape":"ListBackupJobsOutput"}, "errors":[ {"shape":"InvalidParameterValueException"}, + {"shape":"InvalidRequestException"}, {"shape":"ServiceUnavailableException"} ], "idempotent":true @@ -465,6 +482,19 @@ ], "idempotent":true }, + "ListCopyJobs":{ + "name":"ListCopyJobs", + "http":{ + "method":"GET", + "requestUri":"/copy-jobs/" + }, + "input":{"shape":"ListCopyJobsInput"}, + "output":{"shape":"ListCopyJobsOutput"}, + "errors":[ + {"shape":"InvalidParameterValueException"}, + {"shape":"ServiceUnavailableException"} + ] + }, "ListProtectedResources":{ "name":"ListProtectedResources", "http":{ @@ -590,6 +620,23 @@ ], "idempotent":true }, + "StartCopyJob":{ + "name":"StartCopyJob", + "http":{ + "method":"PUT", + "requestUri":"/copy-jobs" + }, + "input":{"shape":"StartCopyJobInput"}, + "output":{"shape":"StartCopyJobOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidParameterValueException"}, + {"shape":"MissingParameterValueException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"LimitExceededException"} + ], + "idempotent":true + }, "StartRestoreJob":{ "name":"StartRestoreJob", "http":{ @@ -807,7 +854,8 @@ "CompletionWindowMinutes":{"shape":"WindowMinutes"}, "Lifecycle":{"shape":"Lifecycle"}, "RecoveryPointTags":{"shape":"Tags"}, - "RuleId":{"shape":"string"} + "RuleId":{"shape":"string"}, + "CopyActions":{"shape":"CopyActions"} } }, "BackupRuleInput":{ @@ -823,7 +871,8 @@ "StartWindowMinutes":{"shape":"WindowMinutes"}, "CompletionWindowMinutes":{"shape":"WindowMinutes"}, "Lifecycle":{"shape":"Lifecycle"}, - "RecoveryPointTags":{"shape":"Tags"} + "RecoveryPointTags":{"shape":"Tags"}, + "CopyActions":{"shape":"CopyActions"} } }, "BackupRuleName":{ @@ -875,8 +924,16 @@ "enum":[ "BACKUP_JOB_STARTED", "BACKUP_JOB_COMPLETED", + "BACKUP_JOB_SUCCESSFUL", + "BACKUP_JOB_FAILED", + "BACKUP_JOB_EXPIRED", "RESTORE_JOB_STARTED", "RESTORE_JOB_COMPLETED", + "RESTORE_JOB_SUCCESSFUL", + "RESTORE_JOB_FAILED", + "COPY_JOB_STARTED", + "COPY_JOB_SUCCESSFUL", + "COPY_JOB_FAILED", "RECOVERY_POINT_MODIFIED", "BACKUP_PLAN_CREATED", "BACKUP_PLAN_MODIFIED" @@ -932,6 +989,50 @@ 
"enum":["STRINGEQUALS"] }, "ConditionValue":{"type":"string"}, + "CopyAction":{ + "type":"structure", + "required":["DestinationBackupVaultArn"], + "members":{ + "Lifecycle":{"shape":"Lifecycle"}, + "DestinationBackupVaultArn":{"shape":"ARN"} + } + }, + "CopyActions":{ + "type":"list", + "member":{"shape":"CopyAction"} + }, + "CopyJob":{ + "type":"structure", + "members":{ + "CopyJobId":{"shape":"string"}, + "SourceBackupVaultArn":{"shape":"ARN"}, + "SourceRecoveryPointArn":{"shape":"ARN"}, + "DestinationBackupVaultArn":{"shape":"ARN"}, + "DestinationRecoveryPointArn":{"shape":"ARN"}, + "ResourceArn":{"shape":"ARN"}, + "CreationDate":{"shape":"timestamp"}, + "CompletionDate":{"shape":"timestamp"}, + "State":{"shape":"CopyJobState"}, + "StatusMessage":{"shape":"string"}, + "BackupSizeInBytes":{"shape":"Long"}, + "IamRoleArn":{"shape":"IAMRoleArn"}, + "CreatedBy":{"shape":"RecoveryPointCreator"}, + "ResourceType":{"shape":"ResourceType"} + } + }, + "CopyJobState":{ + "type":"string", + "enum":[ + "CREATED", + "RUNNING", + "COMPLETED", + "FAILED" + ] + }, + "CopyJobsList":{ + "type":"list", + "member":{"shape":"CopyJob"} + }, "CreateBackupPlanInput":{ "type":"structure", "required":["BackupPlan"], @@ -1154,6 +1255,23 @@ "NumberOfRecoveryPoints":{"shape":"long"} } }, + "DescribeCopyJobInput":{ + "type":"structure", + "required":["CopyJobId"], + "members":{ + "CopyJobId":{ + "shape":"string", + "location":"uri", + "locationName":"copyJobId" + } + } + }, + "DescribeCopyJobOutput":{ + "type":"structure", + "members":{ + "CopyJob":{"shape":"CopyJob"} + } + }, "DescribeProtectedResourceInput":{ "type":"structure", "required":["ResourceArn"], @@ -1636,6 +1754,58 @@ "NextToken":{"shape":"string"} } }, + "ListCopyJobsInput":{ + "type":"structure", + "members":{ + "NextToken":{ + "shape":"string", + "location":"querystring", + "locationName":"nextToken" + }, + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"maxResults" + }, + "ByResourceArn":{ + "shape":"ARN", + "location":"querystring", + "locationName":"resourceArn" + }, + "ByState":{ + "shape":"CopyJobState", + "location":"querystring", + "locationName":"state" + }, + "ByCreatedBefore":{ + "shape":"timestamp", + "location":"querystring", + "locationName":"createdBefore" + }, + "ByCreatedAfter":{ + "shape":"timestamp", + "location":"querystring", + "locationName":"createdAfter" + }, + "ByResourceType":{ + "shape":"ResourceType", + "location":"querystring", + "locationName":"resourceType" + }, + "ByDestinationVaultArn":{ + "shape":"string", + "location":"querystring", + "locationName":"destinationVaultArn" + } + } + }, + "ListCopyJobsOutput":{ + "type":"structure", + "members":{ + "CopyJobs":{"shape":"CopyJobsList"}, + "NextToken":{"shape":"string"} + } + }, "ListOfTags":{ "type":"list", "member":{"shape":"Condition"} @@ -1802,7 +1972,8 @@ "Metadata":{ "type":"map", "key":{"shape":"MetadataKey"}, - "value":{"shape":"MetadataValue"} + "value":{"shape":"MetadataValue"}, + "sensitive":true }, "MetadataKey":{"type":"string"}, "MetadataValue":{"type":"string"}, @@ -2005,6 +2176,30 @@ "CreationDate":{"shape":"timestamp"} } }, + "StartCopyJobInput":{ + "type":"structure", + "required":[ + "RecoveryPointArn", + "SourceBackupVaultName", + "DestinationBackupVaultArn", + "IamRoleArn" + ], + "members":{ + "RecoveryPointArn":{"shape":"ARN"}, + "SourceBackupVaultName":{"shape":"BackupVaultName"}, + "DestinationBackupVaultArn":{"shape":"ARN"}, + "IamRoleArn":{"shape":"IAMRoleArn"}, + "IdempotencyToken":{"shape":"string"}, + 
"Lifecycle":{"shape":"Lifecycle"} + } + }, + "StartCopyJobOutput":{ + "type":"structure", + "members":{ + "CopyJobId":{"shape":"string"}, + "CreationDate":{"shape":"timestamp"} + } + }, "StartRestoreJobInput":{ "type":"structure", "required":[ diff --git a/models/apis/backup/2018-11-15/docs-2.json b/models/apis/backup/2018-11-15/docs-2.json index c446a6257d6..489eb2e0052 100644 --- a/models/apis/backup/2018-11-15/docs-2.json +++ b/models/apis/backup/2018-11-15/docs-2.json @@ -2,8 +2,8 @@ "version": "2.0", "service": "AWS Backup is a unified backup service designed to protect AWS services and their associated data. AWS Backup simplifies the creation, migration, restoration, and deletion of backups, while also providing reporting and auditing.
", "operations": { - "CreateBackupPlan": "Backup plans are documents that contain information that AWS Backup uses to schedule tasks that create recovery points of resources.
If you call CreateBackupPlan with a plan that already exists, the existing backupPlanId is returned.", - "CreateBackupSelection": "Creates a JSON document that specifies a set of resources to assign to a backup plan. Resources can be included by specifying patterns for a ListOfTags and selected Resources. For example, consider the following patterns: Resources: \"arn:aws:ec2:region:account-id:volume/volume-id\" ConditionKey:\"department\" ConditionValue:\"finance\" ConditionType:\"StringEquals\" ConditionKey:\"importance\" ConditionValue:\"critical\" ConditionType:\"StringEquals\" Using these patterns would back up all Amazon Elastic Block Store (Amazon EBS) volumes that are tagged as \"department=finance\", \"importance=critical\", in addition to an EBS volume with the specified volume Id. Resources and conditions are additive in that all resources that match the pattern are selected. This shouldn't be confused with a logical AND, where all conditions must match. The matching patterns are logically 'put together using the OR operator. In other words, all patterns that match are selected for backup.
", + "CreateBackupPlan": "Backup plans are documents that contain information that AWS Backup uses to schedule tasks that create recovery points of resources.
If you call CreateBackupPlan with a plan that already exists, an AlreadyExistsException is returned.", + "CreateBackupSelection": "Creates a JSON document that specifies a set of resources to assign to a backup plan. Resources can be included by specifying patterns for a ListOfTags and selected Resources. For example, consider the following patterns: Resources: \"arn:aws:ec2:region:account-id:volume/volume-id\" ConditionKey:\"department\" ConditionValue:\"finance\" ConditionType:\"STRINGEQUALS\" ConditionKey:\"importance\" ConditionValue:\"critical\" ConditionType:\"STRINGEQUALS\" Using these patterns would back up all Amazon Elastic Block Store (Amazon EBS) volumes that are tagged as \"department=finance\", \"importance=critical\", in addition to an EBS volume with the specified volume Id. Resources and conditions are additive in that all resources that match the pattern are selected. This shouldn't be confused with a logical AND, where all conditions must match. The matching patterns are logically 'put together using the OR operator. In other words, all patterns that match are selected for backup.
", "CreateBackupVault": "Creates a logical container where backups are stored. A CreateBackupVault
request includes a name, optionally one or more resource tags, an encryption key, and a request ID.
Sensitive data, such as passport numbers, should not be included the name of a backup vault.
", "DeleteBackupPlan": "Deletes a backup plan. A backup plan can only be deleted after all associated selections of resources have been deleted. Deleting a backup plan deletes the current version of a backup plan. Previous versions, if any, will still exist.
", "DeleteBackupSelection": "Deletes the resource selection associated with a backup plan that is specified by the SelectionId
.", "DeleteRecoveryPoint": "Deletes the recovery point specified by a recovery point ID.
", "DescribeBackupJob": "Returns metadata associated with creating a backup of a resource.
", "DescribeBackupVault": "Returns metadata about a backup vault specified by its name.
", + "DescribeCopyJob": "Returns metadata associated with creating a copy of a resource.
", "DescribeProtectedResource": "Returns information about a saved resource, including the last time it was backed-up, its Amazon Resource Name (ARN), and the AWS service type of the saved resource.
", "DescribeRecoveryPoint": "Returns metadata associated with a recovery point, including ID, status, encryption, and lifecycle.
", "DescribeRestoreJob": "Returns metadata associated with a restore job that is specified by a job ID.
", @@ -23,7 +24,7 @@ "GetBackupSelection": "Returns selection metadata and a document in JSON format that specifies a list of resources that are associated with a backup plan.
", "GetBackupVaultAccessPolicy": "Returns the access policy document that is associated with the named backup vault.
", "GetBackupVaultNotifications": "Returns event notifications for the specified backup vault.
", - "GetRecoveryPointRestoreMetadata": "Returns two sets of metadata key-value pairs. The first set lists the metadata that the recovery point was created with. The second set lists the metadata key-value pairs that are required to restore the recovery point.
These sets can be the same, or the restore metadata set can contain different values if the target service to be restored has changed since the recovery point was created and now requires additional or different information in order to be restored.
", + "GetRecoveryPointRestoreMetadata": "Returns a set of metadata key-value pairs that were used to create the backup.
", "GetSupportedResourceTypes": "Returns the AWS resource types supported by AWS Backup.
", "ListBackupJobs": "Returns metadata about your backup jobs.
", "ListBackupPlanTemplates": "Returns metadata of your saved backup plan templates, including the template ID, name, and the creation and deletion dates.
", @@ -31,6 +32,7 @@ "ListBackupPlans": "Returns metadata of your saved backup plans, including Amazon Resource Names (ARNs), plan IDs, creation and deletion dates, version IDs, plan names, and creator request IDs.
", "ListBackupSelections": "Returns an array containing metadata of the resources associated with the target backup plan.
", "ListBackupVaults": "Returns a list of recovery point storage containers along with information about them.
", + "ListCopyJobs": "Returns metadata about your copy jobs.
", "ListProtectedResources": "Returns an array of resources successfully backed up by AWS Backup, including the time the resource was saved, an Amazon Resource Name (ARN) of the resource, and a resource type.
", "ListRecoveryPointsByBackupVault": "Returns detailed information about the recovery points stored in a backup vault.
", "ListRecoveryPointsByResource": "Returns detailed information about recovery points of the type specified by a resource Amazon Resource Name (ARN).
", @@ -39,6 +41,7 @@ "PutBackupVaultAccessPolicy": "Sets a resource-based policy that is used to manage access permissions on the target backup vault. Requires a backup vault name and an access policy document in JSON format.
", "PutBackupVaultNotifications": "Turns on notifications on a backup vault for the specified topic and events.
", "StartBackupJob": "Starts a job to create a one-time backup of the specified resource.
", + "StartCopyJob": "Starts a job to create a one-time copy of the specified resource.
", "StartRestoreJob": "Recovers the saved resource identified by an Amazon Resource Name (ARN).
If the resource ARN is included in the request, then the last complete backup of that resource is recovered. If the ARN of a recovery point is supplied, then that recovery point is restored.
", "StopBackupJob": "Attempts to cancel a job to create a one-time backup of a resource.
", "TagResource": "Assigns a set of key-value pairs to a recovery point, backup plan, or backup vault identified by an Amazon Resource Name (ARN).
", @@ -56,6 +59,12 @@ "BackupPlansListMember$BackupPlanArn": "An Amazon Resource Name (ARN) that uniquely identifies a backup plan; for example, arn:aws:backup:us-east-1:123456789012:plan:8F81F553-3A74-4A3F-B93D-B3360DC80C50
.", "BackupVaultListMember$BackupVaultArn": "An Amazon Resource Name (ARN) that uniquely identifies a backup vault; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.", "BackupVaultListMember$EncryptionKeyArn": "The server-side encryption key that is used to protect your backups; for example, arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab.", + "CopyAction$DestinationBackupVaultArn": "An Amazon Resource Name (ARN) that uniquely identifies the destination backup vault for the copied backup. For example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.
", + "CopyJob$SourceBackupVaultArn": "An Amazon Resource Name (ARN) that uniquely identifies a source copy vault; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.
", + "CopyJob$SourceRecoveryPointArn": "An ARN that uniquely identifies a source recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
", + "CopyJob$DestinationBackupVaultArn": "An Amazon Resource Name (ARN) that uniquely identifies a destination copy vault; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.
", + "CopyJob$DestinationRecoveryPointArn": "An ARN that uniquely identifies a destination recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
", + "CopyJob$ResourceArn": "The type of AWS resource to be copied; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.
", "CreateBackupPlanOutput$BackupPlanArn": "An Amazon Resource Name (ARN) that uniquely identifies a backup plan; for example, arn:aws:backup:us-east-1:123456789012:plan:8F81F553-3A74-4A3F-B93D-B3360DC80C50
.", "CreateBackupVaultInput$EncryptionKeyArn": "The server-side encryption key that is used to protect your backups; for example, arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab.", "CreateBackupVaultOutput$BackupVaultArn": "An Amazon Resource Name (ARN) that uniquely identifies a backup vault; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.", "GetRecoveryPointRestoreMetadataOutput$BackupVaultArn": "An ARN that uniquely identifies a backup vault; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.", "GetRecoveryPointRestoreMetadataOutput$RecoveryPointArn": "An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.", "ListBackupJobsInput$ByResourceArn": "Returns only backup jobs that match the specified resource Amazon Resource Name (ARN).
", + "ListCopyJobsInput$ByResourceArn": "Returns only copy jobs that match the specified resource Amazon Resource Name (ARN).
", "ListRecoveryPointsByBackupVaultInput$ByResourceArn": "Returns only recovery points that match the specified resource Amazon Resource Name (ARN).
", "ListRecoveryPointsByResourceInput$ResourceArn": "An ARN that uniquely identifies a resource. The format of the ARN depends on the resource type.
", "ListTagsInput$ResourceArn": "An Amazon Resource Name (ARN) that uniquely identifies a resource. The format of the ARN depends on the type of resource. Valid targets for ListTags
are recovery points, backup plans, and backup vaults.
An Amazon Resource Name (ARN) that uniquely identifies a resource. The format of the ARN depends on the resource type.
", "StartBackupJobInput$ResourceArn": "An Amazon Resource Name (ARN) that uniquely identifies a resource. The format of the ARN depends on the resource type.
", "StartBackupJobOutput$RecoveryPointArn": "An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45
.", + "StartCopyJobInput$RecoveryPointArn": "An ARN that uniquely identifies a recovery point to use for the copy job; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
", + "StartCopyJobInput$DestinationBackupVaultArn": "An Amazon Resource Name (ARN) that uniquely identifies a destination backup vault to copy to; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault
.", "StartRestoreJobInput$RecoveryPointArn": "An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.", "TagResourceInput$ResourceArn": "An ARN that uniquely identifies a resource. The format of the ARN depends on the type of the tagged resource.
", "UntagResourceInput$ResourceArn": "An ARN that uniquely identifies a resource. The format of the ARN depends on the type of the tagged resource.
", @@ -222,8 +234,8 @@ "BackupSelection": { "base": "Used to specify a set of resources to a backup plan.
", "refs": { - "CreateBackupSelectionInput$BackupSelection": "Specifies the body of a request to assign a set of resources to a backup plan.
It includes an array of resources, an optional array of patterns to exclude resources, an optional role to provide access to the AWS service the resource belongs to, and an optional array of tags used to identify a set of resources.
", - "GetBackupSelectionOutput$BackupSelection": "Specifies the body of a request to assign a set of resources to a backup plan.
It includes an array of resources, an optional array of patterns to exclude resources, an optional role to provide access to the AWS service that the resource belongs to, and an optional array of tags used to identify a set of resources.
" + "CreateBackupSelectionInput$BackupSelection": "Specifies the body of a request to assign a set of resources to a backup plan.
", + "GetBackupSelectionOutput$BackupSelection": "Specifies the body of a request to assign a set of resources to a backup plan.
" } }, "BackupSelectionName": { @@ -297,6 +309,7 @@ "RecoveryPointByBackupVault$BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.
", "RecoveryPointByResource$BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.
", "StartBackupJobInput$BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.
", + "StartCopyJobInput$SourceBackupVaultName": "The name of a logical source container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens. >
", "UpdateRecoveryPointLifecycleInput$BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.
" } }, @@ -315,7 +328,7 @@ } }, "Condition": { - "base": "Contains an array of triplets made up of a condition type (such as StringEquals
), a key, and a value. Conditions are used to filter resources in a selection that is assigned to a backup plan.
", + "base": "Contains an array of triplets made up of a condition type (such as STRINGEQUALS), a key, and a value. Conditions are used to filter resources in a selection that is assigned to a backup plan.", "refs": { "ListOfTags$member": null } }, "ConditionType": { "base": null, "refs": { - "Condition$ConditionType": "An operation, such as StringEquals, that is applied to a key-value pair used to filter resources in a selection.", + "Condition$ConditionType": "An operation, such as STRINGEQUALS, that is applied to a key-value pair used to filter resources in a selection." } }, "ConditionValue": { "base": null, "refs": { "Condition$ConditionValue": "The value in a key-value pair. For example, in \"ec2:ResourceTag/Department\": \"accounting\", \"accounting\" is the value." } }, + "CopyAction": { + "base": "The details of the copy operation.
", + "refs": { + "CopyActions$member": null + } + }, + "CopyActions": { + "base": null, + "refs": { + "BackupRule$CopyActions": "An array of CopyAction
objects, which contains the details of the copy operation.
", + "BackupRuleInput$CopyActions": "An array of CopyAction objects, which contains the details of the copy operation." } }, + "CopyJob": { + "base": "Contains detailed information about a copy job.
", + "refs": { + "CopyJobsList$member": null, + "DescribeCopyJobOutput$CopyJob": "Contains detailed information about a copy job.
" + } + }, + "CopyJobState": { + "base": null, + "refs": { + "CopyJob$State": "The current state of a resource recovery point.
", + "ListCopyJobsInput$ByState": "Returns only copy jobs that are in the specified state.
" + } + }, + "CopyJobsList": { + "base": null, + "refs": { + "ListCopyJobsOutput$CopyJobs": "An array of structures containing metadata about your copy jobs returned in JSON format.
" + } + }, "CreateBackupPlanInput": { "base": null, "refs": { @@ -435,6 +481,16 @@ "refs": { } }, + "DescribeCopyJobInput": { + "base": null, + "refs": { + } + }, + "DescribeCopyJobOutput": { + "base": null, + "refs": { + } + }, "DescribeProtectedResourceInput": { "base": null, "refs": { @@ -563,12 +619,14 @@ "BackupJob$IamRoleArn": "Specifies the IAM role ARN used to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access
.
The ARN of the IAM role that AWS Backup uses to authenticate when restoring the target resource; for example, arn:aws:iam::123456789012:role/S3Access.
Specifies the IAM role Amazon Resource Name (ARN) to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
Specifies the IAM role ARN used to copy the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
", "DescribeBackupJobOutput$IamRoleArn": "Specifies the IAM role ARN used to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access
.
Specifies the IAM role ARN used to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
Specifies the IAM role ARN used to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
Specifies the IAM role ARN used to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
Specifies the IAM role ARN used to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
Specifies the IAM role ARN used to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
Specifies the IAM role ARN used to copy the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access.
", "StartRestoreJobInput$IamRoleArn": "The Amazon Resource Name (ARN) of the IAM role that AWS Backup uses to create the target recovery point; for example, arn:aws:iam::123456789012:role/S3Access
.
Contains an array of Transition objects specifying how long in days before a recovery point transitions to cold storage or is deleted.
Contains an array of Transition objects specifying how long in days before a recovery point transitions to cold storage or is deleted.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, on the console, the “expire after days” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
", "refs": { "BackupRule$Lifecycle": "The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. AWS Backup transitions and expires backups automatically according to the lifecycle that you define.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “expire after days” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
", "BackupRuleInput$Lifecycle": "The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. AWS Backup will transition and expire backups automatically according to the lifecycle that you define.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “expire after days” setting must be 90 days greater than the “transition to cold after days”. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
", + "CopyAction$Lifecycle": null, "DescribeRecoveryPointOutput$Lifecycle": "The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. AWS Backup transitions and expires backups automatically according to the lifecycle that you define.
Backups that are transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “expire after days” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
", "RecoveryPointByBackupVault$Lifecycle": "The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. AWS Backup transitions and expires backups automatically according to the lifecycle that you define.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “expire after days” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
", "StartBackupJobInput$Lifecycle": "The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. AWS Backup will transition and expire backups automatically according to the lifecycle that you define.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “expire after days” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
", + "StartCopyJobInput$Lifecycle": null, "UpdateRecoveryPointLifecycleInput$Lifecycle": "The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. AWS Backup transitions and expires backups automatically according to the lifecycle that you define.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “expire after days” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
", "UpdateRecoveryPointLifecycleOutput$Lifecycle": "The lifecycle defines when a protected resource is transitioned to cold storage and when it expires. AWS Backup transitions and expires backups automatically according to the lifecycle that you define.
Backups transitioned to cold storage must be stored in cold storage for a minimum of 90 days. Therefore, the “expire after days” setting must be 90 days greater than the “transition to cold after days” setting. The “transition to cold after days” setting cannot be changed after a backup has been transitioned to cold.
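Since the 90-day constraint above is stated repeatedly but never shown, here is a hedged sketch of a Lifecycle value that satisfies it. The two field names come from the Lifecycle entries later in this file; the day counts are illustrative:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/backup"
)

func main() {
	// Move recovery points to cold storage after 30 days and delete them
	// after 120 days: 120 >= 30 + 90, so the constraint above holds.
	lifecycle := backup.Lifecycle{
		MoveToColdStorageAfterDays: aws.Int64(30),
		DeleteAfterDays:            aws.Int64(120),
	}
	// The gap between the two settings must be at least 90 days.
	fmt.Println(*lifecycle.DeleteAfterDays - *lifecycle.MoveToColdStorageAfterDays >= 90) // true
}
```

The same struct can be assigned wherever a Lifecycle is accepted per the refs above, for example on BackupRuleInput, StartBackupJobInput, StartCopyJobInput, or UpdateRecoveryPointLifecycleInput.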
" } @@ -659,10 +719,20 @@ "refs": { } }, + "ListCopyJobsInput": { + "base": null, + "refs": { + } + }, + "ListCopyJobsOutput": { + "base": null, + "refs": { + } + }, "ListOfTags": { "base": null, "refs": { - "BackupSelection$ListOfTags": "An array of conditions used to specify a set of resources to assign to a backup plan; for example, \"StringEquals\": {\"ec2:ResourceTag/Department\": \"accounting\"
.
An array of conditions used to specify a set of resources to assign to a backup plan; for example, \"STRINGEQUALS\": {\"ec2:ResourceTag/Department\": \"accounting\"}.
The size, in bytes, of a backup.
", "BackupJob$BytesTransferred": "The size in bytes transferred to a backup vault at the time that the job status was queried.
", + "CopyJob$BackupSizeInBytes": "The size, in bytes, of a copy job.
", "DescribeBackupJobOutput$BackupSizeInBytes": "The size, in bytes, of a backup.
", "DescribeBackupJobOutput$BytesTransferred": "The size in bytes transferred to a backup vault at the time that the job status was queried.
", "DescribeRecoveryPointOutput$BackupSizeInBytes": "The size, in bytes, of a backup.
", "DescribeRestoreJobOutput$BackupSizeInBytes": "The size, in bytes, of the restored resource.
", "DescribeRestoreJobOutput$ExpectedCompletionTimeMinutes": "The amount of time in minutes that a job restoring a recovery point is expected to take.
", "Lifecycle$MoveToColdStorageAfterDays": "Specifies the number of days after creation that a recovery point is moved to cold storage.
", - "Lifecycle$DeleteAfterDays": "Specifies the number of days after creation that a recovery point is deleted. Must be greater than MoveToColdStorageAfterDays
.
Specifies the number of days after creation that a recovery point is deleted. Must be greater than 90 days plus MoveToColdStorageAfterDays
.
The size, in bytes, of a backup.
", "RecoveryPointByResource$BackupSizeBytes": "The size, in bytes, of a backup.
", "RestoreJobsListMember$BackupSizeInBytes": "The size, in bytes, of the restored resource.
", @@ -742,6 +813,7 @@ "ListBackupPlansInput$MaxResults": "The maximum number of items to be returned.
", "ListBackupSelectionsInput$MaxResults": "The maximum number of items to be returned.
", "ListBackupVaultsInput$MaxResults": "The maximum number of items to be returned.
", + "ListCopyJobsInput$MaxResults": "The maximum number of items to be returned.
", "ListProtectedResourcesInput$MaxResults": "The maximum number of items to be returned.
", "ListRecoveryPointsByBackupVaultInput$MaxResults": "The maximum number of items to be returned.
", "ListRecoveryPointsByResourceInput$MaxResults": "The maximum number of items to be returned.
", @@ -752,8 +824,8 @@ "Metadata": { "base": null, "refs": { - "GetRecoveryPointRestoreMetadataOutput$RestoreMetadata": "A set of metadata key-value pairs that lists the metadata key-value pairs that are required to restore the recovery point.
", - "StartRestoreJobInput$Metadata": "A set of metadata key-value pairs. Lists the metadata that the recovery point was created with.
" + "GetRecoveryPointRestoreMetadataOutput$RestoreMetadata": "The set of metadata key-value pairs that describes the original configuration of the backed-up resource. These values vary depending on the service that is being restored.
", + "StartRestoreJobInput$Metadata": "A set of metadata key-value pairs. Contains information, such as a resource name, required to restore a recovery point.
You can get configuration metadata about a resource at the time it was backed-up by calling GetRecoveryPointRestoreMetadata. However, values in addition to those provided by GetRecoveryPointRestoreMetadata might be required to restore a resource. For example, you might need to provide a new resource name if the original already exists.
You need to specify specific metadata to restore an Amazon Elastic File System (Amazon EFS) instance:
file-system-id: ID of the Amazon EFS file system that is backed up by AWS Backup. Returned in GetRecoveryPointRestoreMetadata.
Encrypted: A Boolean value that, if true, specifies that the file system is encrypted. If KmsKeyId is specified, Encrypted must be set to true.
KmsKeyId: Specifies the AWS KMS key that is used to encrypt the restored file system.
PerformanceMode: Specifies the throughput mode of the file system.
CreationToken: A user-supplied value that ensures the uniqueness (idempotency) of the request.
newFileSystem: A Boolean value that, if true, specifies that the recovery point is restored to a new Amazon EFS file system.
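A hedged sketch of an EFS restore that supplies the metadata keys just listed. The operation and field names match the StartRestoreJob entries elsewhere in this file; the ARNs and metadata values are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/backup"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := backup.New(cfg)

	// Restore the recovery point into a brand-new EFS file system using
	// the metadata keys documented above.
	req := svc.StartRestoreJobRequest(&backup.StartRestoreJobInput{
		RecoveryPointArn: aws.String("arn:aws:backup:us-east-1:123456789012:recovery-point:EXAMPLE"),
		IamRoleArn:       aws.String("arn:aws:iam::123456789012:role/S3Access"),
		Metadata: map[string]string{
			"file-system-id":  "fs-0123456789abcdef0", // from GetRecoveryPointRestoreMetadata
			"Encrypted":       "false",
			"PerformanceMode": "generalPurpose",
			"CreationToken":   "restore-2020-01-30", // idempotency for the new file system
			"newFileSystem":   "true",
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```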
Contains information about the backup plan and rule that AWS Backup used to initiate the recovery point backup.
", "refs": { "BackupJob$CreatedBy": "Contains identifying information about the creation of a backup job, including the BackupPlanArn
, BackupPlanId, BackupPlanVersion, and BackupRuleId of the backup plan used to create it.
Contains identifying information about the creation of a backup job, including the BackupPlanArn, BackupPlanId, BackupPlanVersion, and BackupRuleId of the backup plan that is used to create it.
Contains identifying information about the creation of a recovery point, including the BackupPlanArn, BackupPlanId, BackupPlanVersion, and BackupRuleId of the backup plan used to create it.
Contains identifying information about the creation of a recovery point, including the BackupPlanArn, BackupPlanId, BackupPlanVersion, and BackupRuleId of the backup plan that is used to create it.
An array of strings that either contain Amazon Resource Names (ARNs) or match patterns such as \"arn:aws:ec2:us-east-1:123456789012:volume/*\" of resources to assign to a backup plan.
An array of strings that contain Amazon Resource Names (ARNs) of resources to assign to a backup plan.
" } }, "ResourceNotFoundException": { @@ -851,21 +924,23 @@ "base": null, "refs": { "BackupJob$ResourceType": "The type of AWS resource to be backed-up; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.
", + "CopyJob$ResourceType": "The type of AWS resource to be copied; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.
", "DescribeBackupJobOutput$ResourceType": "The type of AWS resource to be backed-up; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.
", "DescribeProtectedResourceOutput$ResourceType": "The type of AWS resource saved as a recovery point; for example, an EBS volume or an Amazon RDS database.
", "DescribeRecoveryPointOutput$ResourceType": "The type of AWS resource to save as a recovery point; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.
", - "ListBackupJobsInput$ByResourceType": "Returns only backup jobs for the specified resources:
EBS for Amazon Elastic Block Store; SGW for AWS Storage Gateway; RDS for Amazon Relational Database Service; DDB for Amazon DynamoDB; EFS for Amazon Elastic File System
Returns only backup jobs for the specified resources: DynamoDB for Amazon DynamoDB; EBS for Amazon Elastic Block Store; EFS for Amazon Elastic File System; RDS for Amazon Relational Database Service; Storage Gateway for AWS Storage Gateway
Returns only backup jobs for the specified resources: DynamoDB for Amazon DynamoDB; EBS for Amazon Elastic Block Store; EFS for Amazon Elastic File System; RDS for Amazon Relational Database Service; Storage Gateway for AWS Storage Gateway
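For illustration, a hedged sketch that filters backup jobs by the renamed resource-type identifier. Client construction is omitted, and the `*backup.Client` type name and `BackupJobs` response field are assumptions from the SDK's usual codegen:

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/backup"
)

// listDynamoDBBackupJobs filters backup jobs by the renamed resource-type
// identifier "DynamoDB" (previously "DDB" in the old doc string).
func listDynamoDBBackupJobs(ctx context.Context, svc *backup.Client) ([]backup.BackupJob, error) {
	req := svc.ListBackupJobsRequest(&backup.ListBackupJobsInput{
		ByResourceType: aws.String("DynamoDB"),
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.BackupJobs, nil
}
```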
Returns only recovery points that match the specified resource type.
", "ProtectedResource$ResourceType": "The type of AWS resource; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.
", "RecoveryPointByBackupVault$ResourceType": "The type of AWS resource saved as a recovery point; for example, an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon RDS) database.
", "ResourceTypes$member": null, - "StartRestoreJobInput$ResourceType": "Starts a job to restore a recovery point for one of the following resources:
EBS for Amazon Elastic Block Store; SGW for AWS Storage Gateway; RDS for Amazon Relational Database Service; DDB for Amazon DynamoDB; EFS for Amazon Elastic File System
Starts a job to restore a recovery point for one of the following resources: EBS for Amazon Elastic Block Store; Storage Gateway for AWS Storage Gateway; RDS for Amazon Relational Database Service; DDB for Amazon DynamoDB; EFS for Amazon Elastic File System
Contains a string with the supported AWS resource types: EBS for Amazon Elastic Block Store; SGW for AWS Storage Gateway; RDS for Amazon Relational Database Service; DDB for Amazon DynamoDB; EFS for Amazon Elastic File System
Contains a string with the supported AWS resource types: EBS for Amazon Elastic Block Store; Storage Gateway for AWS Storage Gateway; RDS for Amazon Relational Database Service; DDB for Amazon DynamoDB; EFS for Amazon Elastic File System
Uniquely identifies a backup plan.
", "BackupSelectionsListMember$CreatorRequestId": "A unique string that identifies the request and allows failed requests to be retried without the risk of executing the operation twice.
", "BackupVaultListMember$CreatorRequestId": "A unique string that identifies the request and allows failed requests to be retried without the risk of executing the operation twice.
", + "CopyJob$CopyJobId": "Uniquely identifies a request to AWS Backup to copy a resource.
", + "CopyJob$StatusMessage": "A detailed message explaining the status of the job that to copy a resource.
", "CreateBackupPlanInput$CreatorRequestId": "Identifies the request and allows failed requests to be retried without the risk of executing the operation twice. If the request includes a CreatorRequestId
that matches an existing backup plan, that plan is returned. This parameter is optional.
Uniquely identifies a backup plan.
", "CreateBackupPlanOutput$VersionId": "Unique, randomly generated, Unicode, UTF-8 encoded strings that are at most 1024 bytes long. They cannot be edited.
", @@ -1062,6 +1149,7 @@ "DescribeBackupVaultInput$BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the AWS Region where they are created. They consist of lowercase letters, numbers, and hyphens.
", "DescribeBackupVaultOutput$BackupVaultName": "The name of a logical container where backups are stored. Backup vaults are identified by names that are unique to the account used to create them and the Region where they are created. They consist of lowercase letters, numbers, and hyphens.
", "DescribeBackupVaultOutput$CreatorRequestId": "A unique string that identifies the request and allows failed requests to be retried without the risk of executing the operation twice.
", + "DescribeCopyJobInput$CopyJobId": "Uniquely identifies a request to AWS Backup to copy a resource.
", "DescribeRestoreJobOutput$RestoreJobId": "Uniquely identifies the job that restores a recovery point.
", "DescribeRestoreJobOutput$StatusMessage": "A detailed message explaining the status of a job to restore a recovery point.
", "DescribeRestoreJobOutput$PercentDone": "Contains an estimated percentage that is complete of a job at the time the job status was queried.
", @@ -1105,6 +1193,9 @@ "ListBackupSelectionsOutput$NextToken": "The next item following a partial list of returned items. For example, if a request is made to return maxResults
number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
", + "ListCopyJobsInput$ByDestinationVaultArn": "An Amazon Resource Name (ARN) that uniquely identifies a source backup vault to copy from; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.
", + "ListCopyJobsOutput$NextToken": "The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
", "ListProtectedResourcesInput$NextToken": "The next item following a partial list of returned items. For example, if a request is made to return maxResults
number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
The next item following a partial list of returned items. For example, if a request is made to return maxResults number of items, NextToken allows you to return more items in your list starting at the location pointed to by the next token.
A customer chosen string that can be used to distinguish between calls to StartBackupJob
. Idempotency tokens time out after one hour. Therefore, if you call StartBackupJob
multiple times with the same idempotency token within one hour, AWS Backup recognizes that you are requesting only one backup job and initiates only one. If you change the idempotency token for each call, AWS Backup recognizes that you are requesting to start multiple backups.
A customer chosen string that can be used to distinguish between calls to StartBackupJob
.
Uniquely identifies a request to AWS Backup to back up a resource.
", - "StartRestoreJobInput$IdempotencyToken": "A customer chosen string that can be used to distinguish between calls to StartRestoreJob
. Idempotency tokens time out after one hour. Therefore, if you call StartRestoreJob
multiple times with the same idempotency token within one hour, AWS Backup recognizes that you are requesting only one restore job and initiates only one. If you change the idempotency token for each call, AWS Backup recognizes that you are requesting to start multiple restores.
A customer chosen string that can be used to distinguish between calls to StartCopyJob
.
Uniquely identifies a request to AWS Backup to copy a resource.
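Putting the copy-job pieces together, a hedged StartCopyJob sketch with a fixed IdempotencyToken so a retried call is recognized as the same request. The DestinationBackupVaultArn field name is an assumption (only the list-filter variant appears in this diff), and all ARNs are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/backup"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := backup.New(cfg)

	// Copy a recovery point from the Default vault into another vault.
	req := svc.StartCopyJobRequest(&backup.StartCopyJobInput{
		RecoveryPointArn:          aws.String("arn:aws:backup:us-east-1:123456789012:recovery-point:EXAMPLE"),
		SourceBackupVaultName:     aws.String("Default"),
		DestinationBackupVaultArn: aws.String("arn:aws:backup:us-west-2:123456789012:backup-vault:CopyVault"), // assumed field name
		IamRoleArn:                aws.String("arn:aws:iam::123456789012:role/S3Access"),
		IdempotencyToken:          aws.String("copy-2020-01-30"), // reuse on retry
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	if resp.CopyJobId != nil {
		fmt.Println("copy job started:", *resp.CopyJobId) // poll status with DescribeCopyJob
	}
}
```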
", + "StartRestoreJobInput$IdempotencyToken": "A customer chosen string that can be used to distinguish between calls to StartRestoreJob
.
Uniquely identifies a request to AWS Backup to back up a resource.
", "TagKeyList$member": null, "UpdateBackupPlanInput$BackupPlanId": "Uniquely identifies a backup plan.
", @@ -1158,6 +1251,8 @@ "BackupVaultListMember$CreationDate": "The date and time a resource backup is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
A timestamp that specifies when to transition a recovery point to cold storage.
", "CalculatedLifecycle$DeleteAt": "A timestamp that specifies when to delete a recovery point.
", + "CopyJob$CreationDate": "The date and time a copy job is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
", + "CopyJob$CompletionDate": "The date and time a job to create a copy job is completed, in Unix format and Coordinated Universal Time (UTC). The value of CompletionDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
", "CreateBackupPlanOutput$CreationDate": "The date and time that a backup plan is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
The date and time a backup selection is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
The date and time a backup vault is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
The date and time a backup selection is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
Returns only backup jobs that were created before the specified date.
", "ListBackupJobsInput$ByCreatedAfter": "Returns only backup jobs that were created after the specified date.
", + "ListCopyJobsInput$ByCreatedBefore": "Returns only copy jobs that were created before the specified date.
", + "ListCopyJobsInput$ByCreatedAfter": "Returns only copy jobs that were created after the specified date.
", "ListRecoveryPointsByBackupVaultInput$ByCreatedBefore": "Returns only recovery points that were created before the specified timestamp.
", "ListRecoveryPointsByBackupVaultInput$ByCreatedAfter": "Returns only recovery points that were created after the specified timestamp.
", "ProtectedResource$LastBackupTime": "The date and time a resource was last backed up, in Unix format and Coordinated Universal Time (UTC). The value of LastBackupTime
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
The date and time a restore job is created, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
The date and time a job to restore a recovery point is completed, in Unix format and Coordinated Universal Time (UTC). The value of CompletionDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
The date and time that a backup job is started, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
The date and time that a backup job is started, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
", "UpdateBackupPlanOutput$CreationDate": "The date and time a backup plan is updated, in Unix format and Coordinated Universal Time (UTC). The value of CreationDate
is accurate to milliseconds. For example, the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087 AM.
Creates an AWS Batch job queue. When you create a job queue, you associate one or more compute environments to the queue and assign an order of preference for the compute environments.
You also set a priority to the job queue that determines the order in which the AWS Batch scheduler places jobs onto its associated compute environments. For example, if a compute environment is associated with more than one job queue, the job queue with a higher priority is given preference for scheduling jobs to that compute environment.
", "DeleteComputeEnvironment": "Deletes an AWS Batch compute environment.
Before you can delete a compute environment, you must set its state to DISABLED
with the UpdateComputeEnvironment API operation and disassociate it from any job queues with the UpdateJobQueue API operation.
Deletes the specified job queue. You must first disable submissions for a queue with the UpdateJobQueue operation. All jobs in the queue are terminated when you delete a job queue.
It is not necessary to disassociate compute environments from a queue before submitting a DeleteJobQueue
request.
Deregisters an AWS Batch job definition.
", + "DeregisterJobDefinition": "Deregisters an AWS Batch job definition. Job definitions will be permanently deleted after 180 days.
", "DescribeComputeEnvironments": "Describes one or more of your compute environments.
If you are using an unmanaged compute environment, you can use the DescribeComputeEnvironments operation to determine the ecsClusterArn that you should launch your Amazon ECS container instances into.
Describes a list of job definitions. You can specify a status
(such as ACTIVE
) to only return job definitions that match that status.
Describes one or more of your job queues.
", @@ -104,7 +104,7 @@ "CRAllocationStrategy": { "base": null, "refs": { - "ComputeResource$allocationStrategy": "The allocation strategy to use for the compute resource in case not enough instances of the best fitting instance type can be allocated. This could be due to availability of the instance type in the region or Amazon EC2 service limits. If this is not specified, the default is BEST_FIT
, which will use only the best fitting instance type, waiting for additional capacity if it's not available. This allocation strategy keeps costs lower but can limit scaling. BEST_FIT_PROGRESSIVE
will select an additional instance type that is large enough to meet the requirements of the jobs in the queue, with a preference for an instance type with a lower cost. SPOT_CAPACITY_OPTIMIZED
is only available for Spot Instance compute resources and will select an additional instance type that is large enough to meet the requirements of the jobs in the queue, with a preference for an instance type that is less likely to be interrupted.
The allocation strategy to use for the compute resource in case not enough instances of the best fitting instance type can be allocated. This could be due to availability of the instance type in the region or Amazon EC2 service limits. If this is not specified, the default is BEST_FIT
, which will use only the best fitting instance type, waiting for additional capacity if it's not available. This allocation strategy keeps costs lower but can limit scaling. If you are using Spot Fleets with BEST_FIT
then the Spot Fleet IAM Role must be specified. BEST_FIT_PROGRESSIVE
will select additional instance types that are large enough to meet the requirements of the jobs in the queue, with a preference for instance types with a lower cost per vCPU. SPOT_CAPACITY_OPTIMIZED
is only available for Spot Instance compute resources and will select additional instance types that are large enough to meet the requirements of the jobs in the queue, with a preference for instance types that are less likely to be interrupted. For more information, see Allocation Strategies in the AWS Batch User Guide.
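A hedged sketch of a managed compute environment that opts into the new BEST_FIT_PROGRESSIVE strategy described above. The CEType, CRType, and CRAllocationStrategy constant names are assumed from the SDK's enum naming conventions, and the role and subnet values are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/batch"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := batch.New(cfg)

	// BEST_FIT_PROGRESSIVE selects progressively larger instance types,
	// preferring the lowest cost per vCPU, as described above.
	req := svc.CreateComputeEnvironmentRequest(&batch.CreateComputeEnvironmentInput{
		ComputeEnvironmentName: aws.String("example-ce"),
		Type:                   batch.CETypeManaged, // assumed constant name
		ServiceRole:            aws.String("arn:aws:iam::123456789012:role/AWSBatchServiceRole"),
		ComputeResources: &batch.ComputeResource{
			Type:               batch.CRTypeEc2,                             // assumed constant name
			AllocationStrategy: batch.CRAllocationStrategyBestFitProgressive, // assumed constant name
			MinvCpus:           aws.Int64(0),
			MaxvCpus:           aws.Int64(64),
			InstanceTypes:      []string{"optimal"},
			InstanceRole:       aws.String("ecsInstanceRole"),
			Subnets:            []string{"subnet-0123456789abcdef0"},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```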
A list of job names or IDs on which this job depends.
", + "JobDetail$dependsOn": "A list of job IDs on which this job depends.
", "SubmitJobRequest$dependsOn": "A list of dependencies for the job. A job can depend upon a maximum of 20 jobs. You can specify a SEQUENTIAL
type dependency without specifying a job ID for array jobs so that each child array job completes sequentially, starting at index 0. You can also specify an N_TO_N
type dependency with a job ID for array jobs. In that case, each index child of this job must wait for the corresponding index child of each dependency to complete before it can begin.
The Amazon EC2 key pair that is used for instances launched in the compute environment.
", "ComputeResource$instanceRole": "The Amazon ECS instance profile applied to Amazon EC2 instances in a compute environment. You can specify the short name or full Amazon Resource Name (ARN) of an instance profile. For example, ecsInstanceRole
or arn:aws:iam::<aws_account_id>:instance-profile/ecsInstanceRole
. For more information, see Amazon ECS Instance Role in the AWS Batch User Guide.
The Amazon EC2 placement group to associate with your compute resources. If you intend to submit multi-node parallel jobs to your compute environment, you should consider creating a cluster placement group and associate it with your compute resources. This keeps your multi-node parallel job on a logical grouping of instances within a single Availability Zone with high network flow potential. For more information, see Placement Groups in the Amazon EC2 User Guide for Linux Instances.
", - "ComputeResource$spotIamFleetRole": "The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied to a SPOT
compute environment. For more information, see Amazon EC2 Spot Fleet Role in the AWS Batch User Guide.
The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied to a SPOT
compute environment. This role is required if the allocation strategy set to BEST_FIT
or if the allocation strategy is not specified. For more information, see Amazon EC2 Spot Fleet Role in the AWS Batch User Guide.
The image used to start the container.
", "ContainerDetail$jobRoleArn": "The Amazon Resource Name (ARN) associated with the job upon execution.
", "ContainerDetail$user": "The user name to use inside the container.
", @@ -739,7 +739,7 @@ "StringList$member": null, "SubmitJobRequest$jobName": "The name of the job. The first character must be alphanumeric, and up to 128 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.
", "SubmitJobRequest$jobQueue": "The job queue into which the job is submitted. You can specify either the name or the Amazon Resource Name (ARN) of the queue.
", - "SubmitJobRequest$jobDefinition": "The job definition used by this job. This value can be either a name:revision
or the Amazon Resource Name (ARN) for the job definition.
The job definition used by this job. This value can be one of name
, name:revision
, or the Amazon Resource Name (ARN) for the job definition. If name
is specified without a revision then the latest active revision is used.
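A hedged SubmitJob sketch tying the two updated doc strings together: jobDefinition is given as a bare name, which now resolves to the latest active revision, and DependsOn carries a plain job-ID dependency. All names and IDs are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/batch"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := batch.New(cfg)

	// "render" resolves to the latest active revision; "render:3" or a
	// full ARN would pin a specific revision instead.
	req := svc.SubmitJobRequest(&batch.SubmitJobInput{
		JobName:       aws.String("render-frame-001"),
		JobQueue:      aws.String("default-queue"),
		JobDefinition: aws.String("render"),
		DependsOn: []batch.JobDependency{
			{JobId: aws.String("11111111-2222-3333-4444-555555555555")},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	if resp.JobId != nil {
		fmt.Println("submitted:", *resp.JobId)
	}
}
```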
The name of the job.
", "SubmitJobResponse$jobId": "The unique identifier for the job.
", "TagsMap$key": null, diff --git a/models/apis/ce/2017-10-25/docs-2.json b/models/apis/ce/2017-10-25/docs-2.json index 2385f0d0e26..5c3a0724755 100644 --- a/models/apis/ce/2017-10-25/docs-2.json +++ b/models/apis/ce/2017-10-25/docs-2.json @@ -2,9 +2,9 @@ "version": "2.0", "service": "The Cost Explorer API enables you to programmatically query your cost and usage data. You can query for aggregated data such as total monthly costs or total daily usage. You can also query for granular data, such as the number of daily write operations for Amazon DynamoDB database tables in your production environment.
Service Endpoint
The Cost Explorer API provides the following endpoint:
https://ce.us-east-1.amazonaws.com
For information about costs associated with the Cost Explorer API, see AWS Cost Management Pricing.
", "operations": { - "CreateCostCategoryDefinition": "Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Creates a new Cost Category with the requested name and rules.
", - "DeleteCostCategoryDefinition": "Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Deletes a Cost Category. Expenses from this month going forward will no longer be categorized with this Cost Category.
", - "DescribeCostCategoryDefinition": "Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Returns the name, ARN, rules, definition, and effective dates of a Cost Category that's defined in the account.
You have the option to use EffectiveOn
to return a Cost Category that is active on a specific date. If there is no EffectiveOn
specified, you’ll see a Cost Category that is effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Creates a new Cost Category with the requested name and rules.
", + "DeleteCostCategoryDefinition": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Deletes a Cost Category. Expenses from this month going forward will no longer be categorized with this Cost Category.
", + "DescribeCostCategoryDefinition": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Returns the name, ARN, rules, definition, and effective dates of a Cost Category that's defined in the account.
You have the option to use EffectiveOn
to return a Cost Category that is active on a specific date. If there is no EffectiveOn
specified, you’ll see a Cost Category that is effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response.
Retrieves cost and usage metrics for your account. You can specify which cost and usage-related metric, such as BlendedCosts
or UsageQuantity
, that you want the request to return. You can also filter and group your data by various dimensions, such as SERVICE
or AZ
, in a specific time range. For a complete list of valid dimensions, see the GetDimensionValues operation. Master accounts in an organization in AWS Organizations have access to all member accounts.
Retrieves cost and usage metrics with resources for your account. You can specify which cost and usage-related metric, such as BlendedCosts
or UsageQuantity
, that you want the request to return. You can also filter and group your data by various dimensions, such as SERVICE
or AZ
, in a specific time range. For a complete list of valid dimensions, see the GetDimensionValues operation. Master accounts in an organization in AWS Organizations have access to all member accounts. This API is currently available for the Amazon Elastic Compute Cloud – Compute service only.
This is an opt-in only feature. You can enable this feature from the Cost Explorer Settings page. For information on how to access the Settings page, see Controlling Access for Cost Explorer in the AWS Billing and Cost Management User Guide.
Retrieves a forecast for how much Amazon Web Services predicts that you will spend over the forecast time period that you select, based on your past costs.
", @@ -19,8 +19,8 @@ "GetSavingsPlansUtilizationDetails": "Retrieves attribute data along with aggregate utilization and savings data for a given time period. This doesn't support granular or grouped data (daily/monthly) in response. You can't retrieve data by dates in a single response similar to GetSavingsPlanUtilization
, but you have the option to make multiple calls to GetSavingsPlanUtilizationDetails
by providing individual dates. You can use GetDimensionValues
in SAVINGS_PLANS
to determine the possible dimension values.
GetSavingsPlanUtilizationDetails
internally groups data by SavingsPlansArn
.
Queries for available tag keys and tag values for a specified period. You can search the tag values for an arbitrary string.
", "GetUsageForecast": "Retrieves a forecast for how much Amazon Web Services predicts that you will use over the forecast time period that you select, based on your past usage.
", - "ListCostCategoryDefinitions": "Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Returns the name, ARN and effective dates of all Cost Categories defined in the account. You have the option to use EffectiveOn
to return a list of Cost Categories that were active on a specific date. If there is no EffectiveOn
specified, you’ll see Cost Categories that are effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response.
Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Updates an existing Cost Category. Changes made to the Cost Category rules will be used to categorize the current month’s expenses and future expenses. This won’t change categorization for the previous months.
" + "ListCostCategoryDefinitions": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Returns the name, ARN and effective dates of all Cost Categories defined in the account. You have the option to use EffectiveOn
to return a list of Cost Categories that were active on a specific date. If there is no EffectiveOn
specified, you’ll see Cost Categories that are effective on the current date. If Cost Category is still effective, EffectiveEnd
is omitted in the response.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Updates an existing Cost Category. Changes made to the Cost Category rules will be used to categorize the current month’s expenses and future expenses. This won’t change categorization for the previous months.
" }, "shapes": { "AccountScope": { @@ -89,7 +89,7 @@ } }, "CostCategory": { - "base": "Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The structure of Cost Categories. This includes detailed metadata and the set of rules for the CostCategory
object.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The structure of Cost Categories. This includes detailed metadata and the set of rules for the CostCategory
object.
Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
A reference to a Cost Category containing only enough information to identify the Cost Category.
You can use this information to retrieve the full Cost Category information using DescribeCostCategory
.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
A reference to a Cost Category containing only enough information to identify the Cost Category.
You can use this information to retrieve the full Cost Category information using DescribeCostCategory
.
Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", + "base": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", "refs": { "CostCategoryRulesList$member": null } @@ -133,8 +133,8 @@ "base": null, "refs": { "CostCategory$Rules": "Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", - "CreateCostCategoryDefinitionRequest$Rules": "Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", - "UpdateCostCategoryDefinitionRequest$Rules": "Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
" + "CreateCostCategoryDefinitionRequest$Rules": " CreateCostCategoryDefinition
supports dimensions, Tags, and nested expressions. Currently the only dimension supported is LINKED_ACCOUNT.
Root level OR is not supported. We recommend you create a separate rule instead.
Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
", + "UpdateCostCategoryDefinitionRequest$Rules": " UpdateCostCategoryDefinition
supports dimensions, Tags, and nested expressions. Currently the only dimension supported is LINKED_ACCOUNT.
Root level OR is not supported. We recommend you create a separate rule instead.
Rules are processed in order. If there are multiple rules that match the line item, then the first rule to match is used to determine that Cost Category value.
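A hedged sketch of the rule shape these doc strings describe: one LINKED_ACCOUNT dimension per rule, no root-level OR, first match wins. The "CostCategory.1" rule-version string and the type and constant names are assumptions:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/costexplorer"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := costexplorer.New(cfg)

	// One rule per Cost Category value; rules are evaluated in order and
	// the first match determines the value.
	req := svc.CreateCostCategoryDefinitionRequest(&costexplorer.CreateCostCategoryDefinitionInput{
		Name:        aws.String("Teams"),
		RuleVersion: costexplorer.CostCategoryRuleVersion("CostCategory.1"), // assumed version string
		Rules: []costexplorer.CostCategoryRule{
			{
				Value: aws.String("Platform"),
				Rule: &costexplorer.Expression{
					Dimensions: &costexplorer.DimensionValues{
						Key:    costexplorer.DimensionLinkedAccount, // the only supported dimension
						Values: []string{"123456789012"},
					},
				},
			},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```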
" } }, "CostCategoryValue": { @@ -144,9 +144,9 @@ } }, "CostCategoryValues": { - "base": "Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The values that are available for Cost Categories.
", + "base": "Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The values that are available for Cost Categories.
", "refs": { - "Expression$CostCategories": "Cost Category is in preview release for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The specific CostCategory
used for Expression
.
Cost Category is in public beta for AWS Billing and Cost Management and is subject to change. Your use of Cost Categories is subject to the Beta Service Participation terms of the AWS Service Terms (Section 1.10).
The specific CostCategory
used for Expression
.
Filters Amazon Web Services costs by different dimensions. For example, you can specify SERVICE and LINKED_ACCOUNT and get the costs that are associated with that account's usage of that service. You can nest Expression objects to define any combination of dimension filters. For more information, see Expression.
The GetCostAndUsageWithResources operation requires that you either group by or filter by a ResourceId.
The filters that you want to use to filter your forecast. Cost Explorer API supports all of the Cost Explorer filters.
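To make the nesting rules above concrete, a hedged GetCostAndUsage sketch that ANDs a SERVICE filter with a LINKED_ACCOUNT filter. The dimension constants, the "BlendedCost" metric name, and the account ID are assumptions or placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/costexplorer"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := costexplorer.New(cfg)

	// Nest Expressions under And to combine two dimension filters, as the
	// Expression documentation above describes.
	filter := &costexplorer.Expression{
		And: []costexplorer.Expression{
			{Dimensions: &costexplorer.DimensionValues{
				Key:    costexplorer.DimensionService,
				Values: []string{"Amazon Elastic Compute Cloud - Compute"},
			}},
			{Dimensions: &costexplorer.DimensionValues{
				Key:    costexplorer.DimensionLinkedAccount,
				Values: []string{"123456789012"},
			}},
		},
	}
	req := svc.GetCostAndUsageRequest(&costexplorer.GetCostAndUsageInput{
		TimePeriod: &costexplorer.DateInterval{
			Start: aws.String("2020-01-01"),
			End:   aws.String("2020-02-01"),
		},
		Granularity: costexplorer.GranularityMonthly,
		Metrics:     []string{"BlendedCost"},
		Filter:      filter,
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp.ResultsByTime)
}
```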
", "GetReservationCoverageRequest$Filter": "Filters utilization data by dimensions. You can filter by the following dimensions:
AZ, CACHE_ENGINE, DATABASE_ENGINE, DEPLOYMENT_OPTION, INSTANCE_TYPE, LINKED_ACCOUNT, OPERATING_SYSTEM, PLATFORM, REGION, SERVICE, TAG, TENANCY
GetReservationCoverage uses the same Expression object as the other operations, but only AND is supported among each dimension. You can nest only one level deep. If there are multiple values for a dimension, they are OR'd together.
If you don't provide a SERVICE filter, Cost Explorer defaults to EC2.
Filters utilization data by dimensions. You can filter by the following dimensions: AZ, CACHE_ENGINE, DATABASE_ENGINE, DEPLOYMENT_OPTION, INSTANCE_TYPE, LINKED_ACCOUNT, OPERATING_SYSTEM, PLATFORM, REGION, SERVICE, SCOPE, TENANCY
GetReservationUtilization uses the same Expression object as the other operations, but only AND is supported among each dimension, and nesting is supported up to only one level deep. If there are multiple values for a dimension, they are OR'd together.
Filters utilization data by dimensions. You can filter by the following dimensions: AZ, CACHE_ENGINE, DEPLOYMENT_OPTION, INSTANCE_TYPE, LINKED_ACCOUNT, OPERATING_SYSTEM, PLATFORM, REGION, SERVICE, SCOPE, TENANCY
GetReservationUtilization uses the same Expression object as the other operations, but only AND is supported among each dimension, and nesting is supported up to only one level deep. If there are multiple values for a dimension, they are OR'd together.
Filters Savings Plans coverage data by dimensions. You can filter data for Savings Plans usage with the following dimensions: LINKED_ACCOUNT, REGION, SERVICE, INSTANCE_FAMILY
GetSavingsPlansCoverage uses the same Expression object as the other operations, but only AND is supported among each dimension. If there are multiple values for a dimension, they are OR'd together.
Filters Savings Plans utilization coverage data for active Savings Plans dimensions. You can filter data with the following dimensions: LINKED_ACCOUNT, SAVINGS_PLAN_ARN, REGION, PAYMENT_OPTION, INSTANCE_TYPE_FAMILY
GetSavingsPlansUtilizationDetails uses the same Expression object as the other operations, but only AND is supported among each dimension.
Associates a phone number with the specified Amazon Chime user.
", "AssociatePhoneNumbersWithVoiceConnector": "Associates phone numbers with the specified Amazon Chime Voice Connector.
", "AssociatePhoneNumbersWithVoiceConnectorGroup": "Associates phone numbers with the specified Amazon Chime Voice Connector group.
", + "AssociateSigninDelegateGroupsWithAccount": "Associates the specified sign-in delegate groups with the specified Amazon Chime account.
", "BatchCreateAttendee": "Creates up to 100 new attendees for an active Amazon Chime SDK meeting. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
", "BatchCreateRoomMembership": "Adds up to 50 members to a chat room. Members can be either users or bots. The member role designates whether the member is a chat room administrator or a general chat room member.
", "BatchDeletePhoneNumber": "Moves phone numbers into the Deletion queue. Phone numbers must be disassociated from any users or Amazon Chime Voice Connectors before they can be deleted.
Phone numbers remain in the Deletion queue for 7 days before they are deleted permanently.
", - "BatchSuspendUser": "Suspends up to 50 users from a Team
or EnterpriseLWA
Amazon Chime account. For more information about different account types, see Managing Your Amazon Chime Accounts in the Amazon Chime Administration Guide.
Users suspended from a Team
account are dissasociated from the account, but they can continue to use Amazon Chime as free users. To remove the suspension from suspended Team
account users, invite them to the Team
account again. You can use the InviteUsers action to do so.
Users suspended from an EnterpriseLWA
account are immediately signed out of Amazon Chime and can no longer sign in. To remove the suspension from suspended EnterpriseLWA
account users, use the BatchUnsuspendUser action.
To sign out users without suspending them, use the LogoutUser action.
", + "BatchSuspendUser": "Suspends up to 50 users from a Team
or EnterpriseLWA
Amazon Chime account. For more information about different account types, see Managing Your Amazon Chime Accounts in the Amazon Chime Administration Guide.
Users suspended from a Team
account are disassociated from the account, but they can continue to use Amazon Chime as free users. To remove the suspension from suspended Team
account users, invite them to the Team
account again. You can use the InviteUsers action to do so.
Users suspended from an EnterpriseLWA
account are immediately signed out of Amazon Chime and can no longer sign in. To remove the suspension from suspended EnterpriseLWA
account users, use the BatchUnsuspendUser action.
To sign out users without suspending them, use the LogoutUser action.
", "BatchUnsuspendUser": "Removes the suspension from up to 50 previously suspended users for the specified Amazon Chime EnterpriseLWA
account. Only users on EnterpriseLWA
accounts can be unsuspended using this action. For more information about different account types, see Managing Your Amazon Chime Accounts in the Amazon Chime Administration Guide.
Previously suspended users who are unsuspended using this action are returned to Registered
status. Users who are not previously suspended are ignored.
Updates phone number product types or calling names. You can update one attribute at a time for each UpdatePhoneNumberRequestItem
. For example, you can update either the product type or the calling name.
For product types, choose from Amazon Chime Business Calling and Amazon Chime Voice Connector. For toll-free numbers, you must use the Amazon Chime Voice Connector product type.
Updates to outbound calling names can take up to 72 hours to complete. Pending updates to outbound calling names must be complete before you can request another update.
", "BatchUpdateUser": "Updates user details within the UpdateUserRequestItem object for up to 20 users for the specified Amazon Chime account. Currently, only LicenseType
updates are supported for this action.
Creates an order for phone numbers to be provisioned. Choose from Amazon Chime Business Calling and Amazon Chime Voice Connector product types. For toll-free numbers, you must use the Amazon Chime Voice Connector product type.
", "CreateRoom": "Creates a chat room for the specified Amazon Chime account.
", "CreateRoomMembership": "Adds a member to a chat room. A member can be either a user or a bot. The member role designates whether the member is a chat room administrator or a general chat room member.
", + "CreateUser": "Creates a user under the specified Amazon Chime account.
", "CreateVoiceConnector": "Creates an Amazon Chime Voice Connector under the administrator's AWS account. You can choose to create an Amazon Chime Voice Connector in a specific AWS Region.
Enabling CreateVoiceConnectorRequest$RequireEncryption configures your Amazon Chime Voice Connector to use TLS transport for SIP signaling and Secure RTP (SRTP) for media. Inbound calls use TLS transport, and unencrypted outbound calls are blocked.
", "CreateVoiceConnectorGroup": "Creates an Amazon Chime Voice Connector group under the administrator's AWS account. You can associate up to three existing Amazon Chime Voice Connectors with the Amazon Chime Voice Connector group by including VoiceConnectorItems
in the request.
You can include Amazon Chime Voice Connectors from different AWS Regions in your group. This creates a fault tolerant mechanism for fallback in case of availability events.
", "DeleteAccount": "Deletes the specified Amazon Chime account. You must suspend all users before deleting a Team
account. You can use the BatchSuspendUser action to do so.
For EnterpriseLWA
and EnterpriseAD
accounts, you must release the claimed domains for your Amazon Chime account before deletion. As soon as you release the domain, all users under that account are suspended.
Deleted accounts appear in your Disabled
accounts list for 90 days. To restore a deleted account from your Disabled
accounts list, you must contact AWS Support.
After 90 days, deleted accounts are permanently removed from your Disabled
accounts list.
Disassociates the primary provisioned phone number from the specified Amazon Chime user.
", "DisassociatePhoneNumbersFromVoiceConnector": "Disassociates the specified phone numbers from the specified Amazon Chime Voice Connector.
", "DisassociatePhoneNumbersFromVoiceConnectorGroup": "Disassociates the specified phone numbers from the specified Amazon Chime Voice Connector group.
", + "DisassociateSigninDelegateGroupsFromAccount": "Disassociates the specified sign-in delegate groups from the specified Amazon Chime account.
", "GetAccount": "Retrieves details for the specified Amazon Chime account, such as account type and supported licenses.
", "GetAccountSettings": "Retrieves account settings for the specified Amazon Chime account ID, such as remote control and dial out settings. For more information about these settings, see Use the Policies Page in the Amazon Chime Administration Guide.
", "GetAttendee": "Gets the Amazon Chime SDK attendee details for a specified meeting ID and attendee ID. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
", @@ -47,7 +50,7 @@ "GetPhoneNumber": "Retrieves details for the specified phone number ID, such as associations, capabilities, and product type.
", "GetPhoneNumberOrder": "Retrieves details for the specified phone number order, such as order creation timestamp, phone numbers in E.164 format, product type, and order status.
", "GetPhoneNumberSettings": "Retrieves the phone number settings for the administrator's AWS account, such as the default outbound calling name.
", - "GetRoom": "Retrieves room details, such as name.
", + "GetRoom": "Retrieves room details, such as the room name.
", "GetUser": "Retrieves details for the specified user ID, such as primary email address, license type, and personal meeting PIN.
To retrieve user details with an email address instead of a user ID, use the ListUsers action, and then filter by email address.
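A minimal sketch of the email-based lookup suggested above, again assuming this release's request/`Send` pattern; the account ID and email address are placeholder values:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/chime"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := chime.New(cfg)

	// Look up a user by email address instead of user ID via ListUsers;
	// the UserEmail filter returns at most one match.
	req := svc.ListUsersRequest(&chime.ListUsersInput{
		AccountId: aws.String("12a3456b-7c89-012d-3456-78901e23fg45"), // placeholder
		UserEmail: aws.String("jane.doe@example.com"),                 // placeholder
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, u := range resp.Users {
		fmt.Println(*u.UserId)
	}
}
```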
", "GetUserSettings": "Retrieves settings for the specified user ID, such as any associated phone number settings.
", "GetVoiceConnector": "Retrieves details for the specified Amazon Chime Voice Connector, such as timestamps, name, outbound host, and encryption requirements.
", @@ -64,7 +67,7 @@ "ListMeetings": "Lists up to 100 active Amazon Chime SDK meetings. For more information about the Amazon Chime SDK, see Using the Amazon Chime SDK in the Amazon Chime Developer Guide.
", "ListPhoneNumberOrders": "Lists the phone number orders for the administrator's Amazon Chime account.
", "ListPhoneNumbers": "Lists the phone numbers for the specified Amazon Chime account, Amazon Chime user, Amazon Chime Voice Connector, or Amazon Chime Voice Connector group.
", - "ListRoomMemberships": "Lists the membership details for the specified room, such as member IDs, member email addresses, and member names.
", + "ListRoomMemberships": "Lists the membership details for the specified room, such as the members' IDs, email addresses, and names.
", "ListRooms": "Lists the room details for the specified Amazon Chime account. Optionally, filter the results by a member ID (user ID or bot ID) to see a list of rooms that the member belongs to.
", "ListUsers": "Lists the users that belong to the specified Amazon Chime account. You can specify an email address to list only the user that the email address belongs to.
", "ListVoiceConnectorGroups": "Lists the Amazon Chime Voice Connector groups for the administrator's AWS account.
", @@ -88,7 +91,7 @@ "UpdatePhoneNumber": "Updates phone number details, such as product type or calling name, for the specified phone number ID. You can update one phone number detail at a time. For example, you can update either the product type or the calling name in one action.
For toll-free numbers, you must use the Amazon Chime Voice Connector product type.
Updates to outbound calling names can take up to 72 hours to complete. Pending updates to outbound calling names must be complete before you can request another update.
", "UpdatePhoneNumberSettings": "Updates the phone number settings for the administrator's AWS account, such as the default outbound calling name. You can update the default outbound calling name once every seven days. Outbound calling names can take up to 72 hours to update.
", "UpdateRoom": "Updates room details, such as the room name.
", - "UpdateRoomMembership": "Updates room membership details, such as member role. The member role designates whether the member is a chat room administrator or a general chat room member. Member role can only be updated for user IDs.
", + "UpdateRoomMembership": "Updates room membership details, such as the member role. The member role designates whether the member is a chat room administrator or a general chat room member. The member role can be updated only for user IDs.
", "UpdateUser": "Updates user details for a specified user ID. Currently, only LicenseType
updates are supported for this action.
Updates the settings for the specified user, such as phone number settings.
", "UpdateVoiceConnector": "Updates details for the specified Amazon Chime Voice Connector.
", @@ -136,6 +139,14 @@ "Account$AccountType": "The Amazon Chime account type. For more information about different account types, see Managing Your Amazon Chime Accounts in the Amazon Chime Administration Guide.
" } }, + "AlexaForBusinessMetadata": { + "base": "The Alexa for Business metadata associated with an Amazon Chime user, used to integrate Alexa for Business with a device.
", + "refs": { + "UpdateUserRequest$AlexaForBusinessMetadata": "The Alexa for Business metadata.
", + "UpdateUserRequestItem$AlexaForBusinessMetadata": "The Alexa for Business metadata.
", + "User$AlexaForBusinessMetadata": "The Alexa for Business metadata.
" + } + }, "Arn": { "base": null, "refs": { @@ -173,6 +184,16 @@ "refs": { } }, + "AssociateSigninDelegateGroupsWithAccountRequest": { + "base": null, + "refs": { + } + }, + "AssociateSigninDelegateGroupsWithAccountResponse": { + "base": null, + "refs": { + } + }, "Attendee": { "base": "An Amazon Chime SDK meeting attendee. Includes a unique AttendeeId
and JoinToken
. The JoinToken
allows a client to authenticate and join as the specified attendee. The JoinToken
expires when the meeting ends or when DeleteAttendee is called. After that, the attendee is unable to join the meeting.
We recommend securely transferring each JoinToken
from your server application to the client so that no other client has access to the token except for the one authorized to represent the attendee.
", "AccountSettings$EnableRemoteControl": "Setting that stops or starts remote control of shared screens during meetings.&#xA;
", "AccountSettings$EnableDialOut": "Setting that allows meeting participants to choose the Call me at a phone number option. For more information, see Join a Meeting without the Amazon Chime App.
", + "AlexaForBusinessMetadata$IsAlexaForBusinessEnabled": "Starts or stops Alexa for Business.
", "CreateVoiceConnectorRequest$RequireEncryption": "When enabled, requires encryption for the Amazon Chime Voice Connector.
", "LoggingConfiguration$EnableSIPLogs": "When true, enables SIP message logs for sending to Amazon CloudWatch Logs.
", "Origination$Disabled": "When origination settings are disabled, inbound calls are not enabled for your Amazon Chime Voice Connector.
", @@ -449,6 +471,16 @@ "refs": { } }, + "CreateUserRequest": { + "base": null, + "refs": { + } + }, + "CreateUserResponse": { + "base": null, + "refs": { + } + }, "CreateVoiceConnectorGroupRequest": { "base": null, "refs": { @@ -587,6 +619,16 @@ "refs": { } }, + "DisassociateSigninDelegateGroupsFromAccountRequest": { + "base": null, + "refs": { + } + }, + "DisassociateSigninDelegateGroupsFromAccountResponse": { + "base": null, + "refs": { + } + }, "E164PhoneNumber": { "base": null, "refs": { @@ -611,6 +653,7 @@ "EmailAddress": { "base": null, "refs": { + "CreateUserRequest$Email": "The user's email address.
", "Invite$EmailAddress": "The email address to which the invite is sent.
", "ListAccountsRequest$UserEmail": "User email address with which to filter results.
", "ListUsersRequest$UserEmail": "Optional. The user email address used to filter results. Maximum 1.
", @@ -1154,6 +1197,7 @@ "refs": { "AssociatePhoneNumbersWithVoiceConnectorGroupRequest$VoiceConnectorGroupId": "The Amazon Chime Voice Connector group ID.
", "AssociatePhoneNumbersWithVoiceConnectorRequest$VoiceConnectorId": "The Amazon Chime Voice Connector ID.
", + "AssociateSigninDelegateGroupsWithAccountRequest$AccountId": "The Amazon Chime account ID.
", "BatchCreateRoomMembershipRequest$AccountId": "The Amazon Chime account ID.
", "BatchCreateRoomMembershipRequest$RoomId": "The room ID.
", "BatchSuspendUserRequest$AccountId": "The Amazon Chime account ID.
", @@ -1165,6 +1209,7 @@ "CreateRoomMembershipRequest$RoomId": "The room ID.
", "CreateRoomMembershipRequest$MemberId": "The Amazon Chime member ID (user ID or bot ID).
", "CreateRoomRequest$AccountId": "The Amazon Chime account ID.
", + "CreateUserRequest$AccountId": "The Amazon Chime account ID.
", "DeleteAccountRequest$AccountId": "The Amazon Chime account ID.
", "DeleteEventsConfigurationRequest$AccountId": "The Amazon Chime account ID.
", "DeleteEventsConfigurationRequest$BotId": "The bot ID.
", @@ -1181,6 +1226,7 @@ "DeleteVoiceConnectorTerminationRequest$VoiceConnectorId": "The Amazon Chime Voice Connector ID.
", "DisassociatePhoneNumbersFromVoiceConnectorGroupRequest$VoiceConnectorGroupId": "The Amazon Chime Voice Connector group ID.
", "DisassociatePhoneNumbersFromVoiceConnectorRequest$VoiceConnectorId": "The Amazon Chime Voice Connector ID.
", + "DisassociateSigninDelegateGroupsFromAccountRequest$AccountId": "The Amazon Chime account ID.
", "GetAccountRequest$AccountId": "The Amazon Chime account ID.
", "GetAccountSettingsRequest$AccountId": "The Amazon Chime account ID.
", "GetBotRequest$AccountId": "The Amazon Chime account ID.
", @@ -1229,6 +1275,7 @@ "Room$CreatedBy": "The identifier of the room creator.
", "RoomMembership$RoomId": "The room ID.
", "RoomMembership$InvitedBy": "The identifier of the user that invited the room member.
", + "SigninDelegateGroup$GroupName": "The group name.
", "UpdateAccountRequest$AccountId": "The Amazon Chime account ID.
", "UpdateAccountSettingsRequest$AccountId": "The Amazon Chime account ID.
", "UpdateBotRequest$AccountId": "The Amazon Chime account ID.
", @@ -1254,7 +1301,8 @@ "NonEmptyStringList": { "base": null, "refs": { - "BatchDeletePhoneNumberRequest$PhoneNumberIds": "List of phone number IDs.
" + "BatchDeletePhoneNumberRequest$PhoneNumberIds": "List of phone number IDs.
", + "DisassociateSigninDelegateGroupsFromAccountRequest$GroupNames": "The sign-in delegate group names.
" } }, "NotFoundException": { @@ -1614,6 +1662,7 @@ "SensitiveString": { "base": null, "refs": { + "AlexaForBusinessMetadata$AlexaForBusinessRoomArn": "The ARN of the room resource.
", "Bot$DisplayName": "The bot display name.
", "Bot$BotEmail": "The bot email address.
", "Bot$SecurityToken": "The security token used to authenticate Amazon Chime with the outgoing event endpoint.
", @@ -1651,6 +1700,19 @@ "refs": { } }, + "SigninDelegateGroup": { + "base": "An Active Directory (AD) group whose members are granted permission to act as delegates.
", + "refs": { + "SigninDelegateGroupList$member": null + } + }, + "SigninDelegateGroupList": { + "base": null, + "refs": { + "Account$SigninDelegateGroups": "The sign-in delegate groups associated with the account.
", + "AssociateSigninDelegateGroupsWithAccountRequest$SigninDelegateGroups": "The sign-in delegate groups.
" + } + }, "StreamingConfiguration": { "base": "The streaming configuration associated with an Amazon Chime Voice Connector. Specifies whether media streaming is enabled for sending to Amazon Kinesis, and shows the retention period for the Amazon Kinesis data, in hours.
", "refs": { @@ -1675,7 +1737,8 @@ "ConflictException$Message": null, "CreateAttendeeError$ErrorCode": "The error code.
", "CreateAttendeeError$ErrorMessage": "The error message.
", - "CreateMeetingRequest$MediaRegion": "The Region in which to create the meeting. Available values: us-east-1
, us-west-2
.
The Region in which to create the meeting. Available values: ap-northeast-1
, ap-southeast-1
, ap-southeast-2
, ca-central-1
, eu-central-1
, eu-north-1
, eu-west-1
, eu-west-2
, eu-west-3
, sa-east-1
, us-east-1
, us-east-2
, us-west-1
, us-west-2
.
The user name.
", "DeletePhoneNumberRequest$PhoneNumberId": "The phone number ID.
", "DisassociatePhoneNumberFromUserRequest$AccountId": "The Amazon Chime account ID.
", "DisassociatePhoneNumberFromUserRequest$UserId": "The user ID.
", @@ -1709,7 +1772,7 @@ "ListVoiceConnectorGroupsResponse$NextToken": "The token to use to retrieve the next page of results.
", "ListVoiceConnectorsRequest$NextToken": "The token to use to retrieve the next page of results.
", "ListVoiceConnectorsResponse$NextToken": "The token to use to retrieve the next page of results.
", - "Meeting$MediaRegion": "The Region in which to create the meeting. Available values: us-east-1
, us-west-2
.
The Region in which to create the meeting. Available values: ap-northeast-1
, ap-southeast-1
, ap-southeast-2
, ca-central-1
, eu-central-1
, eu-north-1
, eu-west-1
, eu-west-2
, eu-west-3
, sa-east-1
, us-east-1
, us-east-2
, us-west-1
, us-west-2
.
The error message.
", "NonEmptyStringList$member": null, "NotFoundException$Message": null, @@ -1931,6 +1994,7 @@ "User": { "base": "The user on the Amazon Chime account.
", "refs": { + "CreateUserResponse$User": null, "GetUserResponse$User": "The user details.
", "ResetPersonalPINResponse$User": "The user details and new personal meeting PIN.
", "UpdateUserResponse$User": "The updated user details.
", @@ -1977,6 +2041,17 @@ "UpdateUserSettingsRequest$UserSettings": "The user settings to update.
" } }, + "UserType": { + "base": null, + "refs": { + "CreateUserRequest$UserType": "The user type.
", + "InviteUsersRequest$UserType": "The user type.
", + "ListUsersRequest$UserType": "The user type.
", + "UpdateUserRequest$UserType": "The user type.
", + "UpdateUserRequestItem$UserType": "The user type.
", + "User$UserType": "The user type.
" + } + }, "VoiceConnector": { "base": "The Amazon Chime Voice Connector configuration, including outbound host name and encryption settings.
", "refs": { diff --git a/models/apis/cloudfront/2019-03-26/docs-2.json b/models/apis/cloudfront/2019-03-26/docs-2.json index d50a3558a74..fa1a9bd2de1 100644 --- a/models/apis/cloudfront/2019-03-26/docs-2.json +++ b/models/apis/cloudfront/2019-03-26/docs-2.json @@ -593,10 +593,10 @@ } }, "ForwardedValues": { - "base": "A complex type that specifies how CloudFront handles query strings and cookies.
", + "base": "A complex type that specifies how CloudFront handles query strings, cookies, and HTTP headers.
", "refs": { - "CacheBehavior$ForwardedValues": "A complex type that specifies how CloudFront handles query strings and cookies.
", - "DefaultCacheBehavior$ForwardedValues": "A complex type that specifies how CloudFront handles query strings and cookies.
" + "CacheBehavior$ForwardedValues": "A complex type that specifies how CloudFront handles query strings, cookies, and HTTP headers.
", + "DefaultCacheBehavior$ForwardedValues": "A complex type that specifies how CloudFront handles query strings, cookies, and HTTP headers.
" } }, "GeoRestriction": { @@ -763,7 +763,7 @@ "ICPRecordalStatus": { "base": null, "refs": { - "AliasICPRecordal$ICPRecordalStatus": "The Internet Content Provider (ICP) recordal status for a CNAME. The ICPRecordalStatus is set to APPROVED for all CNAMEs (aliases) in regions outside of China.
The status values returned are the following:
APPROVED indicates that the associated CNAME has a valid ICP recordal number. Multiple CNAMEs can be associated with a distribution, and CNAMEs can correspond to different ICP recordals. To be marked as APPROVED, that is, valid to use with China region, a CNAME must have one ICP recordal number associated with it.
SUSPENDED indicates that the associated CNAME does not have a valid ICP recordal number.
PENDING indicates that at least one CNAME associated with the distribution does not have a valid ICP recordal number.
The Internet Content Provider (ICP) recordal status for a CNAME. The ICPRecordalStatus is set to APPROVED for all CNAMEs (aliases) in regions outside of China.
The status values returned are the following:
APPROVED indicates that the associated CNAME has a valid ICP recordal number. Multiple CNAMEs can be associated with a distribution, and CNAMEs can correspond to different ICP recordals. To be marked as APPROVED, that is, valid to use with China region, a CNAME must have one ICP recordal number associated with it.
SUSPENDED indicates that the associated CNAME does not have a valid ICP recordal number.
PENDING indicates that CloudFront can't determine the ICP recordal status of the CNAME associated with the distribution because there was an error in trying to determine the status. You can try again to see if the error is resolved in which case CloudFront returns an APPROVED or SUSPENDED status.
A web ACL id specified in the response body is not valid.
", + "base": "A web ACL ID specified in the response body is not valid. To specify a web ACL created using the latest version of AWS WAF, use the ACL ARN, for example arn:aws:wafv2:us-east-1:123456789012:global/webacl/ExampleWebACL/473e64fd-f30b-4765-81a0-62ad96dd167a
. To specify a web ACL created using AWS WAF Classic, use the ACL ID, for example 473e64fd-f30b-4765-81a0-62ad96dd167a
.
Specify the security policy that you want CloudFront to use for HTTPS connections. A security policy determines two settings:
The minimum SSL/TLS protocol that CloudFront uses to communicate with viewers
The cipher that CloudFront uses to encrypt the content that it returns to viewers
On the CloudFront console, this setting is called Security policy.
We recommend that you specify TLSv1.1_2016
unless your users are using browsers or devices that do not support TLSv1.1 or later.
When both of the following are true, you must specify TLSv1
or later for the security policy:
You're using a custom certificate: you specified a value for ACMCertificateArn
or for IAMCertificateId
You're using SNI: you specified sni-only
for SSLSupportMethod
If you specify true
for CloudFrontDefaultCertificate
, CloudFront automatically sets the security policy to TLSv1
regardless of the value that you specify for MinimumProtocolVersion
.
For information about the relationship between the security policy that you choose and the protocols and ciphers that CloudFront uses to communicate with viewers, see Supported SSL/TLS Protocols and Ciphers for Communication Between Viewers and CloudFront in the Amazon CloudFront Developer Guide.
" + "ViewerCertificate$MinimumProtocolVersion": "Specify the security policy that you want CloudFront to use for HTTPS connections. A security policy determines two settings:
The minimum SSL/TLS protocol that CloudFront uses to communicate with viewers.
The cipher that CloudFront uses to encrypt the content that it returns to viewers.
On the CloudFront console, this setting is called Security Policy.
We recommend that you specify TLSv1.1_2016
unless your viewers are using browsers or devices that do not support TLSv1.1 or later.
When both of the following are true, you must specify TLSv1
or later for the security policy:
You're using a custom certificate; that is, you specified a value for ACMCertificateArn
or for IAMCertificateId
.
You're using SNI; that is, you specified sni-only
for SSLSupportMethod
.
If you specify true
for CloudFrontDefaultCertificate
, CloudFront automatically sets the security policy to TLSv1
regardless of the value that you specify here.
For information about the relationship between the security policy that you choose and the protocols and ciphers that CloudFront uses to communicate with viewers, see Supported SSL/TLS Protocols and Ciphers for Communication Between Viewers and CloudFront in the Amazon CloudFront Developer Guide.
" } }, "MissingBody": { @@ -1372,7 +1372,7 @@ "SSLSupportMethod": { "base": null, "refs": { - "ViewerCertificate$SSLSupportMethod": "If you specify a value for ACMCertificateArn or for IAMCertificateId, you must also specify how you want CloudFront to serve HTTPS requests: using a method that works for browsers and clients released after 2010 or one that works for all clients.
sni-only
: CloudFront can respond to HTTPS requests from viewers that support Server Name Indication (SNI). All modern browsers support SNI, but there are a few that don't. For a current list of the browsers that support SNI, see the Wikipedia entry Server Name Indication. To learn about options to explore if you have users with browsers that don't include SNI support, see Choosing How CloudFront Serves HTTPS Requests in the Amazon CloudFront Developer Guide.
vip
: CloudFront uses dedicated IP addresses for your content and can respond to HTTPS requests from any viewer. However, there are additional monthly charges. For details, including specific pricing information, see Custom SSL options for Amazon CloudFront on the AWS marketing site.
Don't specify a value for SSLSupportMethod
if you specified <CloudFrontDefaultCertificate>true<CloudFrontDefaultCertificate>
.
For more information, see Choosing How CloudFront Serves HTTPS Requests in the Amazon CloudFront Developer Guide.
" + "ViewerCertificate$SSLSupportMethod": "If you specify a value for ACMCertificateArn or for IAMCertificateId, you must also specify how you want CloudFront to serve HTTPS requests: using a method that works for browsers and clients released after 2010, or one that works for all clients.
sni-only
: CloudFront can respond to HTTPS requests from viewers that support Server Name Indication (SNI). All modern browsers support SNI, but there are a few that don't. For a current list of the browsers that support SNI, see the Wikipedia entry Server Name Indication. To learn about options to explore if you have viewers with browsers that don't include SNI support, see Choosing How CloudFront Serves HTTPS Requests in the Amazon CloudFront Developer Guide.
vip
: CloudFront uses dedicated IP addresses for your content and can respond to HTTPS requests from any viewer. However, there are additional monthly charges. For details, including specific pricing information, see Custom SSL options for Amazon CloudFront on the AWS marketing site.
Don't specify a value here if you specified CloudFrontDefaultCertificate
as true
.
For more information, see Choosing How CloudFront Serves HTTPS Requests in the Amazon CloudFront Developer Guide.
" } }, "Signer": { @@ -1726,7 +1726,7 @@ } }, "ViewerCertificate": { - "base": "A complex type that specifies the following:
Whether you want viewers to use HTTP or HTTPS to request your objects.
If you want viewers to use HTTPS, whether you're using an alternate domain name such as example.com
or the CloudFront domain name for your distribution, such as d111111abcdef8.cloudfront.net
.
If you're using an alternate domain name, whether AWS Certificate Manager (ACM) provided the certificate, or you purchased a certificate from a third-party certificate authority and imported it into ACM or uploaded it to the IAM certificate store.
Specify only one of the following values:
For more information, see Using Alternate Domain Names and HTTPS in the Amazon CloudFront Developer Guide.
", + "base": "A complex type that specifies the following:
Whether you want viewers to use HTTP or HTTPS to request your objects.
If you want viewers to use HTTPS, whether you're using an alternate domain name, such as example.com
, or the CloudFront domain name for your distribution, such as d111111abcdef8.cloudfront.net
.
If you're using an alternate domain name, whether AWS Certificate Manager (ACM) provided the certificate, or you purchased a certificate from a third-party certificate authority and imported it into ACM or uploaded it to the IAM certificate store.
Specify only one of the following values:
For more information, see Using Alternate Domain Names and HTTPS in the Amazon CloudFront Developer Guide.
", "refs": { "DistributionConfig$ViewerCertificate": "A complex type that specifies whether you want viewers to use HTTP or HTTPS to request your objects, whether you're using an alternate domain name with HTTPS, and if so, if you're using AWS Certificate Manager (ACM) or a third-party certificate authority.
", "DistributionSummary$ViewerCertificate": "A complex type that specifies whether you want viewers to use HTTP or HTTPS to request your objects, whether you're using an alternate domain name with HTTPS, and if so, if you're using AWS Certificate Manager (ACM) or a third-party certificate authority.
" @@ -1765,7 +1765,7 @@ "StreamingDistributionSummary$Enabled": "Whether the distribution is enabled to accept end user requests for content.
", "StreamingLoggingConfig$Enabled": "Specifies whether you want CloudFront to save access logs to an Amazon S3 bucket. If you don't want to enable logging when you create a streaming distribution or if you want to disable logging for an existing streaming distribution, specify false
for Enabled
, and specify empty Bucket
and Prefix
elements. If you specify false
for Enabled
but you specify values for Bucket
and Prefix
, the values are automatically deleted.
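A small sketch of the disabled-logging configuration just described — `Enabled` set to false with empty `Bucket` and `Prefix` elements:

```go
package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/cloudfront"
)

// disabledStreamingLogs disables access logging for a streaming
// distribution: Enabled is false and Bucket/Prefix are empty, as the
// docs above require.
func disabledStreamingLogs() *cloudfront.StreamingLoggingConfig {
	return &cloudfront.StreamingLoggingConfig{
		Enabled: aws.Bool(false),
		Bucket:  aws.String(""),
		Prefix:  aws.String(""),
	}
}
```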
Specifies whether you want to require viewers to use signed URLs to access the files specified by PathPattern
and TargetOriginId
.
If you're using the CloudFront domain name for your distribution, such as d111111abcdef8.cloudfront.net
, specify the following value:
<CloudFrontDefaultCertificate>true<CloudFrontDefaultCertificate>
If you're using the CloudFront domain name for your distribution, such as d111111abcdef8.cloudfront.net
, specify this value as true
.
The minimum amount of time that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. For more information, see Managing How Long Content Stays in an Edge Cache (Expiration) in the Amazon CloudFront Developer Guide.
You must specify 0
for MinTTL
if you configure CloudFront to forward all headers to your origin (under Headers
, if you specify 1
for Quantity
and *
for Name
).
The default amount of time that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. The value that you specify applies only when your origin does not add HTTP headers such as Cache-Control max-age
, Cache-Control s-maxage
, and Expires
to objects. For more information, see Managing How Long Content Stays in an Edge Cache (Expiration) in the Amazon CloudFront Developer Guide.
The maximum amount of time that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. The value that you specify applies only when your origin adds HTTP headers such as Cache-Control max-age
, Cache-Control s-maxage
, and Expires
to objects. For more information, see Managing How Long Content Stays in an Edge Cache (Expiration) in the Amazon CloudFront Developer Guide.
The minimum amount of time, in seconds, that you want CloudFront to cache the HTTP status code specified in ErrorCode
. When this time period has elapsed, CloudFront queries your origin to see whether the problem that caused the error has been resolved and the requested object is now available.
If you don't want to specify a value, include an empty element, <ErrorCachingMinTTL>
, in the XML document.
For more information, see Customizing Error Responses in the Amazon CloudFront Developer Guide.
", + "CustomErrorResponse$ErrorCachingMinTTL": "The minimum amount of time, in seconds, that you want CloudFront to cache the HTTP status code specified in ErrorCode
. When this time period has elapsed, CloudFront queries your origin to see whether the problem that caused the error has been resolved and the requested object is now available.
For more information, see Customizing Error Responses in the Amazon CloudFront Developer Guide.
", "DefaultCacheBehavior$MinTTL": "The minimum amount of time that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. For more information, see Managing How Long Content Stays in an Edge Cache (Expiration) in the Amazon CloudFront Developer Guide.
You must specify 0
for MinTTL
if you configure CloudFront to forward all headers to your origin (under Headers
, if you specify 1
for Quantity
and *
for Name
).
The default amount of time that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. The value that you specify applies only when your origin does not add HTTP headers such as Cache-Control max-age
, Cache-Control s-maxage
, and Expires
to objects. For more information, see Managing How Long Content Stays in an Edge Cache (Expiration) in the Amazon CloudFront Developer Guide.
The maximum amount of time that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. The value that you specify applies only when your origin adds HTTP headers such as Cache-Control max-age
, Cache-Control s-maxage
, and Expires
to objects. For more information, see Managing How Long Content Stays in an Edge Cache (Expiration) in the Amazon CloudFront Developer Guide.
The current version of the streaming distribution created.
", "CreateStreamingDistributionWithTagsResult$Location": "The fully qualified URI of the new streaming distribution resource just created. For example: https://cloudfront.amazonaws.com/2010-11-01/streaming-distribution/EGTXBD79H29TRA8
.
The current version of the distribution created.
", - "CustomErrorResponse$ResponsePagePath": "The path to the custom error page that you want CloudFront to return to a viewer when your origin returns the HTTP status code specified by ErrorCode
, for example, /4xx-errors/403-forbidden.html
. If you want to store your objects and your custom error pages in different locations, your distribution must include a cache behavior for which the following is true:
The value of PathPattern
matches the path to your custom error messages. For example, suppose you saved custom error pages for 4xx errors in an Amazon S3 bucket in a directory named /4xx-errors
. Your distribution must include a cache behavior for which the path pattern routes requests for your custom error pages to that location, for example, /4xx-errors/*
.
The value of TargetOriginId
specifies the value of the ID
element for the origin that contains your custom error pages.
If you specify a value for ResponsePagePath
, you must also specify a value for ResponseCode
. If you don't want to specify a value, include an empty element, <ResponsePagePath>
, in the XML document.
We recommend that you store custom error pages in an Amazon S3 bucket. If you store custom error pages on an HTTP server and the server starts to return 5xx errors, CloudFront can't get the files that you want to return to viewers because the origin server is unavailable.
", - "CustomErrorResponse$ResponseCode": "The HTTP status code that you want CloudFront to return to the viewer along with the custom error page. There are a variety of reasons that you might want CloudFront to return a status code different from the status code that your origin returned to CloudFront, for example:
Some Internet devices (some firewalls and corporate proxies, for example) intercept HTTP 4xx and 5xx and prevent the response from being returned to the viewer. If you substitute 200
, the response typically won't be intercepted.
If you don't care about distinguishing among different client errors or server errors, you can specify 400
or 500
as the ResponseCode
for all 4xx or 5xx errors.
You might want to return a 200
status code (OK) and static website so your customers don't know that your website is down.
If you specify a value for ResponseCode
, you must also specify a value for ResponsePagePath
. If you don't want to specify a value, include an empty element, <ResponseCode>
, in the XML document.
The path to the custom error page that you want CloudFront to return to a viewer when your origin returns the HTTP status code specified by ErrorCode
, for example, /4xx-errors/403-forbidden.html
. If you want to store your objects and your custom error pages in different locations, your distribution must include a cache behavior for which the following is true:
The value of PathPattern
matches the path to your custom error messages. For example, suppose you saved custom error pages for 4xx errors in an Amazon S3 bucket in a directory named /4xx-errors
. Your distribution must include a cache behavior for which the path pattern routes requests for your custom error pages to that location, for example, /4xx-errors/*
.
The value of TargetOriginId
specifies the value of the ID
element for the origin that contains your custom error pages.
If you specify a value for ResponsePagePath
, you must also specify a value for ResponseCode
.
We recommend that you store custom error pages in an Amazon S3 bucket. If you store custom error pages on an HTTP server and the server starts to return 5xx errors, CloudFront can't get the files that you want to return to viewers because the origin server is unavailable.
", + "CustomErrorResponse$ResponseCode": "The HTTP status code that you want CloudFront to return to the viewer along with the custom error page. There are a variety of reasons that you might want CloudFront to return a status code different from the status code that your origin returned to CloudFront, for example:
Some Internet devices (some firewalls and corporate proxies, for example) intercept HTTP 4xx and 5xx and prevent the response from being returned to the viewer. If you substitute 200
, the response typically won't be intercepted.
If you don't care about distinguishing among different client errors or server errors, you can specify 400
or 500
as the ResponseCode
for all 4xx or 5xx errors.
You might want to return a 200
status code (OK) and static website so your customers don't know that your website is down.
If you specify a value for ResponseCode
, you must also specify a value for ResponsePagePath
.
The value of ID
for the origin that you want CloudFront to route requests to when a request matches the path pattern either for a cache behavior or for the default cache behavior in your distribution.
The value of ID
for the field-level encryption configuration that you want CloudFront to use for encrypting specific fields of data for a cache behavior or for the default cache behavior in your distribution.
The origin access identity's ID.
", @@ -1898,7 +1898,7 @@ "DistributionAlreadyExists$Message": null, "DistributionConfig$CallerReference": "A unique value (for example, a date-time stamp) that ensures that the request can't be replayed.
If the value of CallerReference
is new (regardless of the content of the DistributionConfig
object), CloudFront creates a new distribution.
If CallerReference
is a value that you already sent in a previous request to create a distribution, CloudFront returns a DistributionAlreadyExists
error.
The object that you want CloudFront to request from your origin (for example, index.html
) when a viewer requests the root URL for your distribution (http://www.example.com
) instead of an object in your distribution (http://www.example.com/product-description.html
). Specifying a default root object avoids exposing the contents of your distribution.
Specify only the object name, for example, index.html
. Don't add a /
before the object name.
If you don't want to specify a default root object when you create a distribution, include an empty DefaultRootObject
element.
To delete the default root object from an existing distribution, update the distribution configuration and include an empty DefaultRootObject
element.
To replace the default root object, update the distribution configuration and specify the new object.
For more information about the default root object, see Creating a Default Root Object in the Amazon CloudFront Developer Guide.
", - "DistributionConfig$WebACLId": "A unique identifier that specifies the AWS WAF web ACL, if any, to associate with this distribution.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to CloudFront, and lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, CloudFront responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked. For more information about AWS WAF, see the AWS WAF Developer Guide.
", + "DistributionConfig$WebACLId": "A unique identifier that specifies the AWS WAF web ACL, if any, to associate with this distribution. To specify a web ACL created using the latest version of AWS WAF, use the ACL ARN, for example arn:aws:wafv2:us-east-1:123456789012:global/webacl/ExampleWebACL/473e64fd-f30b-4765-81a0-62ad96dd167a
. To specify a web ACL created using AWS WAF Classic, use the ACL ID, for example 473e64fd-f30b-4765-81a0-62ad96dd167a
.
AWS WAF is a web application firewall that lets you monitor the HTTP and HTTPS requests that are forwarded to CloudFront, and lets you control access to your content. Based on conditions that you specify, such as the IP addresses that requests originate from or the values of query strings, CloudFront responds to requests either with the requested content or with an HTTP 403 status code (Forbidden). You can also configure CloudFront to return a custom error page when a request is blocked. For more information about AWS WAF, see the AWS WAF Developer Guide.
", "DistributionList$Marker": "The value you provided for the Marker
request parameter.
If IsTruncated
is true
, this element is present and contains the value you can use for the Marker
request parameter to continue listing your distributions where they left off.
The streaming distribution's id.
", "UpdateStreamingDistributionRequest$IfMatch": "The value of the ETag
header that you received when retrieving the streaming distribution's configuration. For example: E2QWRUHAPOMQZL
.
The current version of the configuration. For example: E2QWRUHAPOMQZL
.
If you want viewers to use HTTPS to request your objects and you're using an alternate domain name, you must choose the type of certificate that you want to use. Specify the following value if you purchased your certificate from a third-party certificate authority:
<IAMCertificateId>IAM certificate ID<IAMCertificateId>
where IAM certificate ID
is the ID that IAM returned when you added the certificate to the IAM certificate store.
If you specify IAMCertificateId
, you must also specify a value for SSLSupportMethod
.
If you want viewers to use HTTPS to request your objects and you're using an alternate domain name, you must choose the type of certificate that you want to use. Specify the following value if ACM provided your certificate:
<ACMCertificateArn>ARN for ACM SSL/TLS certificate<ACMCertificateArn>
where ARN for ACM SSL/TLS certificate
is the ARN for the ACM SSL/TLS certificate that you want to use for this distribution.
If you specify ACMCertificateArn
, you must also specify a value for SSLSupportMethod
.
If you want viewers to use HTTPS to request your objects and you're using an alternate domain name, you must choose the type of certificate that you want to use. If you purchased your certificate from a third-party certificate authority and uploaded it to the IAM certificate store, specify the certificate ID that you want to use for this distribution.
If you specify a certificate ID, you must also specify an SSL support method (sni-only
or vip
).
If you want viewers to use HTTPS to request your objects and you're using an alternate domain name, you must choose the type of certificate that you want to use. If ACM provided your certificate, specify the Amazon Resource Name (ARN) for the ACM certificate that you want to use for this distribution. CloudFront only supports ACM certificates in the US East (N. Virginia) Region (us-east-1).
If you specify an ACM certificate ARN, you must also specify an SSL support method (sni-only
or vip
).
This field is no longer used. Use one of the following fields instead:
" } }, diff --git a/models/apis/cloudhsmv2/2017-04-28/api-2.json b/models/apis/cloudhsmv2/2017-04-28/api-2.json index 872de43fa3a..d710041fd64 100644 --- a/models/apis/cloudhsmv2/2017-04-28/api-2.json +++ b/models/apis/cloudhsmv2/2017-04-28/api-2.json @@ -23,11 +23,12 @@ "input":{"shape":"CopyBackupToRegionRequest"}, "output":{"shape":"CopyBackupToRegionResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] }, "CreateCluster":{ @@ -39,11 +40,12 @@ "input":{"shape":"CreateClusterRequest"}, "output":{"shape":"CreateClusterResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] }, "CreateHsm":{ @@ -71,11 +73,11 @@ "input":{"shape":"DeleteBackupRequest"}, "output":{"shape":"DeleteBackupResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"} ] }, "DeleteCluster":{ @@ -87,11 +89,12 @@ "input":{"shape":"DeleteClusterRequest"}, "output":{"shape":"DeleteClusterResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] }, "DeleteHsm":{ @@ -119,11 +122,12 @@ "input":{"shape":"DescribeBackupsRequest"}, "output":{"shape":"DescribeBackupsResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] }, "DescribeClusters":{ @@ -135,10 +139,11 @@ "input":{"shape":"DescribeClustersRequest"}, "output":{"shape":"DescribeClustersResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] }, "InitializeCluster":{ @@ -150,11 +155,11 @@ "input":{"shape":"InitializeClusterRequest"}, "output":{"shape":"InitializeClusterResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, 
- {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"} ] }, "ListTags":{ @@ -166,11 +171,12 @@ "input":{"shape":"ListTagsRequest"}, "output":{"shape":"ListTagsResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] }, "RestoreBackup":{ @@ -182,11 +188,11 @@ "input":{"shape":"RestoreBackupRequest"}, "output":{"shape":"RestoreBackupResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"} ] }, "TagResource":{ @@ -198,11 +204,12 @@ "input":{"shape":"TagResourceRequest"}, "output":{"shape":"TagResourceResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] }, "UntagResource":{ @@ -214,11 +221,12 @@ "input":{"shape":"UntagResourceRequest"}, "output":{"shape":"UntagResourceResponse"}, "errors":[ + {"shape":"CloudHsmAccessDeniedException"}, {"shape":"CloudHsmInternalFailureException"}, - {"shape":"CloudHsmServiceException"}, - {"shape":"CloudHsmResourceNotFoundException"}, {"shape":"CloudHsmInvalidRequestException"}, - {"shape":"CloudHsmAccessDeniedException"} + {"shape":"CloudHsmResourceNotFoundException"}, + {"shape":"CloudHsmServiceException"}, + {"shape":"CloudHsmTagException"} ] } }, @@ -235,7 +243,8 @@ "SourceRegion":{"shape":"Region"}, "SourceBackup":{"shape":"BackupId"}, "SourceCluster":{"shape":"ClusterId"}, - "DeleteTimestamp":{"shape":"Timestamp"} + "DeleteTimestamp":{"shape":"Timestamp"}, + "TagList":{"shape":"TagList"} } }, "BackupId":{ @@ -311,6 +320,13 @@ }, "exception":true }, + "CloudHsmTagException":{ + "type":"structure", + "members":{ + "Message":{"shape":"errorMessage"} + }, + "exception":true + }, "Cluster":{ "type":"structure", "members":{ @@ -326,7 +342,8 @@ "StateMessage":{"shape":"StateMessage"}, "SubnetMapping":{"shape":"ExternalSubnetMapping"}, "VpcId":{"shape":"VpcId"}, - "Certificates":{"shape":"Certificates"} + "Certificates":{"shape":"Certificates"}, + "TagList":{"shape":"TagList"} } }, "ClusterId":{ @@ -359,7 +376,8 @@ ], "members":{ "DestinationRegion":{"shape":"Region"}, - "BackupId":{"shape":"BackupId"} + "BackupId":{"shape":"BackupId"}, + "TagList":{"shape":"TagList"} } }, "CopyBackupToRegionResponse":{ @@ -377,7 +395,8 @@ "members":{ "SubnetIds":{"shape":"SubnetIds"}, "HsmType":{"shape":"HsmType"}, - "SourceBackupId":{"shape":"BackupId"} + "SourceBackupId":{"shape":"BackupId"}, + "TagList":{"shape":"TagList"} } }, 
"CreateClusterResponse":{ @@ -572,7 +591,7 @@ "type":"structure", "required":["ResourceId"], "members":{ - "ResourceId":{"shape":"ClusterId"}, + "ResourceId":{"shape":"ResourceId"}, "NextToken":{"shape":"NextToken"}, "MaxResults":{"shape":"MaxSize"} } @@ -604,6 +623,10 @@ "type":"string", "pattern":"[a-z]{2}(-(gov))?-(east|west|north|south|central){1,2}-\\d" }, + "ResourceId":{ + "type":"string", + "pattern":"(?:cluster|backup)-[2-7a-zA-Z]{11,16}" + }, "RestoreBackupRequest":{ "type":"structure", "required":["BackupId"], @@ -619,7 +642,7 @@ }, "SecurityGroup":{ "type":"string", - "pattern":"sg-[0-9a-fA-F]" + "pattern":"sg-[0-9a-fA-F]{8,17}" }, "StateMessage":{ "type":"string", @@ -677,7 +700,7 @@ "TagList" ], "members":{ - "ResourceId":{"shape":"ClusterId"}, + "ResourceId":{"shape":"ResourceId"}, "TagList":{"shape":"TagList"} } }, @@ -700,7 +723,7 @@ "TagKeyList" ], "members":{ - "ResourceId":{"shape":"ClusterId"}, + "ResourceId":{"shape":"ResourceId"}, "TagKeyList":{"shape":"TagKeyList"} } }, diff --git a/models/apis/cloudhsmv2/2017-04-28/docs-2.json b/models/apis/cloudhsmv2/2017-04-28/docs-2.json index 10faffa942a..0075d3d6b3c 100644 --- a/models/apis/cloudhsmv2/2017-04-28/docs-2.json +++ b/models/apis/cloudhsmv2/2017-04-28/docs-2.json @@ -1,24 +1,24 @@ { "version": "2.0", - "service": "For more information about AWS CloudHSM, see AWS CloudHSM and the AWS CloudHSM User Guide.
", + "service": "For more information about AWS CloudHSM, see AWS CloudHSM and the AWS CloudHSM User Guide.
", "operations": { "CopyBackupToRegion": "Copy an AWS CloudHSM cluster backup to a different region.
", "CreateCluster": "Creates a new AWS CloudHSM cluster.
", "CreateHsm": "Creates a new hardware security module (HSM) in the specified AWS CloudHSM cluster.
", - "DeleteBackup": "Deletes a specified AWS CloudHSM backup. A backup can be restored up to 7 days after the DeleteBackup request. For more information on restoring a backup, see RestoreBackup
", + "DeleteBackup": "Deletes a specified AWS CloudHSM backup. A backup can be restored up to 7 days after the DeleteBackup request is made. For more information on restoring a backup, see RestoreBackup.
", "DeleteCluster": "Deletes the specified AWS CloudHSM cluster. Before you can delete a cluster, you must delete all HSMs in the cluster. To see if the cluster contains any HSMs, use DescribeClusters. To delete an HSM, use DeleteHsm.
", "DeleteHsm": "Deletes the specified HSM. To specify an HSM, you can use its identifier (ID), the IP address of the HSM's elastic network interface (ENI), or the ID of the HSM's ENI. You need to specify only one of these values. To find these values, use DescribeClusters.
", "DescribeBackups": "Gets information about backups of AWS CloudHSM clusters.
This is a paginated operation, which means that each response might contain only a subset of all the backups. When the response contains only a subset of backups, it includes a NextToken
value. Use this value in a subsequent DescribeBackups
request to get more backups. When you receive a response with no NextToken
(or an empty or null value), that means there are no more backups to get.
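A sketch of the NextToken pagination contract described above, using DescribeBackups; it assumes this release's request/`Send` pattern and loops until the response carries no NextToken:

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/cloudhsmv2"
)

// listAllBackups pages through DescribeBackups, passing each response's
// NextToken back into the next request until none is returned.
func listAllBackups(ctx context.Context, cfg aws.Config) ([]cloudhsmv2.Backup, error) {
	svc := cloudhsmv2.New(cfg)
	var backups []cloudhsmv2.Backup
	input := &cloudhsmv2.DescribeBackupsInput{}
	for {
		resp, err := svc.DescribeBackupsRequest(input).Send(ctx)
		if err != nil {
			return nil, err
		}
		backups = append(backups, resp.Backups...)
		if resp.NextToken == nil || *resp.NextToken == "" {
			return backups, nil // no more backups to get
		}
		input.NextToken = resp.NextToken
	}
}
```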
", "DescribeClusters": "Gets information about AWS CloudHSM clusters.&#xA;
This is a paginated operation, which means that each response might contain only a subset of all the clusters. When the response contains only a subset of clusters, it includes a NextToken
value. Use this value in a subsequent DescribeClusters
request to get more clusters. When you receive a response with no NextToken
(or an empty or null value), that means there are no more clusters to get.
", "InitializeCluster": "Claims an AWS CloudHSM cluster by submitting the cluster certificate issued by your issuing certificate authority (CA) and the CA's root certificate. Before you can claim a cluster, you must sign the cluster's certificate signing request (CSR) with your issuing CA. To get the cluster's CSR, use DescribeClusters.&#xA;
", "ListTags": "Gets a list of tags for the specified AWS CloudHSM cluster.
This is a paginated operation, which means that each response might contain only a subset of all the tags. When the response contains only a subset of tags, it includes a NextToken
value. Use this value in a subsequent ListTags
request to get more tags. When you receive a response with no NextToken
(or an empty or null value), that means there are no more tags to get.
", - "RestoreBackup": "Restores a specified AWS CloudHSM backup that is in the PENDING_DELETION&#xA;
state. For more information on deleting a backup, see DeleteBackup.
", + "RestoreBackup": "Restores a specified AWS CloudHSM backup that is in the PENDING_DELETION&#xA;
state. For more information on deleting a backup, see DeleteBackup.&#xA;
", "TagResource": "Adds or overwrites one or more tags for the specified AWS CloudHSM cluster.&#xA;
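A sketch of the TagResource call; with this change, the request's ResourceId accepts either a cluster-… or backup-… identifier, per the new ResourceId pattern added in api-2.json above. The tag key and value are placeholders:

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/cloudhsmv2"
)

// tagResource applies a tag to a cluster or backup identified by
// resourceID (a cluster-... or backup-... identifier).
func tagResource(ctx context.Context, cfg aws.Config, resourceID string) error {
	svc := cloudhsmv2.New(cfg)
	req := svc.TagResourceRequest(&cloudhsmv2.TagResourceInput{
		ResourceId: aws.String(resourceID),
		TagList: []cloudhsmv2.Tag{
			{Key: aws.String("env"), Value: aws.String("prod")}, // placeholder tag
		},
	})
	_, err := req.Send(ctx)
	return err
}
```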
", "UntagResource": "Removes the specified tag or tags from the specified AWS CloudHSM cluster.
" }, "shapes": { "Backup": { - "base": "Contains information about a backup of an AWS CloudHSM cluster.
", + "base": "Contains information about a backup of an AWS CloudHSM cluster. All backup objects contain the BackupId, BackupState, ClusterId, and CreateTimestamp parameters. Backups that were copied into a destination region additionally contain the CopyTimestamp, SourceBackup, SourceCluster, and SourceRegion paramters. A backup that is pending deletion will include the DeleteTimestamp parameter.
", "refs": { "Backups$member": null, "DeleteBackupResponse$Backup": "Information on the Backup
object deleted.
The identifier (ID) of the backup.
", - "Backup$SourceBackup": null, + "Backup$SourceBackup": "The identifier (ID) of the source backup from which the new backup was copied.
", "Cluster$SourceBackupId": "The identifier (ID) of the backup used to create the cluster. This value exists only when the cluster was created from a backup.
", "CopyBackupToRegionRequest$BackupId": "The ID of the backup that will be copied to the destination region.
", "CreateClusterRequest$SourceBackupId": "The identifier (ID) of the cluster backup to restore. Use this value to restore the cluster from a backup instead of creating a new cluster. To find the backup ID, use DescribeBackups.
", "DeleteBackupRequest$BackupId": "The ID of the backup to be deleted. To find the ID of a backup, use the DescribeBackups operation.
", - "DestinationBackup$SourceBackup": null, + "DestinationBackup$SourceBackup": "The identifier (ID) of the source backup from which the new backup was copied.
", "RestoreBackupRequest$BackupId": "The ID of the backup to be restored. To find the ID of a backup, use the DescribeBackups operation.
" } }, @@ -59,7 +59,7 @@ "Boolean": { "base": null, "refs": { - "DescribeBackupsRequest$SortAscending": null + "DescribeBackupsRequest$SortAscending": "Designates whether or not to sort the return backups by ascending chronological order of generation.
" } }, "Cert": { @@ -71,7 +71,7 @@ "Certificates$ManufacturerHardwareCertificate": "The HSM hardware certificate issued (signed) by the hardware manufacturer.
", "Certificates$ClusterCertificate": "The cluster certificate issued (signed) by the issuing certificate authority (CA) of the cluster's owner.
", "InitializeClusterRequest$SignedCert": "The cluster certificate issued (signed) by your issuing certificate authority (CA). The certificate must be in PEM format and can contain a maximum of 5000 characters.
", - "InitializeClusterRequest$TrustAnchor": "The issuing certificate of the issuing certificate authority (CA) that issued (signed) the cluster certificate. This can be a root (self-signed) certificate or a certificate chain that begins with the certificate that issued the cluster certificate and ends with a root certificate. The certificate or certificate chain must be in PEM format and can contain a maximum of 5000 characters.
" + "InitializeClusterRequest$TrustAnchor": "The issuing certificate of the issuing certificate authority (CA) that issued (signed) the cluster certificate. You must use a self-signed certificate. The certificate used to sign the HSM CSR must be directly available, and thus must be the root certificate. The certificate must be in PEM format and can contain a maximum of 5000 characters.
" } }, "Certificates": { @@ -105,6 +105,11 @@ "refs": { } }, + "CloudHsmTagException": { + "base": null, + "refs": { + } + }, "Cluster": { "base": "Contains information about an AWS CloudHSM cluster.
", "refs": { @@ -117,17 +122,14 @@ "base": null, "refs": { "Backup$ClusterId": "The identifier (ID) of the cluster that was backed up.
", - "Backup$SourceCluster": null, + "Backup$SourceCluster": "The identifier (ID) of the cluster containing the source backup from which the new backup was copied. .
", "Cluster$ClusterId": "The cluster's identifier (ID).
", "CreateHsmRequest$ClusterId": "The identifier (ID) of the HSM's cluster. To find the cluster ID, use DescribeClusters.
", "DeleteClusterRequest$ClusterId": "The identifier (ID) of the cluster that you are deleting. To find the cluster ID, use DescribeClusters.
", "DeleteHsmRequest$ClusterId": "The identifier (ID) of the cluster that contains the HSM that you are deleting.
", - "DestinationBackup$SourceCluster": null, + "DestinationBackup$SourceCluster": "The identifier (ID) of the cluster containing the source backup from which the new backup was copied.
", "Hsm$ClusterId": "The identifier (ID) of the cluster that contains the HSM.
", - "InitializeClusterRequest$ClusterId": "The identifier (ID) of the cluster that you are claiming. To find the cluster ID, use DescribeClusters.
", - "ListTagsRequest$ResourceId": "The cluster identifier (ID) for the cluster whose tags you are getting. To find the cluster ID, use DescribeClusters.
", - "TagResourceRequest$ResourceId": "The cluster identifier (ID) for the cluster that you are tagging. To find the cluster ID, use DescribeClusters.
", - "UntagResourceRequest$ResourceId": "The cluster identifier (ID) for the cluster whose tags you are removing. To find the cluster ID, use DescribeClusters.
" + "InitializeClusterRequest$ClusterId": "The identifier (ID) of the cluster that you are claiming. To find the cluster ID, use DescribeClusters.
" } }, "ClusterState": { @@ -224,7 +226,7 @@ } }, "DestinationBackup": { - "base": null, + "base": "Contains information about the backup that will be copied and created by the CopyBackupToRegion operation.
", "refs": { "CopyBackupToRegionResponse$DestinationBackup": "Information on the backup that will be copied to the destination region, including CreateTimestamp, SourceBackup, SourceCluster, and Source Region. CreateTimestamp of the destination backup will be the same as that of the source backup.
You will need to use the sourceBackupID
returned in this operation to call the DescribeBackups operation on the backup that will be copied to the destination region.
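A hedged sketch of that flow, reusing the imports and `cfg` from the InitializeCluster example above (IDs hypothetical):

```go
svc := cloudhsmv2.New(cfg) // cfg from external.LoadDefaultAWSConfig()
req := svc.CopyBackupToRegionRequest(&cloudhsmv2.CopyBackupToRegionInput{
	BackupId:          aws.String("backup-1234567890a"), // hypothetical source backup ID
	DestinationRegion: aws.String("us-west-2"),
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
// Hand this ID to DescribeBackups in the destination region to follow
// the backup that is being copied there.
log.Println(aws.StringValue(resp.DestinationBackup.SourceBackup))
```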
A map of the cluster's subnets and their corresponding Availability Zones.
" + "Cluster$SubnetMapping": "A map from availability zone to the cluster’s subnet in that availability zone.
" } }, "Field": { @@ -353,9 +355,17 @@ "Region": { "base": null, "refs": { - "Backup$SourceRegion": null, + "Backup$SourceRegion": "The AWS region that contains the source backup from which the new backup was copied.
", "CopyBackupToRegionRequest$DestinationRegion": "The AWS region that will contain your copied CloudHSM cluster backup.
", - "DestinationBackup$SourceRegion": null + "DestinationBackup$SourceRegion": "The AWS region that contains the source backup from which the new backup was copied.
" + } + }, + "ResourceId": { + "base": null, + "refs": { + "ListTagsRequest$ResourceId": "The cluster identifier (ID) for the cluster whose tags you are getting. To find the cluster ID, use DescribeClusters.
", + "TagResourceRequest$ResourceId": "The cluster identifier (ID) for the cluster that you are tagging. To find the cluster ID, use DescribeClusters.
", + "UntagResourceRequest$ResourceId": "The cluster identifier (ID) for the cluster whose tags you are removing. To find the cluster ID, use DescribeClusters.
" } }, "RestoreBackupRequest": { @@ -430,6 +440,10 @@ "TagList": { "base": null, "refs": { + "Backup$TagList": null, + "Cluster$TagList": null, + "CopyBackupToRegionRequest$TagList": null, + "CreateClusterRequest$TagList": null, "ListTagsResponse$TagList": "A list of tags.
", "TagResourceRequest$TagList": "A list of one or more tags.
" } @@ -454,10 +468,10 @@ "base": null, "refs": { "Backup$CreateTimestamp": "The date and time when the backup was created.
", - "Backup$CopyTimestamp": null, + "Backup$CopyTimestamp": "The date and time when the backup was copied from a source backup.
", "Backup$DeleteTimestamp": "The date and time when the backup will be permanently deleted.
", "Cluster$CreateTimestamp": "The date and time when the cluster was created.
", - "DestinationBackup$CreateTimestamp": null + "DestinationBackup$CreateTimestamp": "The date and time when both the source backup was created.
" } }, "UntagResourceRequest": { @@ -483,7 +497,8 @@ "CloudHsmInternalFailureException$Message": null, "CloudHsmInvalidRequestException$Message": null, "CloudHsmResourceNotFoundException$Message": null, - "CloudHsmServiceException$Message": null + "CloudHsmServiceException$Message": null, + "CloudHsmTagException$Message": null } } } diff --git a/models/apis/codebuild/2016-10-06/api-2.json b/models/apis/codebuild/2016-10-06/api-2.json index 3912792335e..e0a3c5d304e 100644 --- a/models/apis/codebuild/2016-10-06/api-2.json +++ b/models/apis/codebuild/2016-10-06/api-2.json @@ -151,6 +151,18 @@ {"shape":"InvalidInputException"} ] }, + "DeleteResourcePolicy":{ + "name":"DeleteResourcePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteResourcePolicyInput"}, + "output":{"shape":"DeleteResourcePolicyOutput"}, + "errors":[ + {"shape":"InvalidInputException"} + ] + }, "DeleteSourceCredentials":{ "name":"DeleteSourceCredentials", "http":{ @@ -191,6 +203,19 @@ {"shape":"ResourceNotFoundException"} ] }, + "GetResourcePolicy":{ + "name":"GetResourcePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetResourcePolicyInput"}, + "output":{"shape":"GetResourcePolicyOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidInputException"} + ] + }, "ImportSourceCredentials":{ "name":"ImportSourceCredentials", "http":{ @@ -301,6 +326,30 @@ {"shape":"ResourceNotFoundException"} ] }, + "ListSharedProjects":{ + "name":"ListSharedProjects", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListSharedProjectsInput"}, + "output":{"shape":"ListSharedProjectsOutput"}, + "errors":[ + {"shape":"InvalidInputException"} + ] + }, + "ListSharedReportGroups":{ + "name":"ListSharedReportGroups", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListSharedReportGroupsInput"}, + "output":{"shape":"ListSharedReportGroupsOutput"}, + "errors":[ + {"shape":"InvalidInputException"} + ] + }, "ListSourceCredentials":{ "name":"ListSourceCredentials", "http":{ @@ -310,6 +359,19 @@ "input":{"shape":"ListSourceCredentialsInput"}, "output":{"shape":"ListSourceCredentialsOutput"} }, + "PutResourcePolicy":{ + "name":"PutResourcePolicy", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"PutResourcePolicyInput"}, + "output":{"shape":"PutResourcePolicyOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidInputException"} + ] + }, "StartBuild":{ "name":"StartBuild", "http":{ @@ -737,6 +799,18 @@ "members":{ } }, + "DeleteResourcePolicyInput":{ + "type":"structure", + "required":["resourceArn"], + "members":{ + "resourceArn":{"shape":"NonEmptyString"} + } + }, + "DeleteResourcePolicyOutput":{ + "type":"structure", + "members":{ + } + }, "DeleteSourceCredentialsInput":{ "type":"structure", "required":["arn"], @@ -865,6 +939,19 @@ "type":"list", "member":{"shape":"FilterGroup"} }, + "GetResourcePolicyInput":{ + "type":"structure", + "required":["resourceArn"], + "members":{ + "resourceArn":{"shape":"NonEmptyString"} + } + }, + "GetResourcePolicyOutput":{ + "type":"structure", + "members":{ + "policy":{"shape":"NonEmptyString"} + } + }, "GitCloneDepth":{ "type":"integer", "min":0 @@ -1053,6 +1140,38 @@ "reports":{"shape":"ReportArns"} } }, + "ListSharedProjectsInput":{ + "type":"structure", + "members":{ + "sortBy":{"shape":"SharedResourceSortByType"}, + "sortOrder":{"shape":"SortOrderType"}, + "maxResults":{"shape":"PageSize"}, + 
"nextToken":{"shape":"NonEmptyString"} + } + }, + "ListSharedProjectsOutput":{ + "type":"structure", + "members":{ + "nextToken":{"shape":"String"}, + "projects":{"shape":"ProjectArns"} + } + }, + "ListSharedReportGroupsInput":{ + "type":"structure", + "members":{ + "sortOrder":{"shape":"SortOrderType"}, + "sortBy":{"shape":"SharedResourceSortByType"}, + "nextToken":{"shape":"String"}, + "maxResults":{"shape":"PageSize"} + } + }, + "ListSharedReportGroupsOutput":{ + "type":"structure", + "members":{ + "nextToken":{"shape":"String"}, + "reportGroups":{"shape":"ReportGroupArns"} + } + }, "ListSourceCredentialsInput":{ "type":"structure", "members":{ @@ -1160,6 +1279,12 @@ "logsConfig":{"shape":"LogsConfig"} } }, + "ProjectArns":{ + "type":"list", + "member":{"shape":"NonEmptyString"}, + "max":100, + "min":1 + }, "ProjectArtifacts":{ "type":"structure", "required":["type"], @@ -1286,6 +1411,23 @@ "type":"list", "member":{"shape":"Project"} }, + "PutResourcePolicyInput":{ + "type":"structure", + "required":[ + "policy", + "resourceArn" + ], + "members":{ + "policy":{"shape":"NonEmptyString"}, + "resourceArn":{"shape":"NonEmptyString"} + } + }, + "PutResourcePolicyOutput":{ + "type":"structure", + "members":{ + "resourceArn":{"shape":"NonEmptyString"} + } + }, "RegistryCredential":{ "type":"structure", "required":[ @@ -1456,6 +1598,13 @@ "GITHUB_ENTERPRISE" ] }, + "SharedResourceSortByType":{ + "type":"string", + "enum":[ + "ARN", + "MODIFIED_TIME" + ] + }, "SortOrderType":{ "type":"string", "enum":[ @@ -1527,6 +1676,7 @@ "privilegedModeOverride":{"shape":"WrapperBoolean"}, "timeoutInMinutesOverride":{"shape":"TimeOut"}, "queuedTimeoutInMinutesOverride":{"shape":"TimeOut"}, + "encryptionKeyOverride":{"shape":"NonEmptyString"}, "idempotencyToken":{"shape":"String"}, "logsConfigOverride":{"shape":"LogsConfig"}, "registryCredentialOverride":{"shape":"RegistryCredential"}, diff --git a/models/apis/codebuild/2016-10-06/docs-2.json b/models/apis/codebuild/2016-10-06/docs-2.json index 46c172e84c4..b8d8b752646 100644 --- a/models/apis/codebuild/2016-10-06/docs-2.json +++ b/models/apis/codebuild/2016-10-06/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "AWS CodeBuild is a fully managed build service in the cloud. AWS CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. AWS CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools, such as Apache Maven, Gradle, and more. You can also fully customize build environments in AWS CodeBuild to use your own build tools. AWS CodeBuild scales automatically to meet peak build requests. You pay only for the build time you consume. For more information about AWS CodeBuild, see the AWS CodeBuild User Guide.
AWS CodeBuild supports these operations:
BatchDeleteBuilds
: Deletes one or more builds.
BatchGetBuilds
: Gets information about one or more builds.
BatchGetProjects
: Gets information about one or more build projects. A build project defines how AWS CodeBuild runs a build. This includes information such as where to get the source code to build, the build environment to use, the build commands to run, and where to store the build output. A build environment is a representation of operating system, programming language runtime, and tools that AWS CodeBuild uses to run a build. You can add tags to build projects to help manage your resources and costs.
BatchGetReportGroups
: Returns an array of report groups.
BatchGetReports
: Returns an array of reports.
CreateProject
: Creates a build project.
CreateReportGroup
: Creates a report group. A report group contains a collection of reports.
CreateWebhook
: For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, enables AWS CodeBuild to start rebuilding the source code every time a code change is pushed to the repository.
DeleteProject
: Deletes a build project.
DeleteReport
: Deletes a report.
DeleteReportGroup
: Deletes a report group.
DeleteSourceCredentials
: Deletes a set of GitHub, GitHub Enterprise, or Bitbucket source credentials.
DeleteWebhook
: For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, stops AWS CodeBuild from rebuilding the source code every time a code change is pushed to the repository.
DescribeTestCases
: Returns a list of details about test cases for a report.
ImportSourceCredentials
: Imports the source repository credentials for an AWS CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository.
InvalidateProjectCache
: Resets the cache for a project.
ListBuilds
: Gets a list of build IDs, with each build ID representing a single build.
ListBuildsForProject
: Gets a list of build IDs for the specified build project, with each build ID representing a single build.
ListCuratedEnvironmentImages
: Gets information about Docker images that are managed by AWS CodeBuild.
ListProjects
: Gets a list of build project names, with each build project name representing a single build project.
ListReportGroups
: Gets a list ARNs for the report groups in the current AWS account.
ListReports
: Gets a list ARNs for the reports in the current AWS account.
ListReportsForReportGroup
: Returns a list of ARNs for the reports that belong to a ReportGroup
.
ListSourceCredentials
: Returns a list of SourceCredentialsInfo
objects. Each SourceCredentialsInfo
object includes the authentication type, token ARN, and type of source provider for one set of credentials.
StartBuild
: Starts running a build.
StopBuild
: Attempts to stop running a build.
UpdateProject
: Changes the settings of an existing build project.
UpdateReportGroup
: Changes a report group.
UpdateWebhook
: Changes the settings of an existing webhook.
AWS CodeBuild is a fully managed build service in the cloud. AWS CodeBuild compiles your source code, runs unit tests, and produces artifacts that are ready to deploy. AWS CodeBuild eliminates the need to provision, manage, and scale your own build servers. It provides prepackaged build environments for the most popular programming languages and build tools, such as Apache Maven, Gradle, and more. You can also fully customize build environments in AWS CodeBuild to use your own build tools. AWS CodeBuild scales automatically to meet peak build requests. You pay only for the build time you consume. For more information about AWS CodeBuild, see the AWS CodeBuild User Guide.
AWS CodeBuild supports these operations:
BatchDeleteBuilds
: Deletes one or more builds.
BatchGetBuilds
: Gets information about one or more builds.
BatchGetProjects
: Gets information about one or more build projects. A build project defines how AWS CodeBuild runs a build. This includes information such as where to get the source code to build, the build environment to use, the build commands to run, and where to store the build output. A build environment is a representation of operating system, programming language runtime, and tools that AWS CodeBuild uses to run a build. You can add tags to build projects to help manage your resources and costs.
BatchGetReportGroups
: Returns an array of report groups.
BatchGetReports
: Returns an array of reports.
CreateProject
: Creates a build project.
CreateReportGroup
: Creates a report group. A report group contains a collection of reports.
CreateWebhook
: For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, enables AWS CodeBuild to start rebuilding the source code every time a code change is pushed to the repository.
DeleteProject
: Deletes a build project.
DeleteReport
: Deletes a report.
DeleteReportGroup
: Deletes a report group.
DeleteResourcePolicy
: Deletes a resource policy that is identified by its resource ARN.
DeleteSourceCredentials
: Deletes a set of GitHub, GitHub Enterprise, or Bitbucket source credentials.
DeleteWebhook
: For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, stops AWS CodeBuild from rebuilding the source code every time a code change is pushed to the repository.
DescribeTestCases
: Returns a list of details about test cases for a report.
GetResourcePolicy
: Gets a resource policy that is identified by its resource ARN.
ImportSourceCredentials
: Imports the source repository credentials for an AWS CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository.
InvalidateProjectCache
: Resets the cache for a project.
ListBuilds
: Gets a list of build IDs, with each build ID representing a single build.
ListBuildsForProject
: Gets a list of build IDs for the specified build project, with each build ID representing a single build.
ListCuratedEnvironmentImages
: Gets information about Docker images that are managed by AWS CodeBuild.
ListProjects
: Gets a list of build project names, with each build project name representing a single build project.
ListReportGroups
: Gets a list of ARNs for the report groups in the current AWS account.
ListReports
: Gets a list of ARNs for the reports in the current AWS account.
ListReportsForReportGroup
: Returns a list of ARNs for the reports that belong to a ReportGroup
.
ListSharedProjects
: Gets a list of ARNs associated with projects shared with the current AWS account or user.
ListSharedReportGroups
: Gets a list of ARNs associated with report groups shared with the current AWS account or user.
ListSourceCredentials
: Returns a list of SourceCredentialsInfo
objects. Each SourceCredentialsInfo
object includes the authentication type, token ARN, and type of source provider for one set of credentials.
PutResourcePolicy
: Stores a resource policy for the ARN of a Project
or ReportGroup
object.
StartBuild
: Starts running a build.
StopBuild
: Attempts to stop running a build.
UpdateProject
: Changes the settings of an existing build project.
UpdateReportGroup
: Changes a report group.
UpdateWebhook
: Changes the settings of an existing webhook.
Deletes one or more builds.
", "BatchGetBuilds": "Gets information about one or more builds.
", @@ -13,9 +13,11 @@ "DeleteProject": "Deletes a build project. When you delete a project, its builds are not deleted.
", "DeleteReport": "Deletes a report.
", "DeleteReportGroup": " DeleteReportGroup
: Deletes a report group. Before you delete a report group, you must delete its reports. Use ListReportsForReportGroup to get the reports in a report group. Use DeleteReport to delete the reports. If you call DeleteReportGroup
for a report group that contains one or more reports, an exception is thrown.
Deletes a resource policy that is identified by its resource ARN.
", "DeleteSourceCredentials": "Deletes a set of GitHub, GitHub Enterprise, or Bitbucket source credentials.
", "DeleteWebhook": "For an existing AWS CodeBuild build project that has its source code stored in a GitHub or Bitbucket repository, stops AWS CodeBuild from rebuilding the source code every time a code change is pushed to the repository.
", "DescribeTestCases": "Returns a list of details about test cases for a report.
", + "GetResourcePolicy": "Gets a resource policy that is identified by its resource ARN.
", "ImportSourceCredentials": "Imports the source repository credentials for an AWS CodeBuild project that has its source code stored in a GitHub, GitHub Enterprise, or Bitbucket repository.
", "InvalidateProjectCache": "Resets the cache for a project.
", "ListBuilds": "Gets a list of build IDs, with each build ID representing a single build.
", @@ -25,7 +27,10 @@ "ListReportGroups": "Gets a list ARNs for the report groups in the current AWS account.
", "ListReports": "Returns a list of ARNs for the reports in the current AWS account.
", "ListReportsForReportGroup": " Returns a list of ARNs for the reports that belong to a ReportGroup
.
Gets a list of projects that are shared with other AWS accounts or users.
", + "ListSharedReportGroups": "Gets a list of report groups that are shared with other AWS accounts or users.
", "ListSourceCredentials": " Returns a list of SourceCredentialsInfo
objects.
Stores a resource policy for the ARN of a Project
or ReportGroup
object.
Starts running a build.
", "StopBuild": "Attempts to stop running a build.
", "UpdateProject": "Changes the settings of a build project.
", @@ -287,6 +292,16 @@ "refs": { } }, + "DeleteResourcePolicyInput": { + "base": null, + "refs": { + } + }, + "DeleteResourcePolicyOutput": { + "base": null, + "refs": { + } + }, "DeleteSourceCredentialsInput": { "base": null, "refs": { @@ -369,7 +384,7 @@ "EnvironmentVariableType": { "base": null, "refs": { - "EnvironmentVariable$type": "The type of environment variable. Valid values include:
PARAMETER_STORE
: An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
PLAINTEXT
: An environment variable in plain text format.
SECRETS_MANAGER
: An environment variable stored in AWS Secrets Manager.
The type of environment variable. Valid values include:
PARAMETER_STORE
: An environment variable stored in Amazon EC2 Systems Manager Parameter Store.
PLAINTEXT
: An environment variable in plain text format. This is the default value.
SECRETS_MANAGER
: An environment variable stored in AWS Secrets Manager.
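For example, a hedged Go sketch declaring one variable per type when building a project environment; the enum constant names follow this SDK's usual generation pattern and the secret name is hypothetical:

```go
// assumes the aws and codebuild imports shown in earlier examples
envVars := []codebuild.EnvironmentVariable{
	{
		// Type omitted: PLAINTEXT is the default.
		Name:  aws.String("STAGE"),
		Value: aws.String("beta"),
	},
	{
		// With SECRETS_MANAGER, Value names the secret rather than holding it.
		Name:  aws.String("DOCKERHUB_TOKEN"),
		Value: aws.String("dockerhub/token"), // hypothetical secret name
		Type:  codebuild.EnvironmentVariableTypeSecretsManager,
	},
}
```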
An array of arrays of WebhookFilter
objects used to determine which webhooks are triggered. At least one WebhookFilter
in the array must specify EVENT
as its type
.
For a build to be triggered, at least one filter group in the filterGroups
array must pass. For a filter group to pass, each of its filters must pass.
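A sketch of one filter group expressing "push events to master only"; the enum constant names are assumed from this SDK's generation pattern and the project name is hypothetical:

```go
// svc := codebuild.New(cfg), with cfg from external.LoadDefaultAWSConfig()
req := svc.CreateWebhookRequest(&codebuild.CreateWebhookInput{
	ProjectName: aws.String("my-project"), // hypothetical
	FilterGroups: [][]codebuild.WebhookFilter{
		{
			// Every group needs at least one EVENT filter; all filters in a
			// group must match for the group to pass.
			{Type: codebuild.WebhookFilterTypeEvent, Pattern: aws.String("PUSH")},
			{Type: codebuild.WebhookFilterTypeHeadRef, Pattern: aws.String("^refs/heads/master$")},
		},
	},
})
```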
The name of the build project.
", "DeleteReportGroupInput$arn": "The ARN of the report group to delete.
", "DeleteReportInput$arn": "The ARN of the report to delete.
", + "DeleteResourcePolicyInput$resourceArn": "The ARN of the resource that is associated with the resource policy.
", "DeleteSourceCredentialsInput$arn": "The Amazon Resource Name (ARN) of the token.
", "DeleteSourceCredentialsOutput$arn": "The Amazon Resource Name (ARN) of the token.
", "EnvironmentVariable$name": "The name or key of the environment variable.
", "ExportedEnvironmentVariable$name": "The name of this exported environment variable.
", + "GetResourcePolicyInput$resourceArn": "The ARN of the resource that is associated with the resource policy.
", + "GetResourcePolicyOutput$policy": "The resource policy for the resource identified by the input ARN parameter.
", "ImportSourceCredentialsInput$username": " The Bitbucket username when the authType
is BASIC_AUTH. This parameter is not valid for other types of source providers or connections.
The Amazon Resource Name (ARN) of the token.
", "InvalidateProjectCacheInput$projectName": "The name of the AWS CodeBuild build project that the cache is reset for.
", "ListBuildsForProjectInput$projectName": "The name of the AWS CodeBuild project.
", "ListProjectsInput$nextToken": "During a previous call, if there are more than 100 items in the list, only the first 100 items are returned, along with a unique string called a nextToken. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
", + "ListSharedProjectsInput$nextToken": " During a previous call, the maximum number of items that can be returned is the value specified in maxResults
. If there are more items in the list, then a unique string called a nextToken is returned. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
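The nextToken loop this describes, sketched in Go for ListSharedProjects; the page size and sort options are illustrative and the enum constant names are assumptions:

```go
// assumes the aws, codebuild, context, and log imports from earlier examples
svc := codebuild.New(cfg)
input := &codebuild.ListSharedProjectsInput{
	MaxResults: aws.Int64(50),
	SortBy:     codebuild.SharedResourceSortByTypeModifiedTime, // or ...Arn
}
var arns []string
for {
	resp, err := svc.ListSharedProjectsRequest(input).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	arns = append(arns, resp.Projects...)
	if resp.NextToken == nil {
		break // no more next tokens: the list is complete
	}
	input.NextToken = resp.NextToken
}
```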
The ID of the subnet.
", "NetworkInterface$networkInterfaceId": "The ID of the network interface.
", "Project$serviceRole": "The ARN of the AWS Identity and Access Management (IAM) role that enables AWS CodeBuild to interact with dependent AWS services on behalf of the AWS account.
", "Project$encryptionKey": "The AWS Key Management Service (AWS KMS) customer master key (CMK) to be used for encrypting the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/alias-name
).
The image tag or image digest that identifies the Docker image to use for this build project. Use the following formats:
For an image tag: registry/repository:tag
. For example, to specify an image with the tag \"latest,\" use registry/repository:latest
.
For an image digest: registry/repository@digest
. For example, to specify an image with the digest \"sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf,\" use registry/repository@sha256:cbbf2f9a99b47fc460d422812b6a5adff7dfee951d8fa2e4a98caa0382cfbdbf
.
A JSON-formatted resource policy. For more information, see Sharing a Project and Sharing a Report Group in the AWS CodeBuild User Guide.
", + "PutResourcePolicyInput$resourceArn": " The ARN of the Project
or ReportGroup
resource you want to associate with a resource policy.
The ARN of the Project
or ReportGroup
resource that is associated with a resource policy.
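A hedged sketch of attaching such a policy to a report group; the policy statement below is illustrative only, not a confirmed schema - see the sharing topics in the AWS CodeBuild User Guide for the exact statement shape:

```go
// svc := codebuild.New(cfg); account IDs and ARNs are hypothetical
policy := `{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["codebuild:BatchGetReportGroups"],
    "Resource": "arn:aws:codebuild:us-east-1:444455556666:report-group/example"
  }]
}`
req := svc.PutResourcePolicyRequest(&codebuild.PutResourcePolicyInput{
	Policy:      aws.String(policy),
	ResourceArn: aws.String("arn:aws:codebuild:us-east-1:444455556666:report-group/example"),
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
log.Println(aws.StringValue(resp.ResourceArn)) // echoes the ARN the policy is attached to
```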
The Amazon Resource Name (ARN) or name of credentials created using AWS Secrets Manager.
The credential
can use the name of the credentials only if they exist in your current region.
The ARN of the report run.
", "Report$reportGroupArn": "The ARN of the report group associated with this report.
", @@ -622,6 +675,7 @@ "StartBuildInput$projectName": "The name of the AWS CodeBuild build project to start running a build.
", "StartBuildInput$imageOverride": "The name of an image for this build that overrides the one specified in the build project.
", "StartBuildInput$serviceRoleOverride": "The name of a service role for this build that overrides the one specified in the build project.
", + "StartBuildInput$encryptionKeyOverride": "The AWS Key Management Service (AWS KMS) customer master key (CMK) that overrides the one specified in the build project. The CMK key encrypts the build output artifacts.
You can use a cross-account KMS key to encrypt the build output artifacts if your service role has permission to that key.
You can specify either the Amazon Resource Name (ARN) of the CMK or, if available, the CMK's alias (using the format alias/alias-name
).
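For instance, a sketch of overriding the project's CMK for a single build; the project name and key alias are hypothetical:

```go
// svc := codebuild.New(cfg), imports as in earlier examples
req := svc.StartBuildRequest(&codebuild.StartBuildInput{
	ProjectName:           aws.String("my-project"),
	EncryptionKeyOverride: aws.String("alias/my-build-key"), // CMK alias or full ARN
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
log.Println(aws.StringValue(resp.Build.Id))
```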
The ID of the build.
", "Subnets$member": null, "TestCase$reportArn": "The ARN of the report to which the test case belongs.
", @@ -646,7 +700,9 @@ "DescribeTestCasesInput$maxResults": " The maximum number of paginated test cases returned per response. Use nextToken
to iterate pages in the list of returned TestCase
objects. The default value is 100.
The maximum number of paginated report groups returned per response. Use nextToken
to iterate pages in the list of returned ReportGroup
objects. The default value is 100.
The maximum number of paginated reports in this report group returned per response. Use nextToken
to iterate pages in the list of returned Report
objects. The default value is 100.
The maximum number of paginated reports returned per response. Use nextToken
to iterate pages in the list of returned Report
objects. The default value is 100.
The maximum number of paginated reports returned per response. Use nextToken
to iterate pages in the list of returned Report
objects. The default value is 100.
The maximum number of paginated shared build projects returned per response. Use nextToken
to iterate pages in the list of returned Project
objects. The default value is 100.
The maximum number of paginated shared report groups returned per response. Use nextToken
to iterate pages in the list of returned ReportGroup
objects. The default value is 100.
Information about the build project that was changed.
" } }, + "ProjectArns": { + "base": null, + "refs": { + "ListSharedProjectsOutput$projects": "The list of ARNs for the build projects shared with the current AWS account or user.
" + } + }, "ProjectArtifacts": { "base": "Information about the build output artifacts for the build project.
", "refs": { @@ -746,7 +808,7 @@ "ProjectNames": { "base": null, "refs": { - "BatchGetProjectsInput$names": "The names of the build projects.
", + "BatchGetProjectsInput$names": "The names or ARNs of the build projects. To get information about a project shared with your AWS account, its ARN must be specified. You cannot specify a shared project using its name.
", "BatchGetProjectsOutput$projectsNotFound": "The names of build projects for which information could not be found.
", "ListProjectsOutput$projects": "The list of build project names, with each build project name representing a single build project.
" } @@ -799,6 +861,16 @@ "BatchGetProjectsOutput$projects": "Information about the requested build projects.
" } }, + "PutResourcePolicyInput": { + "base": null, + "refs": { + } + }, + "PutResourcePolicyOutput": { + "base": null, + "refs": { + } + }, "RegistryCredential": { "base": "Information about credentials that provide access to a private Docker registry. When this is set:
imagePullCredentialsType
must be set to SERVICE_ROLE
.
images cannot be curated images or Amazon ECR images.
For more information, see Private Registry with AWS Secrets Manager Sample for AWS CodeBuild.
", "refs": { @@ -856,7 +928,8 @@ "refs": { "BatchGetReportGroupsInput$reportGroupArns": "An array of report group ARNs that identify the report groups to return.
", "BatchGetReportGroupsOutput$reportGroupsNotFound": " An array of ARNs passed to BatchGetReportGroups
that are not associated with a ReportGroup
.
The list of ARNs for the report groups in the current AWS account.
" + "ListReportGroupsOutput$reportGroups": "The list of ARNs for the report groups in the current AWS account.
", + "ListSharedReportGroupsOutput$reportGroups": "The list of ARNs for the report groups shared with the current AWS account or user.
" } }, "ReportGroupName": { @@ -953,6 +1026,13 @@ "SourceCredentialsInfo$serverType": "The type of source provider. The valid options are GITHUB, GITHUB_ENTERPRISE, or BITBUCKET.
" } }, + "SharedResourceSortByType": { + "base": null, + "refs": { + "ListSharedProjectsInput$sortBy": "The criterion to be used to list build projects shared with the current AWS account or user. Valid values include:
ARN
: List based on the ARN.
MODIFIED_TIME
: List based on when information about the shared project was last changed.
The criterion to be used to list report groups shared with the current AWS account or user. Valid values include:
ARN
: List based on the ARN.
MODIFIED_TIME
: List based on when information about the shared report group was last changed.
The order in which to list build projects. Valid values include:
ASCENDING
: List in ascending order.
DESCENDING
: List in descending order.
Use sortBy
to specify the criterion to be used to list build project names.
Used to specify the order to sort the list of returned report groups. Valid values are ASCENDING
and DESCENDING
.
Used to specify whether the results are returned in ascending or descending order.
", - "ListReportsInput$sortOrder": "Specifies the sort order for the list of returned reports. Valid values are:
ASCENDING
: return reports in chronological order based on their creation date.
DESCENDING
: return reports in the reverse chronological order based on their creation date.
Specifies the sort order for the list of returned reports. Valid values are:
ASCENDING
: return reports in chronological order based on their creation date.
DESCENDING
: return reports in the reverse chronological order based on their creation date.
The order in which to list shared build projects. Valid values include:
ASCENDING
: List in ascending order.
DESCENDING
: List in descending order.
The order in which to list shared report groups. Valid values include:
ASCENDING
: List in ascending order.
DESCENDING
: List in descending order.
During a previous call, the maximum number of items that can be returned is the value specified in maxResults
. If there are more items in the list, then a unique string called a nextToken is returned. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
During a previous call, the maximum number of items that can be returned is the value specified in maxResults
. If there are more items in the list, then a unique string called a nextToken is returned. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
During a previous call, the maximum number of items that can be returned is the value specified in maxResults
. If there are more items in the list, then a unique string called a nextToken is returned. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
During a previous call, the maximum number of items that can be returned is the value specified in maxResults
. If there are more items in the list, then a unique string called a nextToken is returned. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
During a previous call, the maximum number of items that can be returned is the value specified in maxResults
. If there are more items in the list, then a unique string called a nextToken is returned. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
During a previous call, the maximum number of items that can be returned is the value specified in maxResults
. If there are more items in the list, then a unique string called a nextToken is returned. To get the next batch of items in the list, call this operation again, adding the next token to the call. To get all of the items in the list, keep calling this operation with each subsequent next token that is returned, until no more next tokens are returned.
The name of the Amazon CloudWatch Logs group for the build logs.
", "LogsLocation$streamName": "The name of the Amazon CloudWatch Logs stream for the build logs.
", "LogsLocation$deepLink": "The URL to an individual build log in Amazon CloudWatch Logs.
", diff --git a/models/apis/codepipeline/2015-07-09/api-2.json b/models/apis/codepipeline/2015-07-09/api-2.json index 8ca54491bb2..301a02c1c63 100644 --- a/models/apis/codepipeline/2015-07-09/api-2.json +++ b/models/apis/codepipeline/2015-07-09/api-2.json @@ -481,6 +481,21 @@ {"shape":"PipelineNotFoundException"} ] }, + "StopPipelineExecution":{ + "name":"StopPipelineExecution", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopPipelineExecutionInput"}, + "output":{"shape":"StopPipelineExecutionOutput"}, + "errors":[ + {"shape":"ValidationException"}, + {"shape":"PipelineNotFoundException"}, + {"shape":"PipelineExecutionNotStoppableException"}, + {"shape":"DuplicatedStopRequestException"} + ] + }, "TagResource":{ "name":"TagResource", "http":{ @@ -759,6 +774,7 @@ "type":"string", "enum":[ "InProgress", + "Abandoned", "Succeeded", "Failed" ] @@ -1177,6 +1193,13 @@ "min":1, "pattern":"[a-zA-Z0-9!@ \\(\\)\\.\\*\\?\\-]+" }, + "DuplicatedStopRequestException":{ + "type":"structure", + "members":{ + "message":{"shape":"Message"} + }, + "exception":true + }, "EnableStageTransitionInput":{ "type":"structure", "required":[ @@ -1749,10 +1772,19 @@ }, "exception":true }, + "PipelineExecutionNotStoppableException":{ + "type":"structure", + "members":{ + "message":{"shape":"Message"} + }, + "exception":true + }, "PipelineExecutionStatus":{ "type":"string", "enum":[ "InProgress", + "Stopped", + "Stopping", "Succeeded", "Superseded", "Failed" @@ -1766,7 +1798,8 @@ "startTime":{"shape":"Timestamp"}, "lastUpdateTime":{"shape":"Timestamp"}, "sourceRevisions":{"shape":"SourceRevisionList"}, - "trigger":{"shape":"ExecutionTrigger"} + "trigger":{"shape":"ExecutionTrigger"}, + "stopTrigger":{"shape":"StopExecutionTrigger"} } }, "PipelineExecutionSummaryList":{ @@ -2125,6 +2158,8 @@ "enum":[ "InProgress", "Failed", + "Stopped", + "Stopping", "Succeeded" ] }, @@ -2187,6 +2222,35 @@ "pipelineExecutionId":{"shape":"PipelineExecutionId"} } }, + "StopExecutionTrigger":{ + "type":"structure", + "members":{ + "reason":{"shape":"StopPipelineExecutionReason"} + } + }, + "StopPipelineExecutionInput":{ + "type":"structure", + "required":[ + "pipelineName", + "pipelineExecutionId" + ], + "members":{ + "pipelineName":{"shape":"PipelineName"}, + "pipelineExecutionId":{"shape":"PipelineExecutionId"}, + "abandon":{"shape":"Boolean"}, + "reason":{"shape":"StopPipelineExecutionReason"} + } + }, + "StopPipelineExecutionOutput":{ + "type":"structure", + "members":{ + "pipelineExecutionId":{"shape":"PipelineExecutionId"} + } + }, + "StopPipelineExecutionReason":{ + "type":"string", + "max":200 + }, "String":{"type":"string"}, "Tag":{ "type":"structure", diff --git a/models/apis/codepipeline/2015-07-09/docs-2.json b/models/apis/codepipeline/2015-07-09/docs-2.json index 1fbdfa4715c..7c8a8d260a8 100644 --- a/models/apis/codepipeline/2015-07-09/docs-2.json +++ b/models/apis/codepipeline/2015-07-09/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "Overview
This is the AWS CodePipeline API Reference. This guide provides descriptions of the actions and data types for AWS CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the AWS CodePipeline User Guide.
You can use the AWS CodePipeline API to work with pipelines, stages, actions, and transitions.
Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of stages, actions, and transitions.
You can work with pipelines by calling:
CreatePipeline, which creates a uniquely named pipeline.
DeletePipeline, which deletes the specified pipeline.
GetPipeline, which returns information about the pipeline structure and pipeline metadata, including the pipeline Amazon Resource Name (ARN).
GetPipelineExecution, which returns information about a specific execution of a pipeline.
GetPipelineState, which returns information about the current state of the stages and actions of a pipeline.
ListActionExecutions, which returns action-level details for past executions. The details include full stage and action-level details, including individual action duration, status, any errors that occurred during the execution, and input and output artifact location details.
ListPipelines, which gets a summary of all of the pipelines associated with your account.
ListPipelineExecutions, which gets a summary of the most recent executions for a pipeline.
StartPipelineExecution, which runs the most recent revision of an artifact through the pipeline.
UpdatePipeline, which updates a pipeline with edits or changes to the structure of the pipeline.
Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see AWS CodePipeline Pipeline Structure Reference.
Pipeline stages include actions that are categorized into categories such as source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are:
Source
Build
Test
Deploy
Approval
Invoke
Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete.
You can work with transitions by calling:
DisableStageTransition, which prevents artifacts from transitioning to the next stage in a pipeline.
EnableStageTransition, which enables transition of artifacts between stages in a pipeline.
Using the API to integrate with AWS CodePipeline
For third-party integrators or developers who want to create their own integrations with AWS CodePipeline, the expected sequence varies from the standard API user. To integrate with AWS CodePipeline, developers need to work with the following items:
Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source.
You can work with jobs by calling:
AcknowledgeJob, which confirms whether a job worker has received the specified job.
GetJobDetails, which returns the details of a job.
PollForJobs, which determines whether there are any jobs to act on.
PutJobFailureResult, which provides details of a job failure.
PutJobSuccessResult, which provides details of a job success.
Third party jobs, which are instances of an action created by a partner action and integrated into AWS CodePipeline. Partner actions are created by members of the AWS Partner Network.
You can work with third party jobs by calling:
AcknowledgeThirdPartyJob, which confirms whether a job worker has received the specified job.
GetThirdPartyJobDetails, which requests the details of a job for a partner action.
PollForThirdPartyJobs, which determines whether there are any jobs to act on.
PutThirdPartyJobFailureResult, which provides details of a job failure.
PutThirdPartyJobSuccessResult, which provides details of a job success.
Overview
This is the AWS CodePipeline API Reference. This guide provides descriptions of the actions and data types for AWS CodePipeline. Some functionality for your pipeline can only be configured through the API. For more information, see the AWS CodePipeline User Guide.
You can use the AWS CodePipeline API to work with pipelines, stages, actions, and transitions.
Pipelines are models of automated release processes. Each pipeline is uniquely named, and consists of stages, actions, and transitions.
You can work with pipelines by calling:
CreatePipeline, which creates a uniquely named pipeline.
DeletePipeline, which deletes the specified pipeline.
GetPipeline, which returns information about the pipeline structure and pipeline metadata, including the pipeline Amazon Resource Name (ARN).
GetPipelineExecution, which returns information about a specific execution of a pipeline.
GetPipelineState, which returns information about the current state of the stages and actions of a pipeline.
ListActionExecutions, which returns action-level details for past executions. The details include full stage and action-level details, including individual action duration, status, any errors that occurred during the execution, and input and output artifact location details.
ListPipelines, which gets a summary of all of the pipelines associated with your account.
ListPipelineExecutions, which gets a summary of the most recent executions for a pipeline.
StartPipelineExecution, which runs the most recent revision of an artifact through the pipeline.
StopPipelineExecution, which stops the specified pipeline execution from continuing through the pipeline.
UpdatePipeline, which updates a pipeline with edits or changes to the structure of the pipeline.
Pipelines include stages. Each stage contains one or more actions that must complete before the next stage begins. A stage results in success or failure. If a stage fails, the pipeline stops at that stage and remains stopped until either a new version of an artifact appears in the source location, or a user takes action to rerun the most recent artifact through the pipeline. You can call GetPipelineState, which displays the status of a pipeline, including the status of stages in the pipeline, or GetPipeline, which returns the entire structure of the pipeline, including the stages of that pipeline. For more information about the structure of stages and actions, see AWS CodePipeline Pipeline Structure Reference.
Pipeline stages include actions that are categorized into categories such as source or build actions performed in a stage of a pipeline. For example, you can use a source action to import artifacts into a pipeline from a source such as Amazon S3. Like stages, you do not work with actions directly in most cases, but you do define and interact with actions when working with pipeline operations such as CreatePipeline and GetPipelineState. Valid action categories are:
Source
Build
Test
Deploy
Approval
Invoke
Pipelines also include transitions, which allow the transition of artifacts from one stage to the next in a pipeline after the actions in one stage complete.
You can work with transitions by calling:
DisableStageTransition, which prevents artifacts from transitioning to the next stage in a pipeline.
EnableStageTransition, which enables transition of artifacts between stages in a pipeline.
Using the API to integrate with AWS CodePipeline
For third-party integrators or developers who want to create their own integrations with AWS CodePipeline, the expected sequence varies from the standard API user. To integrate with AWS CodePipeline, developers need to work with the following items:
Jobs, which are instances of an action. For example, a job for a source action might import a revision of an artifact from a source.
You can work with jobs by calling:
AcknowledgeJob, which confirms whether a job worker has received the specified job.
GetJobDetails, which returns the details of a job.
PollForJobs, which determines whether there are any jobs to act on.
PutJobFailureResult, which provides details of a job failure.
PutJobSuccessResult, which provides details of a job success.
Third party jobs, which are instances of an action created by a partner action and integrated into AWS CodePipeline. Partner actions are created by members of the AWS Partner Network.
You can work with third party jobs by calling:
AcknowledgeThirdPartyJob, which confirms whether a job worker has received the specified job.
GetThirdPartyJobDetails, which requests the details of a job for a partner action.
PollForThirdPartyJobs, which determines whether there are any jobs to act on.
PutThirdPartyJobFailureResult, which provides details of a job failure.
PutThirdPartyJobSuccessResult, which provides details of a job success.
Returns information about a specified job and whether that job has been received by the job worker. Used for custom actions only.
", "AcknowledgeThirdPartyJob": "Confirms a job worker has received the specified job. Used for partner actions only.
", @@ -12,19 +12,19 @@ "DeregisterWebhookWithThirdParty": "Removes the connection between the webhook that was created by CodePipeline and the external tool with events to be detected. Currently supported only for webhooks that target an action type of GitHub.
", "DisableStageTransition": "Prevents artifacts in a pipeline from transitioning to the next stage in the pipeline.
", "EnableStageTransition": "Enables artifacts in a pipeline to transition to a stage in a pipeline.
", - "GetJobDetails": "Returns information about a job. Used for custom actions only.
When this API is called, AWS CodePipeline returns temporary credentials for the Amazon S3 bucket used to store artifacts for the pipeline, if the action requires access to that Amazon S3 bucket for input or output artifacts. This API also returns any secret values defined for the action.
Returns information about a job. Used for custom actions only.
When this API is called, AWS CodePipeline returns temporary credentials for the S3 bucket used to store artifacts for the pipeline, if the action requires access to that S3 bucket for input or output artifacts. This API also returns any secret values defined for the action.
Returns the metadata, structure, stages, and actions of a pipeline. Can be used to return the entire structure of a pipeline in JSON format, which can then be modified and used to update the pipeline structure with UpdatePipeline.
", "GetPipelineExecution": "Returns information about an execution of a pipeline, including details about artifacts, the pipeline execution ID, and the name, version, and status of the pipeline.
", "GetPipelineState": "Returns information about the state of a pipeline, including the stages and actions.
Values returned in the revisionId
and revisionUrl
fields indicate the source revision information, such as the commit ID, for the current state.
Requests the details of a job for a third party action. Used for partner actions only.
When this API is called, AWS CodePipeline returns temporary credentials for the Amazon S3 bucket used to store artifacts for the pipeline, if the action requires access to that Amazon S3 bucket for input or output artifacts. This API also returns any secret values defined for the action.
Requests the details of a job for a third party action. Used for partner actions only.
When this API is called, AWS CodePipeline returns temporary credentials for the S3 bucket used to store artifacts for the pipeline, if the action requires access to that S3 bucket for input or output artifacts. This API also returns any secret values defined for the action.
Lists the action executions that have occurred in a pipeline.
", "ListActionTypes": "Gets a summary of all AWS CodePipeline action types associated with your account.
", "ListPipelineExecutions": "Gets a summary of the most recent executions for a pipeline.
", "ListPipelines": "Gets a summary of all of the pipelines associated with your account.
", "ListTagsForResource": "Gets the set of key-value pairs (metadata) that are used to manage the resource.
", "ListWebhooks": "Gets a listing of all the webhooks in this AWS Region for this account. The output lists all webhooks and includes the webhook URL and ARN and the configuration for each webhook.
", - "PollForJobs": "Returns information about any jobs for AWS CodePipeline to act on. PollForJobs
is valid only for action types with \"Custom\" in the owner field. If the action type contains \"AWS\" or \"ThirdParty\" in the owner field, the PollForJobs
action returns an error.
When this API is called, AWS CodePipeline returns temporary credentials for the Amazon S3 bucket used to store artifacts for the pipeline, if the action requires access to that Amazon S3 bucket for input or output artifacts. This API also returns any secret values defined for the action.
Determines whether there are any third party jobs for a job worker to act on. Used for partner actions only.
When this API is called, AWS CodePipeline returns temporary credentials for the Amazon S3 bucket used to store artifacts for the pipeline, if the action requires access to that Amazon S3 bucket for input or output artifacts.
Returns information about any jobs for AWS CodePipeline to act on. PollForJobs
is valid only for action types with \"Custom\" in the owner field. If the action type contains \"AWS\" or \"ThirdParty\" in the owner field, the PollForJobs
action returns an error.
When this API is called, AWS CodePipeline returns temporary credentials for the S3 bucket used to store artifacts for the pipeline, if the action requires access to that S3 bucket for input or output artifacts. This API also returns any secret values defined for the action.
Determines whether there are any third party jobs for a job worker to act on. Used for partner actions only.
When this API is called, AWS CodePipeline returns temporary credentials for the S3 bucket used to store artifacts for the pipeline, if the action requires access to that S3 bucket for input or output artifacts.
Provides information to AWS CodePipeline about new revisions to a source.
", "PutApprovalResult": "Provides the response to a manual approval request to AWS CodePipeline. Valid responses include Approved and Rejected.
", "PutJobFailureResult": "Represents the failure of a job as returned to the pipeline by a job worker. Used for custom actions only.
", @@ -35,6 +35,7 @@ "RegisterWebhookWithThirdParty": "Configures a connection between the webhook that was created and the external tool with events to be detected.
", "RetryStageExecution": "Resumes the pipeline execution by retrying the last failed actions in a stage. You can retry a stage immediately if any of the actions in the stage fail. When you retry, all actions that are still in progress continue working, and failed actions are triggered again.
", "StartPipelineExecution": "Starts the specified pipeline. Specifically, it begins processing the latest commit to the source location specified as part of the pipeline.
", + "StopPipelineExecution": "Stops the specified pipeline execution. You choose to either stop the pipeline execution by completing in-progress actions without starting subsequent actions, or by abandoning in-progress actions. While completing or abandoning in-progress actions, the pipeline execution is in a Stopping
state. After all in-progress actions are completed or abandoned, the pipeline execution is in a Stopped
state.
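A hedged Go sketch of issuing such a stop; the pipeline name and execution ID are hypothetical (the ID comes from GetPipelineState or ListPipelineExecutions), and the same request/Send pattern and imports as the earlier examples are assumed:

```go
svc := codepipeline.New(cfg) // cfg from external.LoadDefaultAWSConfig()
req := svc.StopPipelineExecutionRequest(&codepipeline.StopPipelineExecutionInput{
	PipelineName:        aws.String("MyPipeline"),
	PipelineExecutionId: aws.String("0f1e2d3c-4b5a-6789-0abc-def012345678"),
	// false: "stop and wait" - finish in-progress actions, start nothing new.
	// true:  "stop and abandon" - may leave failed or out-of-sequence tasks.
	Abandon: aws.Bool(false),
	Reason:  aws.String("pausing release for a hotfix"),
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
log.Println(aws.StringValue(resp.PipelineExecutionId))
```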
Adds to or modifies the tags of the given resource. Tags are metadata that can be used to manage a resource.
", "UntagResource": "Removes tags from an AWS resource.
", "UpdatePipeline": "Updates a specified pipeline with edits or changes to its structure. Use a JSON file with the pipeline structure and UpdatePipeline
to provide the full structure of the pipeline. Updating the pipeline increases the version number of the pipeline by 1.
Represents an AWS session credentials object. These credentials are temporary credentials that are issued by AWS Secure Token Service (STS). They can be used to access input and output artifacts in the Amazon S3 bucket used to store artifact for the pipeline in AWS CodePipeline.
", + "base": "Represents an AWS session credentials object. These credentials are temporary credentials that are issued by AWS Secure Token Service (STS). They can be used to access input and output artifacts in the S3 bucket used to store artifact for the pipeline in AWS CodePipeline.
", "refs": { - "JobData$artifactCredentials": "Represents an AWS session credentials object. These credentials are temporary credentials that are issued by AWS Secure Token Service (STS). They can be used to access input and output artifacts in the Amazon S3 bucket used to store artifacts for the pipeline in AWS CodePipeline.
", - "ThirdPartyJobData$artifactCredentials": "Represents an AWS session credentials object. These credentials are temporary credentials that are issued by AWS Secure Token Service (STS). They can be used to access input and output artifacts in the Amazon S3 bucket used to store artifact for the pipeline in AWS CodePipeline.
" + "JobData$artifactCredentials": "Represents an AWS session credentials object. These credentials are temporary credentials that are issued by AWS Secure Token Service (STS). They can be used to access input and output artifacts in the S3 bucket used to store artifacts for the pipeline in AWS CodePipeline.
", + "ThirdPartyJobData$artifactCredentials": "Represents an AWS session credentials object. These credentials are temporary credentials that are issued by AWS Secure Token Service (STS). They can be used to access input and output artifacts in the S3 bucket used to store artifact for the pipeline in AWS CodePipeline.
" } }, "AccessKeyId": { @@ -427,16 +428,16 @@ } }, "ArtifactStore": { - "base": "The Amazon S3 bucket where artifacts for the pipeline are stored.
You must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
The S3 bucket where artifacts for the pipeline are stored.
You must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
Represents information about the Amazon S3 bucket where artifacts are stored for the pipeline.
You must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
Represents information about the S3 bucket where artifacts are stored for the pipeline.
You must include either artifactStore
or artifactStores
in your pipeline, but you cannot use both. If you create a cross-region action in your pipeline, you must use artifactStores
.
The Amazon S3 bucket used for storing the artifacts for a pipeline. You can specify the name of an S3 bucket but not a folder in the bucket. A folder to contain the pipeline artifacts is created for you based on the name of the pipeline. You can use any Amazon S3 bucket in the same AWS Region as the pipeline to store your pipeline artifacts.
" + "ArtifactStore$location": "The S3 bucket used for storing the artifacts for a pipeline. You can specify the name of an S3 bucket but not a folder in the bucket. A folder to contain the pipeline artifacts is created for you based on the name of the pipeline. You can use any S3 bucket in the same AWS Region as the pipeline to store your pipeline artifacts.
" } }, "ArtifactStoreMap": { @@ -476,7 +477,8 @@ "ActionConfigurationProperty$key": "Whether the configuration property is a key.
", "ActionConfigurationProperty$secret": "Whether the configuration property is secret. Secrets are hidden from all calls except for GetJobDetails
, GetThirdPartyJobDetails
, PollForJobs
, and PollForThirdPartyJobs
.
When updating a pipeline, passing * * * * * without changing any other values of the action preserves the previous value of the secret.
", "ActionConfigurationProperty$queryable": "Indicates that the property is used with PollForJobs
. When creating a custom action, an action can have up to one queryable property. If it has one, that property must be both required and not secret.
If you create a pipeline with a custom action type, and that custom action contains a queryable property, the value for that configuration property is subject to other restrictions. The value must be less than or equal to twenty (20) characters. The value can contain only alphanumeric characters, underscores, and hyphens.
", - "PutActionRevisionOutput$newRevision": "Indicates whether the artifact revision was previously used in an execution of the specified pipeline.
" + "PutActionRevisionOutput$newRevision": "Indicates whether the artifact revision was previously used in an execution of the specified pipeline.
", + "StopPipelineExecutionInput$abandon": "Use this option to stop the pipeline execution by abandoning, rather than finishing, in-progress actions.
This option can lead to failed or out-of-sequence tasks.
The user-specified reason why the transition between two stages of a pipeline was disabled.
" } }, + "DuplicatedStopRequestException": { + "base": "The pipeline execution is already in a Stopping
state. If you already chose to stop and wait, you cannot make that request again. You can choose to stop and abandon now, but be aware that this option can lead to failed or out-of-sequence tasks. If you already chose to stop and abandon, you cannot make that request again.
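
For readers wiring up the new operation from Go, a minimal sketch of calling StopPipelineExecution with the abandon option follows. It assumes a client regenerated from these models using this SDK's usual Request/Send pattern; the pipeline name, execution ID, and reason are placeholder values.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/codepipeline"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("failed to load config, %v", err)
	}
	svc := codepipeline.New(cfg)

	// Stop-and-abandon: in-progress actions are abandoned rather than allowed
	// to finish, which can leave failed or out-of-sequence tasks behind.
	req := svc.StopPipelineExecutionRequest(&codepipeline.StopPipelineExecutionInput{
		PipelineName:        aws.String("MyPipeline"),   // hypothetical pipeline name
		PipelineExecutionId: aws.String("execution-id"), // hypothetical execution ID
		Abandon:             aws.Bool(true),
		Reason:              aws.String("rolling back a bad deploy"),
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		// A DuplicatedStopRequestException indicates a stop was already requested.
		log.Fatalf("stop failed, %v", err)
	}
	fmt.Println("stopped execution:", *resp.PipelineExecutionId)
}
```
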
Represents the input of an EnableStageTransition
action.
The text of the error message.
", "FailureDetails$message": "The message about the failure.
", "InvalidArnException$message": null, "InvalidTagsException$message": null, "OutputVariablesSizeExceededException$message": null, + "PipelineExecutionNotStoppableException$message": null, "TooManyTagsException$message": null } }, @@ -1125,7 +1134,9 @@ "RetryStageExecutionInput$pipelineExecutionId": "The ID of the pipeline execution in the failed stage to be retried. Use the GetPipelineState action to retrieve the current pipelineExecutionId of the failed stage
", "RetryStageExecutionOutput$pipelineExecutionId": "The ID of the current workflow execution in the failed stage.
", "StageExecution$pipelineExecutionId": "The ID of the pipeline execution associated with the stage.
", - "StartPipelineExecutionOutput$pipelineExecutionId": "The unique system-generated ID of the pipeline execution that was started.
" + "StartPipelineExecutionOutput$pipelineExecutionId": "The unique system-generated ID of the pipeline execution that was started.
", + "StopPipelineExecutionInput$pipelineExecutionId": "The ID of the pipeline execution to be stopped in the current stage. Use the GetPipelineState
action to retrieve the current pipelineExecutionId.
The unique system-generated ID of the pipeline execution that was stopped.
" } }, "PipelineExecutionNotFoundException": { @@ -1133,11 +1144,16 @@ "refs": { } }, + "PipelineExecutionNotStoppableException": { + "base": "Unable to stop the pipeline execution. The execution might already be in a Stopped
state, or it might no longer be in progress.
The status of the pipeline execution.
InProgress: The pipeline execution is currently running.
Succeeded: The pipeline execution was completed successfully.
Superseded: While this pipeline execution was waiting for the next stage to be completed, a newer pipeline execution advanced and continued through the pipeline instead.
Failed: The pipeline execution was not completed successfully.
The status of the pipeline execution.
InProgress: The pipeline execution is currently running.
Succeeded: The pipeline execution was completed successfully.
Superseded: While this pipeline execution was waiting for the next stage to be completed, a newer pipeline execution advanced and continued through the pipeline instead.
Failed: The pipeline execution was not completed successfully.
The status of the pipeline execution.
InProgress: The pipeline execution is currently running.
Stopped: The pipeline execution was manually stopped. For more information, see Stopped Executions.
Stopping: The pipeline execution received a request to be manually stopped. Depending on the selected stop mode, the execution is either completing or abandoning in-progress actions. For more information, see Stopped Executions.
Succeeded: The pipeline execution was completed successfully.
Superseded: While this pipeline execution was waiting for the next stage to be completed, a newer pipeline execution advanced and continued through the pipeline instead. For more information, see Superseded Executions.
Failed: The pipeline execution was not completed successfully.
The status of the pipeline execution.
InProgress: The pipeline execution is currently running.
Stopped: The pipeline execution was manually stopped. For more information, see Stopped Executions.
Stopping: The pipeline execution received a request to be manually stopped. Depending on the selected stop mode, the execution is either completing or abandoning in-progress actions. For more information, see Stopped Executions.
Succeeded: The pipeline execution was completed successfully.
Superseded: While this pipeline execution was waiting for the next stage to be completed, a newer pipeline execution advanced and continued through the pipeline instead. For more information, see Superseded Executions.
Failed: The pipeline execution was not completed successfully.
The name of the pipeline for which you want to get execution summary information.
", "PipelineContext$pipelineName": "The name of the pipeline. This is a user-specified value. Pipeline names must be unique across all pipeline names under an Amazon Web Services account.
", "PipelineDeclaration$name": "The name of the action to be performed.
", - "PipelineExecution$pipelineName": "The name of the pipeline that was executed.
", + "PipelineExecution$pipelineName": "The name of the pipeline with the specified pipeline execution.
", "PipelineSummary$name": "The name of the pipeline.
", "PutActionRevisionInput$pipelineName": "The name of the pipeline that starts processing the revision to the source.
", "PutApprovalResultInput$pipelineName": "The name of the pipeline that contains the action.
", "RetryStageExecutionInput$pipelineName": "The name of the pipeline that contains the failed stage.
", "StartPipelineExecutionInput$name": "The name of the pipeline to start.
", + "StopPipelineExecutionInput$pipelineName": "The name of the pipeline to stop.
", "WebhookDefinition$targetPipeline": "The name of the pipeline you want to connect to the webhook.
" } }, @@ -1216,7 +1233,7 @@ "GetPipelineInput$version": "The version number of the pipeline. If you do not specify a version, defaults to the current version.
", "GetPipelineStateOutput$pipelineVersion": "The version number of the pipeline.
A newly created pipeline is always assigned a version number of 1
.
The version number of the pipeline. A new pipeline always has a version number of 1. This number is incremented when a pipeline is updated.
", - "PipelineExecution$pipelineVersion": "The version number of the pipeline that was executed.
", + "PipelineExecution$pipelineVersion": "The version number of the pipeline with the specified pipeline execution.
", "PipelineSummary$version": "The version number of the pipeline.
" } }, @@ -1375,9 +1392,9 @@ } }, "S3ArtifactLocation": { - "base": "The location of the Amazon S3 bucket that contains a revision.
", + "base": "The location of the S3 bucket that contains a revision.
", "refs": { - "ArtifactLocation$s3Location": "The Amazon S3 bucket that contains the artifact.
" + "ArtifactLocation$s3Location": "The S3 bucket that contains the artifact.
" } }, "S3Bucket": { @@ -1389,7 +1406,7 @@ "S3BucketName": { "base": null, "refs": { - "S3ArtifactLocation$bucketName": "The name of the Amazon S3 bucket.
" + "S3ArtifactLocation$bucketName": "The name of the S3 bucket.
" } }, "S3Key": { @@ -1407,7 +1424,7 @@ "S3ObjectKey": { "base": null, "refs": { - "S3ArtifactLocation$objectKey": "The key of the object in the Amazon S3 bucket, which uniquely identifies the object in the bucket.
" + "S3ArtifactLocation$objectKey": "The key of the object in the S3 bucket, which uniquely identifies the object in the bucket.
" } }, "SecretAccessKey": { @@ -1529,6 +1546,29 @@ "refs": { } }, + "StopExecutionTrigger": { + "base": "The interaction that stopped a pipeline execution.
", + "refs": { + "PipelineExecutionSummary$stopTrigger": "The interaction that stopped a pipeline execution.
" + } + }, + "StopPipelineExecutionInput": { + "base": null, + "refs": { + } + }, + "StopPipelineExecutionOutput": { + "base": null, + "refs": { + } + }, + "StopPipelineExecutionReason": { + "base": null, + "refs": { + "StopExecutionTrigger$reason": "The user-specified reason the pipeline was stopped.
", + "StopPipelineExecutionInput$reason": "Use this option to enter comments, such as the reason the pipeline was stopped.
" + } + }, "String": { "base": null, "refs": { diff --git a/models/apis/codestar-connections/2019-12-01/api-2.json b/models/apis/codestar-connections/2019-12-01/api-2.json new file mode 100644 index 00000000000..1b85e371cd2 --- /dev/null +++ b/models/apis/codestar-connections/2019-12-01/api-2.json @@ -0,0 +1,194 @@ +{ + "version":"2.0", + "metadata":{ + "apiVersion":"2019-12-01", + "endpointPrefix":"codestar-connections", + "jsonVersion":"1.0", + "protocol":"json", + "serviceFullName":"AWS CodeStar connections", + "serviceId":"CodeStar connections", + "signatureVersion":"v4", + "signingName":"codestar-connections", + "targetPrefix":"com.amazonaws.codestar.connections.CodeStar_connections_20191201", + "uid":"codestar-connections-2019-12-01" + }, + "operations":{ + "CreateConnection":{ + "name":"CreateConnection", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateConnectionInput"}, + "output":{"shape":"CreateConnectionOutput"}, + "errors":[ + {"shape":"LimitExceededException"} + ] + }, + "DeleteConnection":{ + "name":"DeleteConnection", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteConnectionInput"}, + "output":{"shape":"DeleteConnectionOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"} + ] + }, + "GetConnection":{ + "name":"GetConnection", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetConnectionInput"}, + "output":{"shape":"GetConnectionOutput"}, + "errors":[ + {"shape":"ResourceNotFoundException"} + ] + }, + "ListConnections":{ + "name":"ListConnections", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListConnectionsInput"}, + "output":{"shape":"ListConnectionsOutput"} + } + }, + "shapes":{ + "AccountId":{ + "type":"string", + "max":12, + "min":12, + "pattern":"[0-9]{12}" + }, + "Connection":{ + "type":"structure", + "members":{ + "ConnectionName":{"shape":"ConnectionName"}, + "ConnectionArn":{"shape":"ConnectionArn"}, + "ProviderType":{"shape":"ProviderType"}, + "OwnerAccountId":{"shape":"AccountId"}, + "ConnectionStatus":{"shape":"ConnectionStatus"} + } + }, + "ConnectionArn":{ + "type":"string", + "max":256, + "min":0, + "pattern":"arn:aws(-[\\w]+)*:.+:.+:[0-9]{12}:.+" + }, + "ConnectionList":{ + "type":"list", + "member":{"shape":"Connection"} + }, + "ConnectionName":{ + "type":"string", + "max":32, + "min":1 + }, + "ConnectionStatus":{ + "type":"string", + "enum":[ + "PENDING", + "AVAILABLE", + "ERROR" + ] + }, + "CreateConnectionInput":{ + "type":"structure", + "required":[ + "ProviderType", + "ConnectionName" + ], + "members":{ + "ProviderType":{"shape":"ProviderType"}, + "ConnectionName":{"shape":"ConnectionName"} + } + }, + "CreateConnectionOutput":{ + "type":"structure", + "required":["ConnectionArn"], + "members":{ + "ConnectionArn":{"shape":"ConnectionArn"} + } + }, + "DeleteConnectionInput":{ + "type":"structure", + "required":["ConnectionArn"], + "members":{ + "ConnectionArn":{"shape":"ConnectionArn"} + } + }, + "DeleteConnectionOutput":{ + "type":"structure", + "members":{ + } + }, + "ErrorMessage":{ + "type":"string", + "max":600 + }, + "GetConnectionInput":{ + "type":"structure", + "required":["ConnectionArn"], + "members":{ + "ConnectionArn":{"shape":"ConnectionArn"} + } + }, + "GetConnectionOutput":{ + "type":"structure", + "members":{ + "Connection":{"shape":"Connection"} + } + }, + "LimitExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "exception":true + }, + "ListConnectionsInput":{ + 
"type":"structure", + "members":{ + "ProviderTypeFilter":{"shape":"ProviderType"}, + "MaxResults":{"shape":"MaxResults"}, + "NextToken":{"shape":"NextToken"} + } + }, + "ListConnectionsOutput":{ + "type":"structure", + "members":{ + "Connections":{"shape":"ConnectionList"}, + "NextToken":{"shape":"NextToken"} + } + }, + "MaxResults":{ + "type":"integer", + "max":50, + "min":1 + }, + "NextToken":{ + "type":"string", + "max":1024, + "min":1, + "pattern":"[a-zA-Z0-9=\\-\\\\/]+" + }, + "ProviderType":{ + "type":"string", + "enum":["Bitbucket"] + }, + "ResourceNotFoundException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "exception":true + } + } +} diff --git a/models/apis/codestar-connections/2019-12-01/docs-2.json b/models/apis/codestar-connections/2019-12-01/docs-2.json new file mode 100644 index 00000000000..a11c33508e2 --- /dev/null +++ b/models/apis/codestar-connections/2019-12-01/docs-2.json @@ -0,0 +1,131 @@ +{ + "version": "2.0", + "service": "This AWS CodeStar Connections API Reference provides descriptions and usage examples of the operations and data types for the AWS CodeStar Connections API. You can use the Connections API to work with connections and installations.
Connections are configurations that you use to connect AWS resources to external code repositories. Each connection is a resource that can be given to services such as CodePipeline to connect to a third-party repository such as Bitbucket. For example, you can add the connection in CodePipeline so that it triggers your pipeline when a code change is made to your third-party code repository. Each connection is named and associated with a unique ARN that is used to reference the connection.
When you create a connection, the console initiates a third-party connection handshake. Installations are the apps that are used to conduct this handshake. For example, the installation for the Bitbucket provider type is the Bitbucket Cloud app. When you create a connection, you can choose an existing installation or create one.
You can work with connections by calling:
CreateConnection, which creates a uniquely named connection that can be referenced by services such as CodePipeline.
DeleteConnection, which deletes the specified connection.
GetConnection, which returns information about the connection, including the connection status.
ListConnections, which lists the connections associated with your account.
For information about how to use AWS CodeStar Connections, see the AWS CodePipeline User Guide.
", + "operations": { + "CreateConnection": "Creates a connection that can then be given to other AWS services like CodePipeline so that it can access third-party code repositories. The connection is in pending status until the third-party connection handshake is completed from the console.
", + "DeleteConnection": "The connection to be deleted.
", + "GetConnection": "Returns the connection ARN and details such as status, owner, and provider type.
", + "ListConnections": "Lists the connections associated with your account.
" + }, + "shapes": { + "AccountId": { + "base": null, + "refs": { + "Connection$OwnerAccountId": "The name of the external provider where your third-party code repository is configured. For Bitbucket, this is the account ID of the owner of the Bitbucket repository.
" + } + }, + "Connection": { + "base": "The configuration that allows a service such as CodePipeline to connect to a third-party code repository.
", + "refs": { + "ConnectionList$member": null, + "GetConnectionOutput$Connection": "The connection details, such as status, owner, and provider type.
" + } + }, + "ConnectionArn": { + "base": null, + "refs": { + "Connection$ConnectionArn": "The Amazon Resource Name (ARN) of the connection. The ARN is used as the connection reference when the connection is shared between AWS services.
The ARN is never reused if the connection is deleted.
The Amazon Resource Name (ARN) of the connection to be created. The ARN is used as the connection reference when the connection is shared between AWS services.
The ARN is never reused if the connection is deleted.
The Amazon Resource Name (ARN) of the connection to be deleted.
The ARN is never reused if the connection is deleted.
The Amazon Resource Name (ARN) of a connection.
" + } + }, + "ConnectionList": { + "base": null, + "refs": { + "ListConnectionsOutput$Connections": "A list of connections and the details for each connection, such as status, owner, and provider type.
" + } + }, + "ConnectionName": { + "base": null, + "refs": { + "Connection$ConnectionName": "The name of the connection. Connection names must be unique in an AWS user account.
", + "CreateConnectionInput$ConnectionName": "The name of the connection to be created. The name must be unique in the calling AWS account.
" + } + }, + "ConnectionStatus": { + "base": null, + "refs": { + "Connection$ConnectionStatus": "The current status of the connection.
" + } + }, + "CreateConnectionInput": { + "base": null, + "refs": { + } + }, + "CreateConnectionOutput": { + "base": null, + "refs": { + } + }, + "DeleteConnectionInput": { + "base": null, + "refs": { + } + }, + "DeleteConnectionOutput": { + "base": null, + "refs": { + } + }, + "ErrorMessage": { + "base": null, + "refs": { + "LimitExceededException$Message": null, + "ResourceNotFoundException$Message": null + } + }, + "GetConnectionInput": { + "base": null, + "refs": { + } + }, + "GetConnectionOutput": { + "base": null, + "refs": { + } + }, + "LimitExceededException": { + "base": "Exceeded the maximum limit for connections.
", + "refs": { + } + }, + "ListConnectionsInput": { + "base": null, + "refs": { + } + }, + "ListConnectionsOutput": { + "base": null, + "refs": { + } + }, + "MaxResults": { + "base": null, + "refs": { + "ListConnectionsInput$MaxResults": "The maximum number of results to return in a single call. To retrieve the remaining results, make another call with the returned nextToken
value.
The token that was returned from the previous ListConnections
call, which can be used to return the next set of connections in the list.
A token that can be used in the next ListConnections
call. To view all items in the list, continue to call this operation with each subsequent token until no more nextToken
values are returned.
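
The NextToken/MaxResults contract described above is the usual AWS pagination loop. A sketch, using the same imports and hypothetical codestarconnections package as the CreateConnection example earlier:

```go
// listAllConnections drains every page of ListConnections.
func listAllConnections(ctx context.Context) ([]codestarconnections.Connection, error) {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		return nil, err
	}
	svc := codestarconnections.New(cfg)

	var all []codestarconnections.Connection
	input := &codestarconnections.ListConnectionsInput{
		MaxResults: aws.Int64(50), // page size; the model caps MaxResults at 50
	}
	for {
		resp, err := svc.ListConnectionsRequest(input).Send(ctx)
		if err != nil {
			return nil, err
		}
		all = append(all, resp.Connections...)
		if resp.NextToken == nil {
			break // no more NextToken values: the list is complete
		}
		input.NextToken = resp.NextToken // feed the token into the next call
	}
	return all, nil
}
```
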
The name of the external provider where your third-party code repository is configured. Currently, the valid provider type is Bitbucket.
", + "CreateConnectionInput$ProviderType": "The name of the external provider where your third-party code repository is configured. Currently, the valid provider type is Bitbucket.
", + "ListConnectionsInput$ProviderTypeFilter": "Filters the list of connections to those associated with a specified provider, such as Bitbucket.
" + } + }, + "ResourceNotFoundException": { + "base": "Resource not found. Verify the connection resource ARN and try again.
", + "refs": { + } + } + } +} diff --git a/models/apis/codestar-connections/2019-12-01/examples-1.json b/models/apis/codestar-connections/2019-12-01/examples-1.json new file mode 100644 index 00000000000..0ea7e3b0bbe --- /dev/null +++ b/models/apis/codestar-connections/2019-12-01/examples-1.json @@ -0,0 +1,5 @@ +{ + "version": "1.0", + "examples": { + } +} diff --git a/models/apis/codestar-connections/2019-12-01/paginators-1.json b/models/apis/codestar-connections/2019-12-01/paginators-1.json new file mode 100644 index 00000000000..deffa71f04d --- /dev/null +++ b/models/apis/codestar-connections/2019-12-01/paginators-1.json @@ -0,0 +1,9 @@ +{ + "pagination": { + "ListConnections": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + } + } +} diff --git a/models/apis/comprehend/2017-11-27/api-2.json b/models/apis/comprehend/2017-11-27/api-2.json index 63b8fd77bfa..ebecf1115ad 100644 --- a/models/apis/comprehend/2017-11-27/api-2.json +++ b/models/apis/comprehend/2017-11-27/api-2.json @@ -971,7 +971,11 @@ "Accuracy":{"shape":"Double"}, "Precision":{"shape":"Double"}, "Recall":{"shape":"Double"}, - "F1Score":{"shape":"Double"} + "F1Score":{"shape":"Double"}, + "MicroPrecision":{"shape":"Double"}, + "MicroRecall":{"shape":"Double"}, + "MicroF1Score":{"shape":"Double"}, + "HammingLoss":{"shape":"Double"} } }, "ClassifierMetadata":{ @@ -997,7 +1001,8 @@ "ClassifyDocumentResponse":{ "type":"structure", "members":{ - "Classes":{"shape":"ListOfClasses"} + "Classes":{"shape":"ListOfClasses"}, + "Labels":{"shape":"ListOfLabels"} } }, "ClientRequestTokenString":{ @@ -1058,7 +1063,8 @@ }, "LanguageCode":{"shape":"LanguageCode"}, "VolumeKmsKeyId":{"shape":"KmsKeyId"}, - "VpcConfig":{"shape":"VpcConfig"} + "VpcConfig":{"shape":"VpcConfig"}, + "Mode":{"shape":"DocumentClassifierMode"} } }, "CreateDocumentClassifierResponse":{ @@ -1413,9 +1419,17 @@ "type":"structure", "required":["S3Uri"], "members":{ - "S3Uri":{"shape":"S3Uri"} + "S3Uri":{"shape":"S3Uri"}, + "LabelDelimiter":{"shape":"LabelDelimiter"} } }, + "DocumentClassifierMode":{ + "type":"string", + "enum":[ + "MULTI_CLASS", + "MULTI_LABEL" + ] + }, "DocumentClassifierOutputDataConfig":{ "type":"structure", "members":{ @@ -1439,13 +1453,21 @@ "ClassifierMetadata":{"shape":"ClassifierMetadata"}, "DataAccessRoleArn":{"shape":"IamRoleArn"}, "VolumeKmsKeyId":{"shape":"KmsKeyId"}, - "VpcConfig":{"shape":"VpcConfig"} + "VpcConfig":{"shape":"VpcConfig"}, + "Mode":{"shape":"DocumentClassifierMode"} } }, "DocumentClassifierPropertiesList":{ "type":"list", "member":{"shape":"DocumentClassifierProperties"} }, + "DocumentLabel":{ + "type":"structure", + "members":{ + "Name":{"shape":"String"}, + "Score":{"shape":"Float"} + } + }, "DominantLanguage":{ "type":"structure", "members":{ @@ -1825,6 +1847,12 @@ }, "exception":true }, + "LabelDelimiter":{ + "type":"string", + "max":1, + "min":1, + "pattern":"^[ ~!@#$%^*\\-_+=|\\\\:;\\t>?/]$" + }, "LanguageCode":{ "type":"string", "enum":[ @@ -1983,6 +2011,10 @@ "type":"list", "member":{"shape":"KeyPhrase"} }, + "ListOfLabels":{ + "type":"list", + "member":{"shape":"DocumentLabel"} + }, "ListOfSyntaxTokens":{ "type":"list", "member":{"shape":"SyntaxToken"} diff --git a/models/apis/comprehend/2017-11-27/docs-2.json b/models/apis/comprehend/2017-11-27/docs-2.json index 54568726346..0e8c6a739bf 100644 --- a/models/apis/comprehend/2017-11-27/docs-2.json +++ b/models/apis/comprehend/2017-11-27/docs-2.json @@ -508,6 +508,13 @@ "DocumentClassifierProperties$InputDataConfig": "The input 
data configuration that you supplied when you created the document classifier for training.
" } }, + "DocumentClassifierMode": { + "base": null, + "refs": { + "CreateDocumentClassifierRequest$Mode": "Indicates the mode in which the classifier will be trained. The classifier can be trained in multi-class mode, which identifies one and only one class for each document, or multi-label mode, which identifies one or more labels for each document. In multi-label mode, multiple labels for an individual document are separated by a delimiter. The default delimiter between labels is a pipe (|).
", + "DocumentClassifierProperties$Mode": "Indicates the mode in which the specific classifier was trained. This also indicates the format of input documents and the format of the confusion matrix. Each classifier can only be trained in one mode and this cannot be changed once the classifier is trained.
" + } + }, "DocumentClassifierOutputDataConfig": { "base": "Provides output results configuration parameters for custom classifier jobs.
", "refs": { @@ -528,6 +535,12 @@ "ListDocumentClassifiersResponse$DocumentClassifierPropertiesList": "A list containing the properties of each job returned.
" } }, + "DocumentLabel": { + "base": "Specifies one of the label or labels that categorize the document being analyzed.
", + "refs": { + "ListOfLabels$member": null + } + }, "DominantLanguage": { "base": "Returns the code for the dominant language in the input text and the level of confidence that Amazon Comprehend has in the accuracy of the detection.
", "refs": { @@ -560,6 +573,10 @@ "ClassifierEvaluationMetrics$Precision": "A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones.
", "ClassifierEvaluationMetrics$Recall": "A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results.
", "ClassifierEvaluationMetrics$F1Score": "A measure of how accurate the classifier results are for the test data. It is derived from the Precision
and Recall
values. The F1Score
is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
A measure of the usefulness of the classifier results in the test data. High precision means that the classifier returned substantially more relevant results than irrelevant ones. Unlike the Precision metric, which comes from averaging the precision of all available labels, this is based on the overall score of all precision scores added together.
", + "ClassifierEvaluationMetrics$MicroRecall": "A measure of how complete the classifier results are for the test data. High recall means that the classifier returned most of the relevant results. Specifically, this indicates how many of the correct categories in the text that the model can predict. It is a percentage of correct categories in the text that can found. Instead of averaging the recall scores of all labels (as with Recall), micro Recall is based on the overall score of all recall scores added together.
", + "ClassifierEvaluationMetrics$MicroF1Score": "A measure of how accurate the classifier results are for the test data. It is a combination of the Micro Precision
and Micro Recall
values. The Micro F1Score
is the harmonic mean of the two scores. The highest score is 1, and the worst score is 0.
Indicates the fraction of labels that are incorrectly predicted. Also seen as the fraction of wrong labels compared to the total number of labels. Scores closer to zero are better.
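
For reference, the micro-averaged metrics described above follow the standard aggregate definitions (a sketch of the conventional formulas, not taken from the model text): with $TP_l$, $FP_l$, and $FN_l$ the true-positive, false-positive, and false-negative counts for label $l$,

$$P_{\text{micro}} = \frac{\sum_l TP_l}{\sum_l (TP_l + FP_l)}, \qquad R_{\text{micro}} = \frac{\sum_l TP_l}{\sum_l (TP_l + FN_l)}, \qquad F1_{\text{micro}} = \frac{2\, P_{\text{micro}} R_{\text{micro}}}{P_{\text{micro}} + R_{\text{micro}}},$$

and Hamming loss is the fraction of label slots predicted incorrectly across $N$ documents and $L$ labels:

$$\text{HammingLoss} = \frac{1}{NL} \sum_{n=1}^{N} \sum_{l=1}^{L} \mathbf{1}\big[\hat{y}_{n,l} \neq y_{n,l}\big].$$
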
", "EntityRecognizerEvaluationMetrics$Precision": "A measure of the usefulness of the recognizer results in the test data. High precision means that the recognizer returned substantially more relevant results than irrelevant ones.
", "EntityRecognizerEvaluationMetrics$Recall": "A measure of how complete the recognizer results are for the test data. High recall means that the recognizer returned most of the relevant results.
", "EntityRecognizerEvaluationMetrics$F1Score": "A measure of how accurate the recognizer results are for the test data. It is derived from the Precision
and Recall
values. The F1Score
is the harmonic average of the two scores. The highest score is 1, and the worst score is 0.
The confidence score that Amazon Comprehend has this class correctly attributed.
", + "DocumentLabel$Score": "The confidence score that Amazon Comprehend has this label correctly attributed.
", "DominantLanguage$Score": "The level of confidence that Amazon Comprehend has in the accuracy of the detection.
", "Entity$Score": "The level of confidence that Amazon Comprehend has in the accuracy of the detection.
", "KeyPhrase$Score": "The level of confidence that Amazon Comprehend has in the accuracy of the detection.
", @@ -975,6 +993,12 @@ "refs": { } }, + "LabelDelimiter": { + "base": null, + "refs": { + "DocumentClassifierInputDataConfig$LabelDelimiter": "Indicates the delimiter used to separate each label for training a multi-label classifier. The default delimiter between labels is a pipe (|). You can use a different character as a delimiter (if it's an allowed character) by specifying it under Delimiter for labels. If the training documents use a delimiter other than the default or the delimiter you specify, the labels on that line will be combined to make a single unique label, such as LABELLABELLABEL.
" + } + }, "LanguageCode": { "base": null, "refs": { @@ -1123,6 +1147,12 @@ "DetectKeyPhrasesResponse$KeyPhrases": "A collection of key phrases that Amazon Comprehend identified in the input text. For each key phrase, the response provides the text of the key phrase, where the key phrase begins and ends, and the level of confidence that Amazon Comprehend has in the accuracy of the detection.
" } }, + "ListOfLabels": { + "base": null, + "refs": { + "ClassifyDocumentResponse$Labels": "The labels used the document being analyzed. These are used for multi-label trained models. Individual labels represent different categories that are related in some manner and are not multually exclusive. For example, a movie can be just an action movie, or it can be an action movie, a science fiction movie, and a comedy, all at the same time.
" + } + }, "ListOfSyntaxTokens": { "base": null, "refs": { @@ -1429,6 +1459,7 @@ "DetectSentimentRequest$Text": "A UTF-8 text string. Each string must contain fewer that 5,000 bytes of UTF-8 encoded characters.
", "DetectSyntaxRequest$Text": "A UTF-8 string. Each string must contain fewer that 5,000 bytes of UTF encoded characters.
", "DocumentClass$Name": "The name of the class.
", + "DocumentLabel$Name": "The name of the label.
", "DominantLanguage$LanguageCode": "The RFC 5646 language code for the dominant language. For more information about RFC 5646, see Tags for Identifying Languages on the IETF Tools web site.
", "Entity$Text": "The text of the entity.
", "InternalServerException$Message": null, diff --git a/models/apis/comprehendmedical/2018-10-30/api-2.json b/models/apis/comprehendmedical/2018-10-30/api-2.json index 2e6bbd8c160..96769dbe584 100644 --- a/models/apis/comprehendmedical/2018-10-30/api-2.json +++ b/models/apis/comprehendmedical/2018-10-30/api-2.json @@ -97,6 +97,40 @@ {"shape":"TextSizeLimitExceededException"} ] }, + "InferICD10CM":{ + "name":"InferICD10CM", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"InferICD10CMRequest"}, + "output":{"shape":"InferICD10CMResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InvalidRequestException"}, + {"shape":"InvalidEncodingException"}, + {"shape":"TextSizeLimitExceededException"} + ] + }, + "InferRxNorm":{ + "name":"InferRxNorm", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"InferRxNormRequest"}, + "output":{"shape":"InferRxNormResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ServiceUnavailableException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InvalidRequestException"}, + {"shape":"InvalidEncodingException"}, + {"shape":"TextSizeLimitExceededException"} + ] + }, "ListEntitiesDetectionV2Jobs":{ "name":"ListEntitiesDetectionV2Jobs", "http":{ @@ -404,12 +438,130 @@ ] }, "Float":{"type":"float"}, + "ICD10CMAttribute":{ + "type":"structure", + "members":{ + "Type":{"shape":"ICD10CMAttributeType"}, + "Score":{"shape":"Float"}, + "RelationshipScore":{"shape":"Float"}, + "Id":{"shape":"Integer"}, + "BeginOffset":{"shape":"Integer"}, + "EndOffset":{"shape":"Integer"}, + "Text":{"shape":"String"}, + "Traits":{"shape":"ICD10CMTraitList"} + } + }, + "ICD10CMAttributeList":{ + "type":"list", + "member":{"shape":"ICD10CMAttribute"} + }, + "ICD10CMAttributeType":{ + "type":"string", + "enum":[ + "ACUITY", + "DIRECTION", + "SYSTEM_ORGAN_SITE", + "QUALITY", + "QUANTITY" + ] + }, + "ICD10CMConcept":{ + "type":"structure", + "members":{ + "Description":{"shape":"String"}, + "Code":{"shape":"String"}, + "Score":{"shape":"Float"} + } + }, + "ICD10CMConceptList":{ + "type":"list", + "member":{"shape":"ICD10CMConcept"} + }, + "ICD10CMEntity":{ + "type":"structure", + "members":{ + "Id":{"shape":"Integer"}, + "Text":{"shape":"OntologyLinkingBoundedLengthString"}, + "Category":{"shape":"ICD10CMEntityCategory"}, + "Type":{"shape":"ICD10CMEntityType"}, + "Score":{"shape":"Float"}, + "BeginOffset":{"shape":"Integer"}, + "EndOffset":{"shape":"Integer"}, + "Attributes":{"shape":"ICD10CMAttributeList"}, + "Traits":{"shape":"ICD10CMTraitList"}, + "ICD10CMConcepts":{"shape":"ICD10CMConceptList"} + } + }, + "ICD10CMEntityCategory":{ + "type":"string", + "enum":["MEDICAL_CONDITION"] + }, + "ICD10CMEntityList":{ + "type":"list", + "member":{"shape":"ICD10CMEntity"} + }, + "ICD10CMEntityType":{ + "type":"string", + "enum":["DX_NAME"] + }, + "ICD10CMTrait":{ + "type":"structure", + "members":{ + "Name":{"shape":"ICD10CMTraitName"}, + "Score":{"shape":"Float"} + } + }, + "ICD10CMTraitList":{ + "type":"list", + "member":{"shape":"ICD10CMTrait"} + }, + "ICD10CMTraitName":{ + "type":"string", + "enum":[ + "NEGATION", + "DIAGNOSIS", + "SIGN", + "SYMPTOM" + ] + }, "IamRoleArn":{ "type":"string", "max":2048, "min":20, "pattern":"arn:aws(-[^:]+)?:iam::[0-9]{12}:role/.+" }, + "InferICD10CMRequest":{ + "type":"structure", + "required":["Text"], + "members":{ + "Text":{"shape":"OntologyLinkingBoundedLengthString"} + } + }, + 
"InferICD10CMResponse":{ + "type":"structure", + "required":["Entities"], + "members":{ + "Entities":{"shape":"ICD10CMEntityList"}, + "PaginationToken":{"shape":"String"}, + "ModelVersion":{"shape":"String"} + } + }, + "InferRxNormRequest":{ + "type":"structure", + "required":["Text"], + "members":{ + "Text":{"shape":"OntologyLinkingBoundedLengthString"} + } + }, + "InferRxNormResponse":{ + "type":"structure", + "required":["Entities"], + "members":{ + "Entities":{"shape":"RxNormEntityList"}, + "PaginationToken":{"shape":"String"}, + "ModelVersion":{"shape":"String"} + } + }, "InputDataConfig":{ "type":"structure", "required":["S3Bucket"], @@ -516,6 +668,11 @@ "min":1 }, "ModelVersion":{"type":"string"}, + "OntologyLinkingBoundedLengthString":{ + "type":"string", + "max":10000, + "min":1 + }, "OutputDataConfig":{ "type":"structure", "required":["S3Bucket"], @@ -531,6 +688,92 @@ }, "exception":true }, + "RxNormAttribute":{ + "type":"structure", + "members":{ + "Type":{"shape":"RxNormAttributeType"}, + "Score":{"shape":"Float"}, + "RelationshipScore":{"shape":"Float"}, + "Id":{"shape":"Integer"}, + "BeginOffset":{"shape":"Integer"}, + "EndOffset":{"shape":"Integer"}, + "Text":{"shape":"String"}, + "Traits":{"shape":"RxNormTraitList"} + } + }, + "RxNormAttributeList":{ + "type":"list", + "member":{"shape":"RxNormAttribute"} + }, + "RxNormAttributeType":{ + "type":"string", + "enum":[ + "DOSAGE", + "DURATION", + "FORM", + "FREQUENCY", + "RATE", + "ROUTE_OR_MODE", + "STRENGTH" + ] + }, + "RxNormConcept":{ + "type":"structure", + "members":{ + "Description":{"shape":"String"}, + "Code":{"shape":"String"}, + "Score":{"shape":"Float"} + } + }, + "RxNormConceptList":{ + "type":"list", + "member":{"shape":"RxNormConcept"} + }, + "RxNormEntity":{ + "type":"structure", + "members":{ + "Id":{"shape":"Integer"}, + "Text":{"shape":"OntologyLinkingBoundedLengthString"}, + "Category":{"shape":"RxNormEntityCategory"}, + "Type":{"shape":"RxNormEntityType"}, + "Score":{"shape":"Float"}, + "BeginOffset":{"shape":"Integer"}, + "EndOffset":{"shape":"Integer"}, + "Attributes":{"shape":"RxNormAttributeList"}, + "Traits":{"shape":"RxNormTraitList"}, + "RxNormConcepts":{"shape":"RxNormConceptList"} + } + }, + "RxNormEntityCategory":{ + "type":"string", + "enum":["MEDICATION"] + }, + "RxNormEntityList":{ + "type":"list", + "member":{"shape":"RxNormEntity"} + }, + "RxNormEntityType":{ + "type":"string", + "enum":[ + "BRAND_NAME", + "GENERIC_NAME" + ] + }, + "RxNormTrait":{ + "type":"structure", + "members":{ + "Name":{"shape":"RxNormTraitName"}, + "Score":{"shape":"Float"} + } + }, + "RxNormTraitList":{ + "type":"list", + "member":{"shape":"RxNormTrait"} + }, + "RxNormTraitName":{ + "type":"string", + "enum":["NEGATION"] + }, "S3Bucket":{ "type":"string", "max":63, diff --git a/models/apis/comprehendmedical/2018-10-30/docs-2.json b/models/apis/comprehendmedical/2018-10-30/docs-2.json index 6e629f91ee5..133933ba44b 100644 --- a/models/apis/comprehendmedical/2018-10-30/docs-2.json +++ b/models/apis/comprehendmedical/2018-10-30/docs-2.json @@ -5,8 +5,10 @@ "DescribeEntitiesDetectionV2Job": "Gets the properties associated with a medical entities detection job. Use this operation to get the status of a detection job.
", "DescribePHIDetectionJob": "Gets the properties associated with a protected health information (PHI) detection job. Use this operation to get the status of a detection job.
", "DetectEntities": "The DetectEntities
operation is deprecated. You should use the DetectEntitiesV2 operation instead.
Inspects the clinical text for a variety of medical entities and returns specific information about them such as entity category, location, and confidence score on that information.
", - "DetectEntitiesV2": "Inspects the clinical text for a variety of medical entities and returns specific information about them such as entity category, location, and confidence score on that information.
The DetectEntitiesV2
operation replaces the DetectEntities operation. This new action uses a different model for determining the entities in your medical text and changes the way that some entities are returned in the output. You should use the DetectEntitiesV2
operation in all new applications.
The DetectEntitiesV2
operation returns the Acuity
and Direction
entities as attributes instead of types. It does not return the Quality
or Quantity
entities.
Inspects the clinical text for a variety of medical entities and returns specific information about them such as entity category, location, and confidence score on that information.
The DetectEntitiesV2
operation replaces the DetectEntities operation. This new action uses a different model for determining the entities in your medical text and changes the way that some entities are returned in the output. You should use the DetectEntitiesV2
operation in all new applications.
The DetectEntitiesV2
operation returns the Acuity
and Direction
entities as attributes instead of types.
Inspects the clinical text for protected health information (PHI) entities and entity category, location, and confidence score on that information.
", + "InferICD10CM": "InferICD10CM detects medical conditions as entities listed in a patient record and links those entities to normalized concept identifiers in the ICD-10-CM knowledge base from the Centers for Disease Control.
", + "InferRxNorm": "InferRxNorm detects medications as entities listed in a patient record and links to the normalized concept identifiers in the RxNorm database from the National Library of Medicine.
", "ListEntitiesDetectionV2Jobs": "Gets a list of medical entity detection jobs that you have submitted.
", "ListPHIDetectionJobs": "Gets a list of protected health information (PHI) detection jobs that you have submitted.
", "StartEntitiesDetectionV2Job": "Starts an asynchronous medical entity detection job for a collection of documents. Use the DescribeEntitiesDetectionV2Job
operation to track the status of a job.
A UTF-8 text string containing the clinical content being examined for entities. Each string must contain fewer than 20,000 bytes of characters.
", "DetectEntitiesV2Request$Text": "A UTF-8 string containing the clinical content being examined for entities. Each string must contain fewer than 20,000 bytes of characters.
", - "DetectPHIRequest$Text": "A UTF-8 text string containing the clinical content being examined for PHI entities. Each string must contain fewer than 20,000 bytes of characters.
" + "DetectPHIRequest$Text": "A UTF-8 text string containing the clinical content being examined for PHI entities. Each string must contain fewer than 20,000 bytes of characters.
" } }, "ClientRequestTokenString": { @@ -145,7 +147,7 @@ "base": null, "refs": { "Attribute$Type": "The type of attribute.
", - "Entity$Type": "Describes the specific type of entity with category of entities.
" + "Entity$Type": "Describes the specific type of entity with category of entities.
" } }, "EntityType": { @@ -161,9 +163,92 @@ "Attribute$Score": "The level of confidence that Amazon Comprehend Medical has that the segment of text is correctly recognized as an attribute.
", "Attribute$RelationshipScore": "The level of confidence that Amazon Comprehend Medical has that this attribute is correctly related to this entity.
", "Entity$Score": "The level of confidence that Amazon Comprehend Medical has in the accuracy of the detection.
", + "ICD10CMAttribute$Score": "The level of confidence that Amazon Comprehend Medical has that the segment of text is correctly recognized as an attribute.
", + "ICD10CMAttribute$RelationshipScore": "The level of confidence that Amazon Comprehend Medical has that this attribute is correctly related to this entity.
", + "ICD10CMConcept$Score": "The level of confidence that Amazon Comprehend Medical has that the entity is accurately linked to an ICD-10-CM concept.
", + "ICD10CMEntity$Score": "The level of confidence that Amazon Comprehend Medical has in the accuracy of the detection.
", + "ICD10CMTrait$Score": "The level of confidence that Amazon Comprehend Medical has that the segment of text is correctly recognized as a trait.
", + "RxNormAttribute$Score": "The level of confidence that Comprehend Medical has that the segment of text is correctly recognized as an attribute.
", + "RxNormAttribute$RelationshipScore": "The level of confidence that Amazon Comprehend Medical has that the attribute is accurately linked to an entity.
", + "RxNormConcept$Score": "The level of confidence that Amazon Comprehend Medical has that the entity is accurately linked to the reported RxNorm concept.
", + "RxNormEntity$Score": "The level of confidence that Amazon Comprehend Medical has in the accuracy of the detected entity.
", + "RxNormTrait$Score": "The level of confidence that Amazon Comprehend Medical has in the accuracy of the detected trait.
", "Trait$Score": "The level of confidence that Amazon Comprehend Medical has in the accuracy of this trait.
" } }, + "ICD10CMAttribute": { + "base": "The detected attributes that relate to an entity. This includes an extracted segment of the text that is an attribute of an entity, or otherwise related to an entity. InferICD10CM detects the following attributes: Direction
, System, Organ or Site
, and Acuity
.
The detected attributes that relate to the entity. An extracted segment of the text that is an attribute of an entity, or otherwise related to an entity, such as the nature of a medical condition.
" + } + }, + "ICD10CMAttributeType": { + "base": null, + "refs": { + "ICD10CMAttribute$Type": "The type of attribute. InferICD10CM detects entities of the type DX_NAME
.
The ICD-10-CM concepts that the entity could refer to, along with a score indicating the likelihood of the match.
", + "refs": { + "ICD10CMConceptList$member": null + } + }, + "ICD10CMConceptList": { + "base": null, + "refs": { + "ICD10CMEntity$ICD10CMConcepts": "The ICD-10-CM concepts that the entity could refer to, along with a score indicating the likelihood of the match.
" + } + }, + "ICD10CMEntity": { + "base": "The collection of medical entities extracted from the input text and their associated information. For each entity, the response provides the entity text, the entity category, where the entity text begins and ends, and the level of confidence that Amazon Comprehend Medical has in the detection and analysis. Attributes and traits of the entity are also returned.
", + "refs": { + "ICD10CMEntityList$member": null + } + }, + "ICD10CMEntityCategory": { + "base": null, + "refs": { + "ICD10CMEntity$Category": " The category of the entity. InferICD10CM detects entities in the MEDICAL_CONDITION
category.
The medical conditions detected in the text linked to ICD-10-CM concepts. If the action is successful, the service sends back an HTTP 200 response, as well as the entities detected.
" + } + }, + "ICD10CMEntityType": { + "base": null, + "refs": { + "ICD10CMEntity$Type": "Describes the specific type of entity with category of entities. InferICD10CM detects entities of the type DX_NAME
.
Contextual information for the entity. The traits recognized by InferICD10CM are DIAGNOSIS
, SIGN
, SYMPTOM
, and NEGATION
.
The contextual information for the attribute. The traits recognized by InferICD10CM are DIAGNOSIS
, SIGN
, SYMPTOM
, and NEGATION
.
Provides Contextual information for the entity. The traits recognized by InferICD10CM are DIAGNOSIS
, SIGN
, SYMPTOM
, and NEGATION.
Provides a name or contextual description about the trait.
" + } + }, "IamRoleArn": { "base": null, "refs": { @@ -172,6 +257,26 @@ "StartPHIDetectionJobRequest$DataAccessRoleArn": "The Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants Amazon Comprehend Medical read access to your input data. For more information, see Role-Based Permissions Required for Asynchronous Operations.
" } }, + "InferICD10CMRequest": { + "base": null, + "refs": { + } + }, + "InferICD10CMResponse": { + "base": null, + "refs": { + } + }, + "InferRxNormRequest": { + "base": null, + "refs": { + } + }, + "InferRxNormResponse": { + "base": null, + "refs": { + } + }, "InputDataConfig": { "base": "The input properties for an entities detection job
", "refs": { @@ -185,10 +290,22 @@ "refs": { "Attribute$Id": "The numeric identifier for this attribute. This is a monotonically increasing id unique within this response rather than a global unique identifier.
", "Attribute$BeginOffset": "The 0-based character offset in the input text that shows where the attribute begins. The offset returns the UTF-8 code point in the string.
", - "Attribute$EndOffset": "The 0-based character offset in the input text that shows where the attribute ends. The offset returns the UTF-8 code point in the string.
", + "Attribute$EndOffset": "The 0-based character offset in the input text that shows where the attribute ends. The offset returns the UTF-8 code point in the string.
", "Entity$Id": "The numeric identifier for the entity. This is a monotonically increasing id unique within this response rather than a global unique identifier.
", "Entity$BeginOffset": "The 0-based character offset in the input text that shows where the entity begins. The offset returns the UTF-8 code point in the string.
", - "Entity$EndOffset": "The 0-based character offset in the input text that shows where the entity ends. The offset returns the UTF-8 code point in the string.
" + "Entity$EndOffset": "The 0-based character offset in the input text that shows where the entity ends. The offset returns the UTF-8 code point in the string.
", + "ICD10CMAttribute$Id": "The numeric identifier for this attribute. This is a monotonically increasing id unique within this response rather than a global unique identifier.
", + "ICD10CMAttribute$BeginOffset": "The 0-based character offset in the input text that shows where the attribute begins. The offset returns the UTF-8 code point in the string.
", + "ICD10CMAttribute$EndOffset": "The 0-based character offset in the input text that shows where the attribute ends. The offset returns the UTF-8 code point in the string.
", + "ICD10CMEntity$Id": "The numeric identifier for the entity. This is a monotonically increasing id unique within this response rather than a global unique identifier.
", + "ICD10CMEntity$BeginOffset": "The 0-based character offset in the input text that shows where the entity begins. The offset returns the UTF-8 code point in the string.
", + "ICD10CMEntity$EndOffset": "The 0-based character offset in the input text that shows where the entity ends. The offset returns the UTF-8 code point in the string.
", + "RxNormAttribute$Id": "The numeric identifier for this attribute. This is a monotonically increasing id unique within this response rather than a global unique identifier.
", + "RxNormAttribute$BeginOffset": "The 0-based character offset in the input text that shows where the attribute begins. The offset returns the UTF-8 code point in the string.
", + "RxNormAttribute$EndOffset": "The 0-based character offset in the input text that shows where the attribute ends. The offset returns the UTF-8 code point in the string.
", + "RxNormEntity$Id": "The numeric identifier for the entity. This is a monotonically increasing id unique within this response rather than a global unique identifier.
", + "RxNormEntity$BeginOffset": "The 0-based character offset in the input text that shows where the entity begins. The offset returns the UTF-8 code point in the string.
", + "RxNormEntity$EndOffset": "The 0-based character offset in the input text that shows where the entity ends. The offset returns the UTF-8 code point in the string.
" } }, "InternalServerException": { @@ -291,6 +408,15 @@ "ComprehendMedicalAsyncJobProperties$ModelVersion": "The version of the model used to analyze the documents. The version number looks like X.X.X. You can use this information to track the model used for a particular batch of documents.
" } }, + "OntologyLinkingBoundedLengthString": { + "base": null, + "refs": { + "ICD10CMEntity$Text": "The segment of input text that is matched to the detected entity.
", + "InferICD10CMRequest$Text": "The input text used for analysis. The input for InferICD10CM is a string from 1 to 10000 characters.
", + "InferRxNormRequest$Text": "The input text used for analysis. The input for InferRxNorm is a string from 1 to 10000 characters.
", + "RxNormEntity$Text": "The segment of input text extracted from which the entity was detected.
" + } + }, "OutputDataConfig": { "base": "The output properties for a detection job.
", "refs": { @@ -304,6 +430,79 @@ "refs": { } }, + "RxNormAttribute": { + "base": "The extracted attributes that relate to this entity. The attributes recognized by InferRxNorm are DOSAGE
, DURATION
, FORM
, FREQUENCY
, RATE
, ROUTE_OR_MODE
.
The extracted attributes that relate to the entity. The attributes recognized by InferRxNorm are DOSAGE
, DURATION
, FORM
, FREQUENCY
, RATE
, ROUTE_OR_MODE
, and STRENGTH
.
The type of attribute. The types of attributes recognized by InferRxNorm are DOSAGE, DURATION, FORM, FREQUENCY, RATE, ROUTE_OR_MODE, and STRENGTH.
The RxNorm concept that the entity could refer to, along with a score indicating the likelihood of the match.
", + "refs": { + "RxNormConceptList$member": null + } + }, + "RxNormConceptList": { + "base": null, + "refs": { + "RxNormEntity$RxNormConcepts": "The RxNorm concepts that the entity could refer to, along with a score indicating the likelihood of the match.
" + } + }, + "RxNormEntity": { + "base": "The collection of medical entities extracted from the input text and their associated information. For each entity, the response provides the entity text, the entity category, where the entity text begins and ends, and the level of confidence that Amazon Comprehend Medical has in the detection and analysis. Attributes and traits of the entity are also returned.
", + "refs": { + "RxNormEntityList$member": null + } + }, + "RxNormEntityCategory": { + "base": null, + "refs": { + "RxNormEntity$Category": "The category of the entity. The recognized categories are GENERIC
or BRAND_NAME
.
The medication entities detected in the text linked to RxNorm concepts. If the action is successful, the service sends back an HTTP 200 response, as well as the entities detected.
" + } + }, + "RxNormEntityType": { + "base": null, + "refs": { + "RxNormEntity$Type": " Describes the specific type of entity. For InferRxNorm, the recognized entity type is MEDICATION
.
The contextual information for the entity. InferRxNorm recognizes the trait NEGATION
, which is any indication that the patient is not taking a medication.
Contextual information for the attribute. InferRxNorm recognizes the trait NEGATION
for attributes, i.e. that the patient is not taking a specific dose or form of a medication.
Contextual information for the entity.
" + } + }, + "RxNormTraitName": { + "base": null, + "refs": { + "RxNormTrait$Name": "Provides a name or contextual description about the trait.
" + } + }, "S3Bucket": { "base": null, "refs": { @@ -374,6 +573,13 @@ "DetectPHIResponse$PaginationToken": " If the result of the previous request to DetectPHI
was truncated, include the PaginationToken
to fetch the next page of PHI entities.
The version of the model used to analyze the documents. The version number looks like X.X.X. You can use this information to track the model used for a particular batch of documents.
", "Entity$Text": "The segment of input text extracted as this entity.
", + "ICD10CMAttribute$Text": "The segment of input text which contains the detected attribute.
", + "ICD10CMConcept$Description": "The long description of the ICD-10-CM code in the ontology.
", + "ICD10CMConcept$Code": "The ICD-10-CM code that identifies the concept found in the knowledge base from the Centers for Disease Control.
", + "InferICD10CMResponse$PaginationToken": "If the result of the previous request to InferICD10CM
was truncated, include the PaginationToken
to fetch the next page of medical condition entities.
The version of the model used to analyze the documents, in the format n.n.n You can use this information to track the model used for a particular batch of documents.
", + "InferRxNormResponse$PaginationToken": "If the result of the previous request to InferRxNorm
was truncated, include the PaginationToken
to fetch the next page of medication entities.
The version of the model used to analyze the documents, in the format n.n.n You can use this information to track the model used for a particular batch of documents.
", "InternalServerException$Message": null, "InvalidEncodingException$Message": null, "InvalidRequestException$Message": null, @@ -382,6 +588,9 @@ "ListPHIDetectionJobsRequest$NextToken": "Identifies the next page of results to return.
", "ListPHIDetectionJobsResponse$NextToken": "Identifies the next page of results to return.
", "ResourceNotFoundException$Message": null, + "RxNormAttribute$Text": "The segment of input text which corresponds to the detected attribute.
", + "RxNormConcept$Description": "The description of the RxNorm concept.
", + "RxNormConcept$Code": "RxNorm concept ID, also known as the RxCUI.
", "ServiceUnavailableException$Message": null, "TextSizeLimitExceededException$Message": null, "TooManyRequestsException$Message": null, @@ -418,7 +627,7 @@ "base": null, "refs": { "Attribute$Traits": "Contextual information for this attribute.
", - "Entity$Traits": "Contextual information for the entity
" + "Entity$Traits": "Contextual information for the entity.
" } }, "UnmappedAttribute": { diff --git a/models/apis/datasync/2018-11-09/api-2.json b/models/apis/datasync/2018-11-09/api-2.json index 0906e9121f7..5ca332364cc 100644 --- a/models/apis/datasync/2018-11-09/api-2.json +++ b/models/apis/datasync/2018-11-09/api-2.json @@ -53,6 +53,19 @@ {"shape":"InternalException"} ] }, + "CreateLocationFsxWindows":{ + "name":"CreateLocationFsxWindows", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateLocationFsxWindowsRequest"}, + "output":{"shape":"CreateLocationFsxWindowsResponse"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InternalException"} + ] + }, "CreateLocationNfs":{ "name":"CreateLocationNfs", "http":{ @@ -170,6 +183,19 @@ {"shape":"InternalException"} ] }, + "DescribeLocationFsxWindows":{ + "name":"DescribeLocationFsxWindows", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeLocationFsxWindowsRequest"}, + "output":{"shape":"DescribeLocationFsxWindowsResponse"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"InternalException"} + ] + }, "DescribeLocationNfs":{ "name":"DescribeLocationNfs", "http":{ @@ -450,7 +476,7 @@ "Ec2Config" ], "members":{ - "Subdirectory":{"shape":"Subdirectory"}, + "Subdirectory":{"shape":"EfsSubdirectory"}, "EfsFilesystemArn":{"shape":"EfsFilesystemArn"}, "Ec2Config":{"shape":"Ec2Config"}, "Tags":{"shape":"TagList"} @@ -462,6 +488,30 @@ "LocationArn":{"shape":"LocationArn"} } }, + "CreateLocationFsxWindowsRequest":{ + "type":"structure", + "required":[ + "FsxFilesystemArn", + "SecurityGroupArns", + "User", + "Password" + ], + "members":{ + "Subdirectory":{"shape":"FsxWindowsSubdirectory"}, + "FsxFilesystemArn":{"shape":"FsxFilesystemArn"}, + "SecurityGroupArns":{"shape":"Ec2SecurityGroupArnList"}, + "Tags":{"shape":"TagList"}, + "User":{"shape":"SmbUser"}, + "Domain":{"shape":"SmbDomain"}, + "Password":{"shape":"SmbPassword"} + } + }, + "CreateLocationFsxWindowsResponse":{ + "type":"structure", + "members":{ + "LocationArn":{"shape":"LocationArn"} + } + }, "CreateLocationNfsRequest":{ "type":"structure", "required":[ @@ -470,7 +520,7 @@ "OnPremConfig" ], "members":{ - "Subdirectory":{"shape":"NonEmptySubdirectory"}, + "Subdirectory":{"shape":"NfsSubdirectory"}, "ServerHostname":{"shape":"ServerHostname"}, "OnPremConfig":{"shape":"OnPremConfig"}, "MountOptions":{"shape":"NfsMountOptions"}, @@ -490,7 +540,7 @@ "S3Config" ], "members":{ - "Subdirectory":{"shape":"Subdirectory"}, + "Subdirectory":{"shape":"S3Subdirectory"}, "S3BucketArn":{"shape":"S3BucketArn"}, "S3StorageClass":{"shape":"S3StorageClass"}, "S3Config":{"shape":"S3Config"}, @@ -513,7 +563,7 @@ "AgentArns" ], "members":{ - "Subdirectory":{"shape":"NonEmptySubdirectory"}, + "Subdirectory":{"shape":"SmbSubdirectory"}, "ServerHostname":{"shape":"ServerHostname"}, "User":{"shape":"SmbUser"}, "Domain":{"shape":"SmbDomain"}, @@ -623,6 +673,24 @@ "CreationTime":{"shape":"Time"} } }, + "DescribeLocationFsxWindowsRequest":{ + "type":"structure", + "required":["LocationArn"], + "members":{ + "LocationArn":{"shape":"LocationArn"} + } + }, + "DescribeLocationFsxWindowsResponse":{ + "type":"structure", + "members":{ + "LocationArn":{"shape":"LocationArn"}, + "LocationUri":{"shape":"LocationUri"}, + "SecurityGroupArns":{"shape":"Ec2SecurityGroupArnList"}, + "CreationTime":{"shape":"Time"}, + "User":{"shape":"SmbUser"}, + "Domain":{"shape":"SmbDomain"} + } + }, "DescribeLocationNfsRequest":{ "type":"structure", "required":["LocationArn"], @@ -767,6 +835,11 @@ "max":128, 
"pattern":"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):elasticfilesystem:[a-z\\-0-9]*:[0-9]{12}:file-system/fs-.*$" }, + "EfsSubdirectory":{ + "type":"string", + "max":4096, + "pattern":"^[a-zA-Z0-9_\\-\\+\\./\\(\\)\\p{Zs}]*$" + }, "Endpoint":{ "type":"string", "max":15, @@ -805,6 +878,16 @@ "max":409600, "pattern":"^[^\\x00]+$" }, + "FsxFilesystemArn":{ + "type":"string", + "max":128, + "pattern":"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):fsx:[a-z\\-0-9]*:[0-9]{12}:file-system/fs-.*$" + }, + "FsxWindowsSubdirectory":{ + "type":"string", + "max":4096, + "pattern":"^[a-zA-Z0-9_\\-\\+\\./\\(\\)\\$\\p{Zs}]+$" + }, "Gid":{ "type":"string", "enum":[ @@ -927,14 +1010,22 @@ }, "LocationUri":{ "type":"string", - "max":4355, - "pattern":"^(efs|nfs|s3|smb)://[a-zA-Z0-9.\\-]+$" + "max":4356, + "pattern":"^(efs|nfs|s3|smb|fsxw)://[a-zA-Z0-9.\\-]+$" }, "LogGroupArn":{ "type":"string", "max":562, "pattern":"^arn:(aws|aws-cn|aws-us-gov|aws-iso|aws-iso-b):logs:[a-z\\-0-9]*:[0-9]{12}:log-group:([^:\\*]*)$" }, + "LogLevel":{ + "type":"string", + "enum":[ + "OFF", + "BASIC", + "TRANSFER" + ] + }, "MaxResults":{ "type":"integer", "max":100, @@ -963,6 +1054,11 @@ "Version":{"shape":"NfsVersion"} } }, + "NfsSubdirectory":{ + "type":"string", + "max":4096, + "pattern":"^[a-zA-Z0-9_\\-\\+\\./\\(\\)\\p{Zs}]+$" + }, "NfsVersion":{ "type":"string", "enum":[ @@ -972,11 +1068,6 @@ "NFS4_1" ] }, - "NonEmptySubdirectory":{ - "type":"string", - "max":4096, - "pattern":"^[a-zA-Z0-9_\\-\\+\\./\\(\\)\\p{Zs}]+$" - }, "OnPremConfig":{ "type":"structure", "required":["AgentArns"], @@ -997,7 +1088,8 @@ "PreserveDevices":{"shape":"PreserveDevices"}, "PosixPermissions":{"shape":"PosixPermissions"}, "BytesPerSecond":{"shape":"BytesPerSecond"}, - "TaskQueueing":{"shape":"TaskQueueing"} + "TaskQueueing":{"shape":"TaskQueueing"}, + "LogLevel":{"shape":"LogLevel"} } }, "OverwriteMode":{ @@ -1080,6 +1172,11 @@ "DEEP_ARCHIVE" ] }, + "S3Subdirectory":{ + "type":"string", + "max":4096, + "pattern":"^[a-zA-Z0-9_\\-\\+\\./\\(\\)\\p{Zs}]*$" + }, "ScheduleExpressionCron":{ "type":"string", "max":256, @@ -1107,6 +1204,11 @@ "pattern":"^.{0,104}$", "sensitive":true }, + "SmbSubdirectory":{ + "type":"string", + "max":4096, + "pattern":"^[a-zA-Z0-9_\\-\\+\\./\\(\\)\\$\\p{Zs}]+$" + }, "SmbUser":{ "type":"string", "max":104, @@ -1139,11 +1241,6 @@ "TaskExecutionArn":{"shape":"TaskExecutionArn"} } }, - "Subdirectory":{ - "type":"string", - "max":4096, - "pattern":"^[a-zA-Z0-9_\\-\\+\\./\\(\\)\\p{Zs}]*$" - }, "TagKey":{ "type":"string", "max":256, @@ -1190,7 +1287,7 @@ "type":"string", "max":256, "min":1, - "pattern":"^[a-zA-Z0-9\\s+=._:/-]+$" + "pattern":"^[a-zA-Z0-9\\s+=._:@/-]+$" }, "TaggableResourceArn":{ "type":"string", diff --git a/models/apis/datasync/2018-11-09/docs-2.json b/models/apis/datasync/2018-11-09/docs-2.json index 73231a0ec73..f41e68db338 100644 --- a/models/apis/datasync/2018-11-09/docs-2.json +++ b/models/apis/datasync/2018-11-09/docs-2.json @@ -5,6 +5,7 @@ "CancelTaskExecution": "Cancels execution of a task.
When you cancel a task execution, the transfer of some files is abruptly interrupted. The contents of files that are transferred to the destination might be incomplete or inconsistent with the source files. However, if you start a new task execution on the same task and you allow the task execution to complete, file content on the destination is complete and consistent. This applies to other unexpected failures that interrupt a task execution. In all of these cases, AWS DataSync successfully completes the transfer when you start the next task execution.
", "CreateAgent": "Activates an AWS DataSync agent that you have deployed on your host. The activation process associates your agent with your account. In the activation process, you specify information such as the AWS Region that you want to activate the agent in. You activate the agent in the AWS Region where your target locations (in Amazon S3 or Amazon EFS) reside. Your tasks are created in this AWS Region.
You can activate the agent in a VPC (Virtual Private Cloud) or provide the agent access to a VPC endpoint so you can run tasks without going over the public Internet.
You can use an agent for more than one location. If a task uses multiple agents, including multiple agents for a single source location, all of them must have the AVAILABLE status for the task to run.
Agents are automatically updated by AWS on a regular basis, using a mechanism that ensures minimal interruption to your tasks.
", "CreateLocationEfs": "Creates an endpoint for an Amazon EFS file system.
", + "CreateLocationFsxWindows": "Creates an endpoint for an Amazon FSx for Windows file system.
", "CreateLocationNfs": "Defines a file system on a Network File System (NFS) server that can be read from or written to
", "CreateLocationS3": "Creates an endpoint for an Amazon S3 bucket.
For AWS DataSync to access a destination S3 bucket, it needs an AWS Identity and Access Management (IAM) role that has the required permissions. You can set up the required permissions by creating an IAM policy that grants the required permissions and attaching the policy to the role. An example of such a policy is shown in the examples section.
For more information, see https://docs.aws.amazon.com/datasync/latest/userguide/working-with-locations.html#create-s3-location in the AWS DataSync User Guide.
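The IAM-role requirement described above surfaces in the API as the S3Config member of the CreateLocationS3 call. Below is a minimal Go sketch against the v2 SDK, assuming the generated Request/Send pattern and a datasync.Client type; both ARNs and the subdirectory are placeholder values, not anything defined by this model.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/datasync"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := datasync.New(cfg)

	// S3Config carries the IAM role that grants DataSync access to the bucket.
	resp, err := svc.CreateLocationS3Request(&datasync.CreateLocationS3Input{
		S3BucketArn:  aws.String("arn:aws:s3:::example-destination-bucket"), // placeholder
		Subdirectory: aws.String("/incoming"),                               // placeholder
		S3Config: &datasync.S3Config{
			BucketAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/DataSyncS3Access"), // placeholder
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created location:", aws.StringValue(resp.LocationArn))
}
```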
", "CreateLocationSmb": "Defines a file system on an Server Message Block (SMB) server that can be read from or written to.
", @@ -14,6 +15,7 @@ "DeleteTask": "Deletes a task.
", "DescribeAgent": "Returns metadata such as the name, the network interfaces, and the status (that is, whether the agent is running or not) for an agent. To specify which agent to describe, use the Amazon Resource Name (ARN) of the agent in your request.
", "DescribeLocationEfs": "Returns metadata, such as the path information about an Amazon EFS location.
", + "DescribeLocationFsxWindows": "Returns metadata, such as the path information about an Amazon FSx for Windows location.
", "DescribeLocationNfs": "Returns metadata, such as the path information, about a NFS location.
", "DescribeLocationS3": "Returns metadata, such as bucket name, about an Amazon S3 bucket location.
", "DescribeLocationSmb": "Returns metadata, such as the path and user information about a SMB location.
", @@ -118,6 +120,16 @@ "refs": { } }, + "CreateLocationFsxWindowsRequest": { + "base": null, + "refs": { + } + }, + "CreateLocationFsxWindowsResponse": { + "base": null, + "refs": { + } + }, "CreateLocationNfsRequest": { "base": "CreateLocationNfsRequest
", "refs": { @@ -208,6 +220,16 @@ "refs": { } }, + "DescribeLocationFsxWindowsRequest": { + "base": null, + "refs": { + } + }, + "DescribeLocationFsxWindowsResponse": { + "base": null, + "refs": { + } + }, "DescribeLocationNfsRequest": { "base": "DescribeLocationNfsRequest
", "refs": { @@ -290,6 +312,8 @@ "Ec2SecurityGroupArnList": { "base": null, "refs": { + "CreateLocationFsxWindowsRequest$SecurityGroupArns": "The Amazon Resource Names (ARNs) of the security groups that are to use to configure the FSx for Windows file system.
", + "DescribeLocationFsxWindowsResponse$SecurityGroupArns": "The Amazon Resource Names (ARNs) of the security groups that are configured for the for the FSx for Windows file system.
", "Ec2Config$SecurityGroupArns": "The Amazon Resource Names (ARNs) of the security groups that are configured for the Amazon EC2 resource.
" } }, @@ -306,6 +330,12 @@ "CreateLocationEfsRequest$EfsFilesystemArn": "The Amazon Resource Name (ARN) for the Amazon EFS file system.
" } }, + "EfsSubdirectory": { + "base": null, + "refs": { + "CreateLocationEfsRequest$Subdirectory": "A subdirectory in the location’s path. This subdirectory in the EFS file system is used to read data from the EFS source location or write data to the EFS destination. By default, AWS DataSync uses the root directory.
Subdirectory
must be specified with forward slashes. For example /path/to/folder
.
A single filter string that consists of the patterns to include or exclude. The patterns are delimited by \"|\" (that is, a pipe), for example: /folder1|/folder2
" } }, + "FsxFilesystemArn": { + "base": null, + "refs": { + "CreateLocationFsxWindowsRequest$FsxFilesystemArn": "
The Amazon Resource Name (ARN) for the FSx for Windows file system.
" + } + }, + "FsxWindowsSubdirectory": { + "base": null, + "refs": { + "CreateLocationFsxWindowsRequest$Subdirectory": "A subdirectory in the location’s path. This subdirectory in the Amazon FSx for Windows file system is used to read data from the Amazon FSx for Windows source location or write data to the FSx for Windows destination.
" + } + }, "Gid": { "base": null, "refs": { @@ -423,6 +465,7 @@ "base": null, "refs": { "CreateLocationEfsResponse$LocationArn": "The Amazon Resource Name (ARN) of the Amazon EFS file system location that is created.
", + "CreateLocationFsxWindowsResponse$LocationArn": "The Amazon Resource Name (ARN) of the FSx for Windows file system location that is created.
", "CreateLocationNfsResponse$LocationArn": "The Amazon Resource Name (ARN) of the source NFS file system location that is created.
", "CreateLocationS3Response$LocationArn": "The Amazon Resource Name (ARN) of the source Amazon S3 bucket location that is created.
", "CreateLocationSmbResponse$LocationArn": "The Amazon Resource Name (ARN) of the source SMB file system location that is created.
", @@ -431,6 +474,8 @@ "DeleteLocationRequest$LocationArn": "The Amazon Resource Name (ARN) of the location to delete.
", "DescribeLocationEfsRequest$LocationArn": "The Amazon Resource Name (ARN) of the EFS location to describe.
", "DescribeLocationEfsResponse$LocationArn": "The Amazon resource Name (ARN) of the EFS location that was described.
", + "DescribeLocationFsxWindowsRequest$LocationArn": "The Amazon Resource Name (ARN) of the FSx for Windows location to describe.
", + "DescribeLocationFsxWindowsResponse$LocationArn": "The Amazon resource Name (ARN) of the FSx for Windows location that was described.
", "DescribeLocationNfsRequest$LocationArn": "The Amazon resource Name (ARN) of the NFS location to describe.
", "DescribeLocationNfsResponse$LocationArn": "The Amazon resource Name (ARN) of the NFS location that was described.
", "DescribeLocationS3Request$LocationArn": "The Amazon Resource Name (ARN) of the Amazon S3 bucket location to describe.
", @@ -458,6 +503,7 @@ "base": null, "refs": { "DescribeLocationEfsResponse$LocationUri": "The URL of the EFS location that was described.
", + "DescribeLocationFsxWindowsResponse$LocationUri": "The URL of the FSx for Windows location that was described.
", "DescribeLocationNfsResponse$LocationUri": "The URL of the source NFS location that was described.
", "DescribeLocationS3Response$LocationUri": "The URL of the Amazon S3 location that was described.
", "DescribeLocationSmbResponse$LocationUri": "The URL of the source SBM location that was described.
", @@ -472,6 +518,12 @@ "UpdateTaskRequest$CloudWatchLogGroupArn": "The Amazon Resource Name (ARN) of the resource name of the CloudWatch LogGroup.
" } }, + "LogLevel": { + "base": null, + "refs": { + "Options$LogLevel": "A value that determines the type of logs DataSync will deliver to your AWS CloudWatch Logs file. If set to OFF
, no logs will be delivered. BASIC
will deliver a few logs per transfer operation and TRANSFER
will deliver a verbose log that contains logs for every file that is transferred.
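Because the three levels are plain enum strings, selecting one is a single field on the task Options. A sketch under the same assumptions as the earlier DataSync examples; the generated constant names (for example datasync.LogLevelTransfer) and the task ARN are illustrative.

```go
// Sketch only; same assumptions as the DataSync examples above.
func enableTransferLogging(ctx context.Context, svc *datasync.Client) error {
	_, err := svc.UpdateTaskRequest(&datasync.UpdateTaskInput{
		TaskArn: aws.String("arn:aws:datasync:us-west-2:111122223333:task/task-0123456789abcdef0"), // placeholder
		Options: &datasync.Options{
			// TRANSFER is the most verbose level: one log entry per transferred file.
			LogLevel: datasync.LogLevelTransfer,
		},
	}).Send(ctx)
	return err
}
```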
The NFS mount options that DataSync used to mount your NFS share.
" } }, - "NfsVersion": { + "NfsSubdirectory": { "base": null, "refs": { - "NfsMountOptions$Version": "The specific NFS version that you want DataSync to use to mount your NFS share. If the server refuses to use the version specified, the sync will fail. If you don't specify a version, DataSync defaults to AUTOMATIC
. That is, DataSync automatically selects a version based on negotiation with the NFS server.
You can specify the following NFS versions:
NFSv3 - stateless protocol version that allows for asynchronous writes on the server.
NFSv4.0 - stateful, firewall-friendly protocol version that supports delegations and pseudo filesystems.
NFSv4.1 - stateful protocol version that supports sessions, directory delegations, and parallel data processing. Version 4.1 also includes all features available in version 4.0.
The subdirectory in the NFS file system that is used to read data from the NFS source location or write data to the NFS destination. The NFS path should be a path that's exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network.
To see all the paths exported by your NFS server, run \"showmount -e nfs-server-name
\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication.
To transfer all the data in the folder you specified, DataSync needs to have permissions to read all the data. To ensure this, either configure the NFS export with no_root_squash,
or ensure that the permissions for all of the files that you want DataSync to transfer allow read access for all users. Doing either enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.
For information about NFS export configuration, see 18.7. The /etc/exports Configuration File in the Red Hat Enterprise Linux documentation.
" } }, - "NonEmptySubdirectory": { + "NfsVersion": { "base": null, "refs": { - "CreateLocationNfsRequest$Subdirectory": "The subdirectory in the NFS file system that is used to read data from the NFS source location or write data to the NFS destination. The NFS path should be a path that's exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network.
To see all the paths exported by your NFS server, run \"showmount -e nfs-server-name
\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication.
To transfer all the data in the folder you specified, DataSync needs to have permissions to read all the data. To ensure this, either configure the NFS export with no_root_squash,
or ensure that the permissions for all of the files that you want DataSync to transfer allow read access for all users. Doing either enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.
For information about NFS export configuration, see 18.7. The /etc/exports Configuration File in the Red Hat Enterprise Linux documentation.
", - "CreateLocationSmbRequest$Subdirectory": "The subdirectory in the SMB file system that is used to read data from the SMB source location or write data to the SMB destination. The SMB path should be a path that's exported by the SMB server, or a subdirectory of that path. The path should be such that it can be mounted by other SMB clients in your network.
Subdirectory
must be specified with forward slashes. For example /path/to/folder
.
To transfer all the data in the folder you specified, DataSync needs to have permissions to mount the SMB share, as well as to access all the data in that share. To ensure this, either ensure that the user/password specified belongs to the user who can mount the share, and who has the appropriate permissions for all of the files and directories that you want DataSync to access, or use credentials of a member of the Backup Operators group to mount the share. Doing either enables the agent to access the data. For the agent to access directories, you must additionally enable all execute access.
" + "NfsMountOptions$Version": "The specific NFS version that you want DataSync to use to mount your NFS share. If the server refuses to use the version specified, the sync will fail. If you don't specify a version, DataSync defaults to AUTOMATIC
. That is, DataSync automatically selects a version based on negotiation with the NFS server.
You can specify the following NFS versions:
NFSv3 - stateless protocol version that allows for asynchronous writes on the server.
NFSv4.0 - stateful, firewall-friendly protocol version that supports delegations and pseudo filesystems.
NFSv4.1 - stateful protocol version that supports sessions, directory delegations, and parallel data processing. Version 4.1 also includes all features available in version 4.0.
The Amazon S3 storage class that you chose to store your files in when this location is used as a task destination. For more information about S3 storage classes, see Amazon S3 Storage Classes in the Amazon Simple Storage Service Developer Guide. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see using-storage-classes.
" } }, + "S3Subdirectory": { + "base": null, + "refs": { + "CreateLocationS3Request$Subdirectory": "A subdirectory in the Amazon S3 bucket. This subdirectory in Amazon S3 is used to read data from the S3 source location or write data to the S3 destination.
" + } + }, "ScheduleExpressionCron": { "base": null, "refs": { @@ -635,7 +692,9 @@ "SmbDomain": { "base": null, "refs": { + "CreateLocationFsxWindowsRequest$Domain": "The name of the Windows domain that the FSx for Windows server belongs to.
", "CreateLocationSmbRequest$Domain": "The name of the Windows domain that the SMB server belongs to.
", + "DescribeLocationFsxWindowsResponse$Domain": "The name of the Windows domain that the FSx for Windows server belongs to.
", "DescribeLocationSmbResponse$Domain": "The name of the Windows domain that the SMB server belongs to.
" } }, @@ -649,13 +708,22 @@ "SmbPassword": { "base": null, "refs": { + "CreateLocationFsxWindowsRequest$Password": "The password of the user who has the permissions to access files and folders in the FSx for Windows file system.
", "CreateLocationSmbRequest$Password": "The password of the user who can mount the share, has the permissions to access files and folders in the SMB share.
" } }, + "SmbSubdirectory": { + "base": null, + "refs": { + "CreateLocationSmbRequest$Subdirectory": "The subdirectory in the SMB file system that is used to read data from the SMB source location or write data to the SMB destination. The SMB path should be a path that's exported by the SMB server, or a subdirectory of that path. The path should be such that it can be mounted by other SMB clients in your network.
Subdirectory
must be specified with forward slashes. For example /path/to/folder
.
To transfer all the data in the folder you specified, DataSync needs to have permissions to mount the SMB share, as well as to access all the data in that share. To ensure this, either ensure that the user/password specified belongs to the user who can mount the share, and who has the appropriate permissions for all of the files and directories that you want DataSync to access, or use credentials of a member of the Backup Operators group to mount the share. Doing either enables the agent to access the data. For the agent to access directories, you must additionally enable all execute access.
" + } + }, "SmbUser": { "base": null, "refs": { + "CreateLocationFsxWindowsRequest$User": "The user who has the permissions to access files and folders in the FSx for Windows file system.
", "CreateLocationSmbRequest$User": "The user who can mount the share, has the permissions to access files and folders in the SMB share.
", + "DescribeLocationFsxWindowsResponse$User": "The user who has the permissions to access files and folders in the FSx for Windows file system.
", "DescribeLocationSmbResponse$User": "The user who can mount the share, has the permissions to access files and folders in the SMB share.
" } }, @@ -681,13 +749,6 @@ "refs": { } }, - "Subdirectory": { - "base": null, - "refs": { - "CreateLocationEfsRequest$Subdirectory": "A subdirectory in the location’s path. This subdirectory in the EFS file system is used to read data from the EFS source location or write data to the EFS destination. By default, AWS DataSync uses the root directory.
Subdirectory
must be specified with forward slashes. For example /path/to/folder
.
A subdirectory in the Amazon S3 bucket. This subdirectory in Amazon S3 is used to read data from the S3 source location or write data to the S3 destination.
" - } - }, "TagKey": { "base": null, "refs": { @@ -706,6 +767,7 @@ "refs": { "CreateAgentRequest$Tags": "The key-value pair that represents the tag that you want to associate with the agent. The value can be an empty string. This value helps you manage, filter, and search for your agents.
Valid characters for key and value are letters, spaces, and numbers representable in UTF-8 format, and the following special characters: + - = . _ : / @.
The key-value pair that represents a tag that you want to add to the resource. The value can be an empty string. This value helps you manage, filter, and search for your resources. We recommend that you create a name tag for your location.
", + "CreateLocationFsxWindowsRequest$Tags": "The key-value pair that represents a tag that you want to add to the resource. The value can be an empty string. This value helps you manage, filter, and search for your resources. We recommend that you create a name tag for your location.
", "CreateLocationNfsRequest$Tags": "The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
", "CreateLocationS3Request$Tags": "The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
", "CreateLocationSmbRequest$Tags": "The key-value pair that represents the tag that you want to add to the location. The value can be an empty string. We recommend using tags to name your resources.
", @@ -840,6 +902,7 @@ "DescribeAgentResponse$LastConnectionTime": "The time that the agent last connected to DataSyc.
", "DescribeAgentResponse$CreationTime": "The time that the agent was activated (that is, created in your account).
", "DescribeLocationEfsResponse$CreationTime": "The time that the EFS location was created.
", + "DescribeLocationFsxWindowsResponse$CreationTime": "The time that the FSx for Windows location was created.
", "DescribeLocationNfsResponse$CreationTime": "The time that the NFS location was created.
", "DescribeLocationS3Response$CreationTime": "The time that the Amazon S3 bucket location was created.
", "DescribeLocationSmbResponse$CreationTime": "The time that the SMB location was created.
", diff --git a/models/apis/detective/2018-10-26/api-2.json b/models/apis/detective/2018-10-26/api-2.json new file mode 100644 index 00000000000..ac4453e4045 --- /dev/null +++ b/models/apis/detective/2018-10-26/api-2.json @@ -0,0 +1,447 @@ +{ + "version":"2.0", + "metadata":{ + "apiVersion":"2018-10-26", + "endpointPrefix":"api.detective", + "jsonVersion":"1.1", + "protocol":"rest-json", + "serviceFullName":"Amazon Detective", + "serviceId":"Detective", + "signatureVersion":"v4", + "signingName":"detective", + "uid":"detective-2018-10-26" + }, + "operations":{ + "AcceptInvitation":{ + "name":"AcceptInvitation", + "http":{ + "method":"PUT", + "requestUri":"/invitation" + }, + "input":{"shape":"AcceptInvitationRequest"}, + "errors":[ + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "CreateGraph":{ + "name":"CreateGraph", + "http":{ + "method":"POST", + "requestUri":"/graph" + }, + "output":{"shape":"CreateGraphResponse"}, + "errors":[ + {"shape":"ConflictException"}, + {"shape":"InternalServerException"} + ] + }, + "CreateMembers":{ + "name":"CreateMembers", + "http":{ + "method":"POST", + "requestUri":"/graph/members" + }, + "input":{"shape":"CreateMembersRequest"}, + "output":{"shape":"CreateMembersResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"ServiceQuotaExceededException"} + ] + }, + "DeleteGraph":{ + "name":"DeleteGraph", + "http":{ + "method":"POST", + "requestUri":"/graph/removal" + }, + "input":{"shape":"DeleteGraphRequest"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "DeleteMembers":{ + "name":"DeleteMembers", + "http":{ + "method":"POST", + "requestUri":"/graph/members/removal" + }, + "input":{"shape":"DeleteMembersRequest"}, + "output":{"shape":"DeleteMembersResponse"}, + "errors":[ + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "DisassociateMembership":{ + "name":"DisassociateMembership", + "http":{ + "method":"POST", + "requestUri":"/membership/removal" + }, + "input":{"shape":"DisassociateMembershipRequest"}, + "errors":[ + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "GetMembers":{ + "name":"GetMembers", + "http":{ + "method":"POST", + "requestUri":"/graph/members/get" + }, + "input":{"shape":"GetMembersRequest"}, + "output":{"shape":"GetMembersResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "ListGraphs":{ + "name":"ListGraphs", + "http":{ + "method":"POST", + "requestUri":"/graphs/list" + }, + "input":{"shape":"ListGraphsRequest"}, + "output":{"shape":"ListGraphsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ] + }, + "ListInvitations":{ + "name":"ListInvitations", + "http":{ + "method":"POST", + "requestUri":"/invitations/list" + }, + "input":{"shape":"ListInvitationsRequest"}, + "output":{"shape":"ListInvitationsResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ValidationException"} + ] + }, + "ListMembers":{ + "name":"ListMembers", + "http":{ + "method":"POST", + 
"requestUri":"/graph/members/list" + }, + "input":{"shape":"ListMembersRequest"}, + "output":{"shape":"ListMembersResponse"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, + "RejectInvitation":{ + "name":"RejectInvitation", + "http":{ + "method":"POST", + "requestUri":"/invitation/removal" + }, + "input":{"shape":"RejectInvitationRequest"}, + "errors":[ + {"shape":"ConflictException"}, + {"shape":"InternalServerException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + } + }, + "shapes":{ + "AcceptInvitationRequest":{ + "type":"structure", + "required":["GraphArn"], + "members":{ + "GraphArn":{"shape":"GraphArn"} + } + }, + "Account":{ + "type":"structure", + "required":[ + "AccountId", + "EmailAddress" + ], + "members":{ + "AccountId":{"shape":"AccountId"}, + "EmailAddress":{"shape":"EmailAddress"} + } + }, + "AccountId":{ + "type":"string", + "max":12, + "min":12, + "pattern":"^[0-9]+$" + }, + "AccountIdList":{ + "type":"list", + "member":{"shape":"AccountId"}, + "max":50, + "min":1 + }, + "AccountList":{ + "type":"list", + "member":{"shape":"Account"}, + "max":50, + "min":1 + }, + "ConflictException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":409}, + "exception":true + }, + "CreateGraphResponse":{ + "type":"structure", + "members":{ + "GraphArn":{"shape":"GraphArn"} + } + }, + "CreateMembersRequest":{ + "type":"structure", + "required":[ + "GraphArn", + "Accounts" + ], + "members":{ + "GraphArn":{"shape":"GraphArn"}, + "Message":{"shape":"EmailMessage"}, + "Accounts":{"shape":"AccountList"} + } + }, + "CreateMembersResponse":{ + "type":"structure", + "members":{ + "Members":{"shape":"MemberDetailList"}, + "UnprocessedAccounts":{"shape":"UnprocessedAccountList"} + } + }, + "DeleteGraphRequest":{ + "type":"structure", + "required":["GraphArn"], + "members":{ + "GraphArn":{"shape":"GraphArn"} + } + }, + "DeleteMembersRequest":{ + "type":"structure", + "required":[ + "GraphArn", + "AccountIds" + ], + "members":{ + "GraphArn":{"shape":"GraphArn"}, + "AccountIds":{"shape":"AccountIdList"} + } + }, + "DeleteMembersResponse":{ + "type":"structure", + "members":{ + "AccountIds":{"shape":"AccountIdList"}, + "UnprocessedAccounts":{"shape":"UnprocessedAccountList"} + } + }, + "DisassociateMembershipRequest":{ + "type":"structure", + "required":["GraphArn"], + "members":{ + "GraphArn":{"shape":"GraphArn"} + } + }, + "EmailAddress":{ + "type":"string", + "max":64, + "min":1, + "pattern":"^.+@.+$" + }, + "EmailMessage":{ + "type":"string", + "max":1000, + "min":1 + }, + "ErrorMessage":{"type":"string"}, + "GetMembersRequest":{ + "type":"structure", + "required":[ + "GraphArn", + "AccountIds" + ], + "members":{ + "GraphArn":{"shape":"GraphArn"}, + "AccountIds":{"shape":"AccountIdList"} + } + }, + "GetMembersResponse":{ + "type":"structure", + "members":{ + "MemberDetails":{"shape":"MemberDetailList"}, + "UnprocessedAccounts":{"shape":"UnprocessedAccountList"} + } + }, + "Graph":{ + "type":"structure", + "members":{ + "Arn":{"shape":"GraphArn"}, + "CreatedTime":{"shape":"Timestamp"} + } + }, + "GraphArn":{ + "type":"string", + "pattern":"^arn:aws[-\\w]{0,10}?:detective:[-\\w]{2,20}?:\\d{12}?:graph:[abcdef\\d]{32}?$" + }, + "GraphList":{ + "type":"list", + "member":{"shape":"Graph"} + }, + "InternalServerException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + 
"error":{"httpStatusCode":500}, + "exception":true + }, + "ListGraphsRequest":{ + "type":"structure", + "members":{ + "NextToken":{"shape":"PaginationToken"}, + "MaxResults":{"shape":"MemberResultsLimit"} + } + }, + "ListGraphsResponse":{ + "type":"structure", + "members":{ + "GraphList":{"shape":"GraphList"}, + "NextToken":{"shape":"PaginationToken"} + } + }, + "ListInvitationsRequest":{ + "type":"structure", + "members":{ + "NextToken":{"shape":"PaginationToken"}, + "MaxResults":{"shape":"MemberResultsLimit"} + } + }, + "ListInvitationsResponse":{ + "type":"structure", + "members":{ + "Invitations":{"shape":"MemberDetailList"}, + "NextToken":{"shape":"PaginationToken"} + } + }, + "ListMembersRequest":{ + "type":"structure", + "required":["GraphArn"], + "members":{ + "GraphArn":{"shape":"GraphArn"}, + "NextToken":{"shape":"PaginationToken"}, + "MaxResults":{"shape":"MemberResultsLimit"} + } + }, + "ListMembersResponse":{ + "type":"structure", + "members":{ + "MemberDetails":{"shape":"MemberDetailList"}, + "NextToken":{"shape":"PaginationToken"} + } + }, + "MemberDetail":{ + "type":"structure", + "members":{ + "AccountId":{"shape":"AccountId"}, + "EmailAddress":{"shape":"EmailAddress"}, + "GraphArn":{"shape":"GraphArn"}, + "MasterId":{"shape":"AccountId"}, + "Status":{"shape":"MemberStatus"}, + "InvitedTime":{"shape":"Timestamp"}, + "UpdatedTime":{"shape":"Timestamp"} + } + }, + "MemberDetailList":{ + "type":"list", + "member":{"shape":"MemberDetail"} + }, + "MemberResultsLimit":{ + "type":"integer", + "box":true, + "max":200, + "min":1 + }, + "MemberStatus":{ + "type":"string", + "enum":[ + "INVITED", + "VERIFICATION_IN_PROGRESS", + "VERIFICATION_FAILED", + "ENABLED" + ] + }, + "PaginationToken":{ + "type":"string", + "max":1024, + "min":1 + }, + "RejectInvitationRequest":{ + "type":"structure", + "required":["GraphArn"], + "members":{ + "GraphArn":{"shape":"GraphArn"} + } + }, + "ResourceNotFoundException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":404}, + "exception":true + }, + "ServiceQuotaExceededException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":402}, + "exception":true + }, + "Timestamp":{"type":"timestamp"}, + "UnprocessedAccount":{ + "type":"structure", + "members":{ + "AccountId":{"shape":"AccountId"}, + "Reason":{"shape":"UnprocessedReason"} + } + }, + "UnprocessedAccountList":{ + "type":"list", + "member":{"shape":"UnprocessedAccount"} + }, + "UnprocessedReason":{"type":"string"}, + "ValidationException":{ + "type":"structure", + "members":{ + "Message":{"shape":"ErrorMessage"} + }, + "error":{"httpStatusCode":400}, + "exception":true + } + } +} diff --git a/models/apis/detective/2018-10-26/docs-2.json b/models/apis/detective/2018-10-26/docs-2.json new file mode 100644 index 00000000000..01d629ffbbf --- /dev/null +++ b/models/apis/detective/2018-10-26/docs-2.json @@ -0,0 +1,278 @@ +{ + "version": "2.0", + "service": "Amazon Detective is currently in preview. The Detective API can only be used by accounts that are admitted into the preview.
Detective uses machine learning and purpose-built visualizations to help you analyze and investigate security issues across your Amazon Web Services (AWS) workloads. Detective automatically extracts time-based events such as login attempts, API calls, and network traffic from AWS CloudTrail and Amazon Virtual Private Cloud (Amazon VPC) flow logs. It also extracts findings detected by Amazon GuardDuty.
The Detective API primarily supports the creation and management of behavior graphs. A behavior graph contains the extracted data from a set of member accounts, and is created and managed by a master account.
Every behavior graph is specific to a Region. You can only use the API to manage graphs that belong to the Region that is associated with the currently selected endpoint.
A Detective master account can use the Detective API to do the following:
Enable and disable Detective. Enabling Detective creates a new behavior graph.
View the list of member accounts in a behavior graph.
Add member accounts to a behavior graph.
Remove member accounts from a behavior graph.
A member account can use the Detective API to do the following:
View the list of behavior graphs that they are invited to.
Accept an invitation to contribute to a behavior graph.
Decline an invitation to contribute to a behavior graph.
Remove their account from a behavior graph.
All API actions are logged as CloudTrail events. See Logging Detective API Calls with CloudTrail.
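For the master account, the "enable Detective" step above is one CreateGraph call. A minimal sketch assuming the SDK generates a detective client from this new model with its usual Request/Send pattern; none of this is confirmed generated code.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/detective"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := detective.New(cfg)

	// CreateGraph has no request members; it enables Detective for the
	// calling account in the current Region and returns the graph ARN.
	resp, err := svc.CreateGraphRequest(&detective.CreateGraphInput{}).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("behavior graph:", aws.StringValue(resp.GraphArn))
}
```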
", + "operations": { + "AcceptInvitation": "Amazon Detective is currently in preview.
Accepts an invitation for the member account to contribute data to a behavior graph. This operation can only be called by an invited member account.
The request provides the ARN of the behavior graph.
The member account status in the graph must be INVITED
.
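On the member-account side, moving from INVITED toward ENABLED starts with AcceptInvitation. A small sketch under the same generated-client assumptions; svc and graphArn are hypothetical values carried over from earlier calls.

```go
// Sketch only; assumes svc := detective.New(cfg) and a graphArn obtained
// from the invitation (for example, via ListInvitations).
func acceptGraphInvitation(ctx context.Context, svc *detective.Client, graphArn string) error {
	_, err := svc.AcceptInvitationRequest(&detective.AcceptInvitationInput{
		GraphArn: aws.String(graphArn), // the member's status in this graph must be INVITED
	}).Send(ctx)
	return err
}
```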
Amazon Detective is currently in preview.
Creates a new behavior graph for the calling account, and sets that account as the master account. This operation is called by the account that is enabling Detective.
The operation also enables Detective for the calling account in the currently selected Region. It returns the ARN of the new behavior graph.
CreateGraph
triggers a process to create the corresponding data tables for the new behavior graph.
An account can only be the master account for one behavior graph within a Region. If the same account calls CreateGraph
with the same master account, it always returns the same behavior graph ARN. It does not create a new behavior graph.
Amazon Detective is currently in preview.
Sends a request to invite the specified AWS accounts to be member accounts in the behavior graph. This operation can only be called by the master account for a behavior graph.
CreateMembers
verifies the accounts and then sends invitations to the verified accounts.
The request provides the behavior graph ARN and the list of accounts to invite.
The response separates the requested accounts into two lists:
The accounts that CreateMembers
was able to start the verification for. This list includes member accounts that are being verified, that have passed verification and are being sent an invitation, and that have failed verification.
The accounts that CreateMembers
was unable to process. This list includes accounts that were already invited to be member accounts in the behavior graph.
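Handling the two response lists looks roughly like the following sketch, with the same generated-client assumptions; the graph ARN, account ID, and email address are placeholders.

```go
// Sketch only: sends invitations, then reports the two response lists
// separately (accounts entering verification vs. accounts skipped).
func inviteMembers(ctx context.Context, svc *detective.Client, graphArn string) error {
	resp, err := svc.CreateMembersRequest(&detective.CreateMembersInput{
		GraphArn: aws.String(graphArn),
		Message:  aws.String("Please contribute your account data to our behavior graph."),
		Accounts: []detective.Account{
			{
				AccountId:    aws.String("444455556666"),             // placeholder
				EmailAddress: aws.String("security-ops@example.com"), // must be the account root user email
			},
		},
	}).Send(ctx)
	if err != nil {
		return err
	}
	for _, m := range resp.Members {
		fmt.Printf("verification started: %s (%s)\n", aws.StringValue(m.AccountId), m.Status)
	}
	for _, u := range resp.UnprocessedAccounts {
		fmt.Printf("not processed: %s: %s\n", aws.StringValue(u.AccountId), aws.StringValue(u.Reason))
	}
	return nil
}
```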
Amazon Detective is currently in preview.
Disables the specified behavior graph and queues it to be deleted. This operation removes the graph from each member account's list of behavior graphs.
DeleteGraph
can only be called by the master account for a behavior graph.
Amazon Detective is currently in preview.
Deletes one or more member accounts from the master account behavior graph. This operation can only be called by a Detective master account. That account cannot use DeleteMembers
to delete their own account from the behavior graph. To disable a behavior graph, the master account uses the DeleteGraph
API method.
Amazon Detective is currently in preview.
Removes the member account from the specified behavior graph. This operation can only be called by a member account that has the ENABLED
status.
Amazon Detective is currently in preview.
Returns the membership details for specified member accounts for a behavior graph.
", + "ListGraphs": "Amazon Detective is currently in preview.
Returns the list of behavior graphs that the calling account is a master of. This operation can only be called by a master account.
Because an account can currently only be the master of one behavior graph within a Region, the results always contain a single graph.
", + "ListInvitations": "Amazon Detective is currently in preview.
Retrieves the list of open and accepted behavior graph invitations for the member account. This operation can only be called by a member account.
Open invitations are invitations that the member account has not responded to.
The results do not include behavior graphs for which the member account declined the invitation. The results also do not include behavior graphs that the member account resigned from or was removed from.
", + "ListMembers": "Amazon Detective is currently in preview.
Retrieves the list of member accounts for a behavior graph. Does not return member accounts that were removed from the behavior graph.
", + "RejectInvitation": "Amazon Detective is currently in preview.
Rejects an invitation to contribute the account data to a behavior graph. This operation must be called by a member account that has the INVITED
status.
Amazon Detective is currently in preview.
An AWS account that is the master of or a member of a behavior graph.
", + "refs": { + "AccountList$member": null + } + }, + "AccountId": { + "base": null, + "refs": { + "Account$AccountId": "The account identifier of the AWS account.
", + "AccountIdList$member": null, + "MemberDetail$AccountId": "The AWS account identifier for the member account.
", + "MemberDetail$MasterId": "The AWS account identifier of the master account for the behavior graph.
", + "UnprocessedAccount$AccountId": "The AWS account identifier of the member account that was not processed.
" + } + }, + "AccountIdList": { + "base": null, + "refs": { + "DeleteMembersRequest$AccountIds": "The list of AWS account identifiers for the member accounts to delete from the behavior graph.
", + "DeleteMembersResponse$AccountIds": "The list of AWS account identifiers for the member accounts that Detective successfully deleted from the behavior graph.
", + "GetMembersRequest$AccountIds": "The list of AWS account identifiers for the member account for which to return member details.
You cannot use GetMembers
to retrieve information about member accounts that were removed from the behavior graph.
The list of AWS accounts to invite to become member accounts in the behavior graph. For each invited account, the account list contains the account identifier and the AWS account root user email address.
" + } + }, + "ConflictException": { + "base": "The request attempted an invalid action.
", + "refs": { + } + }, + "CreateGraphResponse": { + "base": null, + "refs": { + } + }, + "CreateMembersRequest": { + "base": null, + "refs": { + } + }, + "CreateMembersResponse": { + "base": null, + "refs": { + } + }, + "DeleteGraphRequest": { + "base": null, + "refs": { + } + }, + "DeleteMembersRequest": { + "base": null, + "refs": { + } + }, + "DeleteMembersResponse": { + "base": null, + "refs": { + } + }, + "DisassociateMembershipRequest": { + "base": null, + "refs": { + } + }, + "EmailAddress": { + "base": null, + "refs": { + "Account$EmailAddress": "The AWS account root user email address for the AWS account.
", + "MemberDetail$EmailAddress": "The AWS account root user email address for the member account.
" + } + }, + "EmailMessage": { + "base": null, + "refs": { + "CreateMembersRequest$Message": "Customized message text to include in the invitation email message to the invited member accounts.
" + } + }, + "ErrorMessage": { + "base": null, + "refs": { + "ConflictException$Message": null, + "InternalServerException$Message": null, + "ResourceNotFoundException$Message": null, + "ServiceQuotaExceededException$Message": null, + "ValidationException$Message": null + } + }, + "GetMembersRequest": { + "base": null, + "refs": { + } + }, + "GetMembersResponse": { + "base": null, + "refs": { + } + }, + "Graph": { + "base": "Amazon Detective is currently in preview.
A behavior graph in Detective.
", + "refs": { + "GraphList$member": null + } + }, + "GraphArn": { + "base": null, + "refs": { + "AcceptInvitationRequest$GraphArn": "The ARN of the behavior graph that the member account is accepting the invitation for.
The member account status in the behavior graph must be INVITED
.
The ARN of the new behavior graph.
", + "CreateMembersRequest$GraphArn": "The ARN of the behavior graph to invite the member accounts to contribute their data to.
", + "DeleteGraphRequest$GraphArn": "The ARN of the behavior graph to disable.
", + "DeleteMembersRequest$GraphArn": "The ARN of the behavior graph to delete members from.
", + "DisassociateMembershipRequest$GraphArn": "The ARN of the behavior graph to remove the member account from.
The member account's member status in the behavior graph must be ENABLED
.
The ARN of the behavior graph for which to request the member details.
", + "Graph$Arn": "The ARN of the behavior graph.
", + "ListMembersRequest$GraphArn": "The ARN of the behavior graph for which to retrieve the list of member accounts.
", + "MemberDetail$GraphArn": "The ARN of the behavior graph that the member account was invited to.
", + "RejectInvitationRequest$GraphArn": "The ARN of the behavior graph to reject the invitation to.
The member account's current member status in the behavior graph must be INVITED
.
A list of behavior graphs that the account is a master for.
" + } + }, + "InternalServerException": { + "base": "The request was valid but failed because of a problem with the service.
", + "refs": { + } + }, + "ListGraphsRequest": { + "base": null, + "refs": { + } + }, + "ListGraphsResponse": { + "base": null, + "refs": { + } + }, + "ListInvitationsRequest": { + "base": null, + "refs": { + } + }, + "ListInvitationsResponse": { + "base": null, + "refs": { + } + }, + "ListMembersRequest": { + "base": null, + "refs": { + } + }, + "ListMembersResponse": { + "base": null, + "refs": { + } + }, + "MemberDetail": { + "base": "Amazon Detective is currently in preview.
Details about a member account that was invited to contribute to a behavior graph.
", + "refs": { + "MemberDetailList$member": null + } + }, + "MemberDetailList": { + "base": null, + "refs": { + "CreateMembersResponse$Members": "The set of member account invitation requests that Detective was able to process. This includes accounts that are being verified, that failed verification, and that passed verification and are being sent an invitation.
", + "GetMembersResponse$MemberDetails": "The member account details that Detective is returning in response to the request.
", + "ListInvitationsResponse$Invitations": "The list of behavior graphs for which the member account has open or accepted invitations.
", + "ListMembersResponse$MemberDetails": "The list of member accounts in the behavior graph.
The results include member accounts that did not pass verification and member accounts that have not yet accepted the invitation to the behavior graph. The results do not include member accounts that were removed from the behavior graph.
" + } + }, + "MemberResultsLimit": { + "base": null, + "refs": { + "ListGraphsRequest$MaxResults": "The maximum number of graphs to return at a time. The total must be less than the overall limit on the number of results to return, which is currently 200.
", + "ListInvitationsRequest$MaxResults": "The maximum number of behavior graph invitations to return in the response. The total must be less than the overall limit on the number of results to return, which is currently 200.
", + "ListMembersRequest$MaxResults": "The maximum number of member accounts to include in the response. The total must be less than the overall limit on the number of results to return, which is currently 200.
" + } + }, + "MemberStatus": { + "base": null, + "refs": { + "MemberDetail$Status": "The current membership status of the member account. The status can have one of the following values:
INVITED
- Indicates that the member was sent an invitation but has not yet responded.
VERIFICATION_IN_PROGRESS
- Indicates that Detective is verifying that the account identifier and email address provided for the member account match. If they do match, then Detective sends the invitation. If the email address and account identifier don't match, then the member cannot be added to the behavior graph.
VERIFICATION_FAILED
- Indicates that the account and email address provided for the member account do not match, and Detective did not send an invitation to the account.
ENABLED
- Indicates that the member account accepted the invitation to contribute to the behavior graph.
Member accounts that declined an invitation or that were removed from the behavior graph are not included.
" + } + }, + "PaginationToken": { + "base": null, + "refs": { + "ListGraphsRequest$NextToken": "For requests to get the next page of results, the pagination token that was returned with the previous set of results. The initial request does not include a pagination token.
", + "ListGraphsResponse$NextToken": "If there are more behavior graphs remaining in the results, then this is the pagination token to use to request the next page of behavior graphs.
", + "ListInvitationsRequest$NextToken": "For requests to retrieve the next page of results, the pagination token that was returned with the previous page of results. The initial request does not include a pagination token.
", + "ListInvitationsResponse$NextToken": "If there are more behavior graphs remaining in the results, then this is the pagination token to use to request the next page of behavior graphs.
", + "ListMembersRequest$NextToken": "For requests to retrieve the next page of member account results, the pagination token that was returned with the previous page of results. The initial request does not include a pagination token.
", + "ListMembersResponse$NextToken": "If there are more member accounts remaining in the results, then this is the pagination token to use to request the next page of member accounts.
" + } + }, + "RejectInvitationRequest": { + "base": null, + "refs": { + } + }, + "ResourceNotFoundException": { + "base": "The request refers to a nonexistent resource.
", + "refs": { + } + }, + "ServiceQuotaExceededException": { + "base": "This request would cause the number of member accounts in the behavior graph to exceed the maximum allowed. A behavior graph cannot have more than 1000 member accounts.
", + "refs": { + } + }, + "Timestamp": { + "base": null, + "refs": { + "Graph$CreatedTime": "The date and time that the behavior graph was created. The value is in milliseconds since the epoch.
", + "MemberDetail$InvitedTime": "The date and time that Detective sent the invitation to the member account. The value is in milliseconds since the epoch.
", + "MemberDetail$UpdatedTime": "The date and time that the member account was last updated. The value is in milliseconds since the epoch.
" + } + }, + "UnprocessedAccount": { + "base": "Amazon Detective is currently in preview.
A member account that was included in a request but for which the request could not be processed.
", + "refs": { + "UnprocessedAccountList$member": null + } + }, + "UnprocessedAccountList": { + "base": null, + "refs": { + "CreateMembersResponse$UnprocessedAccounts": "The list of accounts for which Detective was unable to process the invitation request. For each account, the list provides the reason why the request could not be processed. The list includes accounts that are already member accounts in the behavior graph.
", + "DeleteMembersResponse$UnprocessedAccounts": "The list of member accounts that Detective was not able to delete from the behavior graph. For each member account, provides the reason that the deletion could not be processed.
", + "GetMembersResponse$UnprocessedAccounts": "The requested member accounts for which Detective was unable to return member details.
For each account, provides the reason why the request could not be processed.
" + } + }, + "UnprocessedReason": { + "base": null, + "refs": { + "UnprocessedAccount$Reason": "The reason that the member account request could not be processed.
" + } + }, + "ValidationException": { + "base": "The request parameters are invalid.
", + "refs": { + } + } + } +} diff --git a/models/apis/detective/2018-10-26/examples-1.json b/models/apis/detective/2018-10-26/examples-1.json new file mode 100644 index 00000000000..0ea7e3b0bbe --- /dev/null +++ b/models/apis/detective/2018-10-26/examples-1.json @@ -0,0 +1,5 @@ +{ + "version": "1.0", + "examples": { + } +} diff --git a/models/apis/detective/2018-10-26/paginators-1.json b/models/apis/detective/2018-10-26/paginators-1.json new file mode 100644 index 00000000000..8ead136f5ee --- /dev/null +++ b/models/apis/detective/2018-10-26/paginators-1.json @@ -0,0 +1,19 @@ +{ + "pagination": { + "ListGraphs": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, + "ListInvitations": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, + "ListMembers": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + } + } +} diff --git a/models/apis/devicefarm/2015-06-23/api-2.json b/models/apis/devicefarm/2015-06-23/api-2.json index 3a71a4027f8..f25e1ab28dd 100644 --- a/models/apis/devicefarm/2015-06-23/api-2.json +++ b/models/apis/devicefarm/2015-06-23/api-2.json @@ -88,6 +88,32 @@ {"shape":"ServiceAccountException"} ] }, + "CreateTestGridProject":{ + "name":"CreateTestGridProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateTestGridProjectRequest"}, + "output":{"shape":"CreateTestGridProjectResult"}, + "errors":[ + {"shape":"InternalServiceException"} + ] + }, + "CreateTestGridUrl":{ + "name":"CreateTestGridUrl", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateTestGridUrlRequest"}, + "output":{"shape":"CreateTestGridUrlResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, "CreateUpload":{ "name":"CreateUpload", "http":{ @@ -207,6 +233,21 @@ {"shape":"ServiceAccountException"} ] }, + "DeleteTestGridProject":{ + "name":"DeleteTestGridProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteTestGridProjectRequest"}, + "output":{"shape":"DeleteTestGridProjectResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"CannotDeleteException"}, + {"shape":"InternalServiceException"} + ] + }, "DeleteUpload":{ "name":"DeleteUpload", "http":{ @@ -448,6 +489,34 @@ {"shape":"ServiceAccountException"} ] }, + "GetTestGridProject":{ + "name":"GetTestGridProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetTestGridProjectRequest"}, + "output":{"shape":"GetTestGridProjectResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, + "GetTestGridSession":{ + "name":"GetTestGridSession", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetTestGridSessionRequest"}, + "output":{"shape":"GetTestGridSessionResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, "GetUpload":{ "name":"GetUpload", "http":{ @@ -729,10 +798,66 @@ "input":{"shape":"ListTagsForResourceRequest"}, "output":{"shape":"ListTagsForResourceResponse"}, "errors":[ + {"shape":"ArgumentException"}, {"shape":"NotFoundException"}, {"shape":"TagOperationException"} ] }, + "ListTestGridProjects":{ + "name":"ListTestGridProjects", + "http":{ + "method":"POST", + "requestUri":"/" + }, + 
"input":{"shape":"ListTestGridProjectsRequest"}, + "output":{"shape":"ListTestGridProjectsResult"}, + "errors":[ + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, + "ListTestGridSessionActions":{ + "name":"ListTestGridSessionActions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTestGridSessionActionsRequest"}, + "output":{"shape":"ListTestGridSessionActionsResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, + "ListTestGridSessionArtifacts":{ + "name":"ListTestGridSessionArtifacts", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTestGridSessionArtifactsRequest"}, + "output":{"shape":"ListTestGridSessionArtifactsResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, + "ListTestGridSessions":{ + "name":"ListTestGridSessions", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTestGridSessionsRequest"}, + "output":{"shape":"ListTestGridSessionsResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, "ListTests":{ "name":"ListTests", "http":{ @@ -893,6 +1018,7 @@ "input":{"shape":"TagResourceRequest"}, "output":{"shape":"TagResourceResponse"}, "errors":[ + {"shape":"ArgumentException"}, {"shape":"NotFoundException"}, {"shape":"TagOperationException"}, {"shape":"TooManyTagsException"}, @@ -908,6 +1034,7 @@ "input":{"shape":"UntagResourceRequest"}, "output":{"shape":"UntagResourceResponse"}, "errors":[ + {"shape":"ArgumentException"}, {"shape":"NotFoundException"}, {"shape":"TagOperationException"} ] @@ -987,6 +1114,20 @@ {"shape":"ServiceAccountException"} ] }, + "UpdateTestGridProject":{ + "name":"UpdateTestGridProject", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateTestGridProjectRequest"}, + "output":{"shape":"UpdateTestGridProjectResult"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"ArgumentException"}, + {"shape":"InternalServiceException"} + ] + }, "UpdateUpload":{ "name":"UpdateUpload", "http":{ @@ -1041,7 +1182,8 @@ "AmazonResourceName":{ "type":"string", "max":1011, - "min":32 + "min":32, + "pattern":"^arn:.+" }, "AmazonResourceNames":{ "type":"list", @@ -1130,6 +1272,13 @@ "clock":{"shape":"Double"} } }, + "CannotDeleteException":{ + "type":"structure", + "members":{ + "message":{"shape":"Message"} + }, + "exception":true + }, "ClientId":{ "type":"string", "max":64, @@ -1265,6 +1414,38 @@ "remoteAccessSession":{"shape":"RemoteAccessSession"} } }, + "CreateTestGridProjectRequest":{ + "type":"structure", + "required":["name"], + "members":{ + "name":{"shape":"ResourceName"}, + "description":{"shape":"ResourceDescription"} + } + }, + "CreateTestGridProjectResult":{ + "type":"structure", + "members":{ + "testGridProject":{"shape":"TestGridProject"} + } + }, + "CreateTestGridUrlRequest":{ + "type":"structure", + "required":[ + "projectArn", + "expiresInSeconds" + ], + "members":{ + "projectArn":{"shape":"DeviceFarmArn"}, + "expiresInSeconds":{"shape":"TestGridUrlExpiresInSecondsInput"} + } + }, + "CreateTestGridUrlResult":{ + "type":"structure", + "members":{ + "url":{"shape":"String"}, + "expires":{"shape":"DateTime"} + } + }, "CreateUploadRequest":{ "type":"structure", "required":[ @@ -1390,6 +1571,18 @@ "members":{ } }, + "DeleteTestGridProjectRequest":{ + "type":"structure", + 
"required":["projectArn"], + "members":{ + "projectArn":{"shape":"DeviceFarmArn"} + } + }, + "DeleteTestGridProjectResult":{ + "type":"structure", + "members":{ + } + }, "DeleteUploadRequest":{ "type":"structure", "required":["arn"], @@ -1467,6 +1660,12 @@ "HIGHLY_AVAILABLE" ] }, + "DeviceFarmArn":{ + "type":"string", + "max":1011, + "min":32, + "pattern":"^arn:aws:devicefarm:.+" + }, "DeviceFilter":{ "type":"structure", "members":{ @@ -1821,6 +2020,33 @@ "suite":{"shape":"Suite"} } }, + "GetTestGridProjectRequest":{ + "type":"structure", + "required":["projectArn"], + "members":{ + "projectArn":{"shape":"DeviceFarmArn"} + } + }, + "GetTestGridProjectResult":{ + "type":"structure", + "members":{ + "testGridProject":{"shape":"TestGridProject"} + } + }, + "GetTestGridSessionRequest":{ + "type":"structure", + "members":{ + "projectArn":{"shape":"DeviceFarmArn"}, + "sessionId":{"shape":"ResourceId"}, + "sessionArn":{"shape":"DeviceFarmArn"} + } + }, + "GetTestGridSessionResult":{ + "type":"structure", + "members":{ + "testGridSession":{"shape":"TestGridSession"} + } + }, "GetTestRequest":{ "type":"structure", "required":["arn"], @@ -1938,6 +2164,14 @@ "max":64, "min":0 }, + "InternalServiceException":{ + "type":"structure", + "members":{ + "message":{"shape":"Message"} + }, + "exception":true, + "fault":true + }, "InvalidOperationException":{ "type":"structure", "members":{ @@ -2207,7 +2441,7 @@ "type":"structure", "required":["ResourceARN"], "members":{ - "ResourceARN":{"shape":"AmazonResourceName"} + "ResourceARN":{"shape":"DeviceFarmArn"} } }, "ListTagsForResourceResponse":{ @@ -2216,6 +2450,74 @@ "Tags":{"shape":"TagList"} } }, + "ListTestGridProjectsRequest":{ + "type":"structure", + "members":{ + "maxResult":{"shape":"MaxPageSize"}, + "nextToken":{"shape":"PaginationToken"} + } + }, + "ListTestGridProjectsResult":{ + "type":"structure", + "members":{ + "testGridProjects":{"shape":"TestGridProjects"}, + "nextToken":{"shape":"PaginationToken"} + } + }, + "ListTestGridSessionActionsRequest":{ + "type":"structure", + "required":["sessionArn"], + "members":{ + "sessionArn":{"shape":"DeviceFarmArn"}, + "maxResult":{"shape":"MaxPageSize"}, + "nextToken":{"shape":"PaginationToken"} + } + }, + "ListTestGridSessionActionsResult":{ + "type":"structure", + "members":{ + "actions":{"shape":"TestGridSessionActions"}, + "nextToken":{"shape":"PaginationToken"} + } + }, + "ListTestGridSessionArtifactsRequest":{ + "type":"structure", + "required":["sessionArn"], + "members":{ + "sessionArn":{"shape":"DeviceFarmArn"}, + "type":{"shape":"TestGridSessionArtifactCategory"}, + "maxResult":{"shape":"MaxPageSize"}, + "nextToken":{"shape":"PaginationToken"} + } + }, + "ListTestGridSessionArtifactsResult":{ + "type":"structure", + "members":{ + "artifacts":{"shape":"TestGridSessionArtifacts"}, + "nextToken":{"shape":"PaginationToken"} + } + }, + "ListTestGridSessionsRequest":{ + "type":"structure", + "required":["projectArn"], + "members":{ + "projectArn":{"shape":"DeviceFarmArn"}, + "status":{"shape":"TestGridSessionStatus"}, + "creationTimeAfter":{"shape":"DateTime"}, + "creationTimeBefore":{"shape":"DateTime"}, + "endTimeAfter":{"shape":"DateTime"}, + "endTimeBefore":{"shape":"DateTime"}, + "maxResult":{"shape":"MaxPageSize"}, + "nextToken":{"shape":"PaginationToken"} + } + }, + "ListTestGridSessionsResult":{ + "type":"structure", + "members":{ + "testGridSessions":{"shape":"TestGridSessions"}, + "nextToken":{"shape":"PaginationToken"} + } + }, "ListTestsRequest":{ "type":"structure", "required":["arn"], @@ 
-2288,6 +2590,11 @@ } }, "Long":{"type":"long"}, + "MaxPageSize":{ + "type":"integer", + "max":1000, + "min":1 + }, "MaxSlotMap":{ "type":"map", "key":{"shape":"String"}, @@ -2573,6 +2880,24 @@ "height":{"shape":"Integer"} } }, + "ResourceDescription":{ + "type":"string", + "max":2048, + "min":1, + "pattern":".*\\S.*" + }, + "ResourceId":{ + "type":"string", + "max":128, + "min":1, + "pattern":".*\\S.*" + }, + "ResourceName":{ + "type":"string", + "max":64, + "min":1, + "pattern":".*\\S.*" + }, "Rule":{ "type":"structure", "members":{ @@ -2847,7 +3172,7 @@ "Tags" ], "members":{ - "ResourceARN":{"shape":"AmazonResourceName"}, + "ResourceARN":{"shape":"DeviceFarmArn"}, "Tags":{"shape":"TagList"} } }, @@ -2877,6 +3202,88 @@ "deviceMinutes":{"shape":"DeviceMinutes"} } }, + "TestGridProject":{ + "type":"structure", + "members":{ + "arn":{"shape":"DeviceFarmArn"}, + "name":{"shape":"String"}, + "description":{"shape":"String"}, + "created":{"shape":"DateTime"} + } + }, + "TestGridProjects":{ + "type":"list", + "member":{"shape":"TestGridProject"} + }, + "TestGridSession":{ + "type":"structure", + "members":{ + "arn":{"shape":"DeviceFarmArn"}, + "status":{"shape":"TestGridSessionStatus"}, + "created":{"shape":"DateTime"}, + "ended":{"shape":"DateTime"}, + "billingMinutes":{"shape":"Double"}, + "seleniumProperties":{"shape":"String"} + } + }, + "TestGridSessionAction":{ + "type":"structure", + "members":{ + "action":{"shape":"String"}, + "started":{"shape":"DateTime"}, + "duration":{"shape":"Long"}, + "statusCode":{"shape":"String"}, + "requestMethod":{"shape":"String"} + } + }, + "TestGridSessionActions":{ + "type":"list", + "member":{"shape":"TestGridSessionAction"} + }, + "TestGridSessionArtifact":{ + "type":"structure", + "members":{ + "filename":{"shape":"String"}, + "type":{"shape":"TestGridSessionArtifactType"}, + "url":{"shape":"String"} + } + }, + "TestGridSessionArtifactCategory":{ + "type":"string", + "enum":[ + "VIDEO", + "LOG" + ] + }, + "TestGridSessionArtifactType":{ + "type":"string", + "enum":[ + "UNKNOWN", + "VIDEO", + "SELENIUM_LOG" + ] + }, + "TestGridSessionArtifacts":{ + "type":"list", + "member":{"shape":"TestGridSessionArtifact"} + }, + "TestGridSessionStatus":{ + "type":"string", + "enum":[ + "ACTIVE", + "CLOSED", + "ERRORED" + ] + }, + "TestGridSessions":{ + "type":"list", + "member":{"shape":"TestGridSession"} + }, + "TestGridUrlExpiresInSecondsInput":{ + "type":"integer", + "max":86400, + "min":60 + }, "TestParameters":{ "type":"map", "key":{"shape":"String"}, @@ -2959,7 +3366,7 @@ "TagKeys" ], "members":{ - "ResourceARN":{"shape":"AmazonResourceName"}, + "ResourceARN":{"shape":"DeviceFarmArn"}, "TagKeys":{"shape":"TagKeyList"} } }, @@ -3058,6 +3465,21 @@ "project":{"shape":"Project"} } }, + "UpdateTestGridProjectRequest":{ + "type":"structure", + "required":["projectArn"], + "members":{ + "projectArn":{"shape":"DeviceFarmArn"}, + "name":{"shape":"ResourceName"}, + "description":{"shape":"ResourceDescription"} + } + }, + "UpdateTestGridProjectResult":{ + "type":"structure", + "members":{ + "testGridProject":{"shape":"TestGridProject"} + } + }, "UpdateUploadRequest":{ "type":"structure", "required":["arn"], diff --git a/models/apis/devicefarm/2015-06-23/docs-2.json b/models/apis/devicefarm/2015-06-23/docs-2.json index 15c03d12f72..beccbd6b5c2 100644 --- a/models/apis/devicefarm/2015-06-23/docs-2.json +++ b/models/apis/devicefarm/2015-06-23/docs-2.json @@ -1,36 +1,41 @@ { "version": "2.0", - "service": "AWS Device Farm is a service that enables mobile app developers to 
test Android, iOS, and Fire OS apps on physical phones, tablets, and other devices in the cloud.
", + "service": "Welcome to the AWS Device Farm API documentation, which contains APIs for: Testing on desktop browsers. Device Farm makes it possible for you to test your web applications on desktop browsers using Selenium. The APIs for desktop browser testing contain TestGrid in their names. For more information, see Testing Web Applications on Selenium with Device Farm. Testing on real mobile devices. Device Farm makes it possible for you to test apps on physical phones, tablets, and other devices in the cloud. For more information, see the Device Farm Developer Guide.
", "CreateDevicePool": "Creates a device pool.
", "CreateInstanceProfile": "Creates a profile that can be applied to one or more private fleet device instances.
", "CreateNetworkProfile": "Creates a network profile.
", - "CreateProject": "Creates a new project.
", + "CreateProject": "Creates a project.
", "CreateRemoteAccessSession": "Specifies and starts a remote access session.
", + "CreateTestGridProject": "Creates a Selenium testing project. Projects are used to track TestGridSession instances.
", + "CreateTestGridUrl": "Creates a signed, short-term URL that can be passed to a Selenium RemoteWebDriver
constructor.
Uploads an app or test scripts.
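A sketch of the CreateTestGridUrl flow, assuming the same configured svc client as above; the project ARN is a placeholder, and expiresInSeconds must fall in the 60-86400 range enforced by the TestGridUrlExpiresInSecondsInput shape below:

```go
// Ask Device Farm for a signed, short-term RemoteWebDriver endpoint.
urlResp, err := svc.CreateTestGridUrlRequest(&devicefarm.CreateTestGridUrlInput{
	ProjectArn:       aws.String("arn:aws:devicefarm:us-west-2:111122223333:testgrid-project:example"), // placeholder
	ExpiresInSeconds: aws.Int64(900), // valid range: 60-86400 seconds
}).Send(context.TODO())
if err != nil {
	log.Fatal(err)
}
// Pass urlResp.Url to a Selenium RemoteWebDriver constructor in your test
// code before urlResp.Expires passes.
fmt.Println("driver URL:", *urlResp.Url)
```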
", "CreateVPCEConfiguration": "Creates a configuration record in Device Farm for your Amazon Virtual Private Cloud (VPC) endpoint.
", "DeleteDevicePool": "Deletes a device pool given the pool ARN. Does not allow deletion of curated pools owned by the system.
", "DeleteInstanceProfile": "Deletes a profile that can be applied to one or more private device instances.
", "DeleteNetworkProfile": "Deletes a network profile.
", - "DeleteProject": "Deletes an AWS Device Farm project, given the project ARN.
Note: Deleting this resource does not stop an in-progress run.
", + "DeleteProject": "Deletes an AWS Device Farm project, given the project ARN.
Deleting this resource does not stop an in-progress run.
", "DeleteRemoteAccessSession": "Deletes a completed remote access session and its results.
", - "DeleteRun": "Deletes the run, given the run ARN.
Note: Deleting this resource does not stop an in-progress run.
", + "DeleteRun": "Deletes the run, given the run ARN.
Deleting this resource does not stop an in-progress run.
", + "DeleteTestGridProject": "Deletes a Selenium testing project and all content generated under it.
You cannot undo this operation. You cannot delete a project if it has active sessions.
", "DeleteUpload": "Deletes an upload given the upload ARN.
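Returning to DeleteTestGridProject above, a corresponding delete sketch; whether this operation surfaces the new CannotDeleteException when sessions are still active is an assumption here, so the error code is matched as a plain string:

```go
// awserr is github.com/aws/aws-sdk-go-v2/aws/awserr.
_, err := svc.DeleteTestGridProjectRequest(&devicefarm.DeleteTestGridProjectInput{
	ProjectArn: aws.String(projectArn), // placeholder variable
}).Send(context.TODO())
if err != nil {
	if aerr, ok := err.(awserr.Error); ok && aerr.Code() == "CannotDeleteException" {
		log.Println("project still has active sessions; end them before deleting")
	} else {
		log.Fatal(err)
	}
}
```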
", "DeleteVPCEConfiguration": "Deletes a configuration for your Amazon Virtual Private Cloud (VPC) endpoint.
", - "GetAccountSettings": "Returns the number of unmetered iOS and/or unmetered Android devices that have been purchased by the account.
", + "GetAccountSettings": "Returns the number of unmetered iOS or unmetered Android devices that have been purchased by the account.
", "GetDevice": "Gets information about a unique device type.
", - "GetDeviceInstance": "Returns information about a device instance belonging to a private device fleet.
", + "GetDeviceInstance": "Returns information about a device instance that belongs to a private device fleet.
", "GetDevicePool": "Gets information about a device pool.
", "GetDevicePoolCompatibility": "Gets information about compatibility with a device pool.
", "GetInstanceProfile": "Returns information about the specified instance profile.
", "GetJob": "Gets information about a job.
", "GetNetworkProfile": "Returns information about a network profile.
", - "GetOfferingStatus": "Gets the current status and future status of all offerings purchased by an AWS account. The response indicates how many offerings are currently available and the offerings that will be available in the next period. The API returns a NotEligible
error if the user is not permitted to invoke the operation. Please contact aws-devicefarm-support@amazon.com if you believe that you should be able to invoke this operation.
", + "GetOfferingStatus": "Gets the current status and future status of all offerings purchased by an AWS account. The response indicates how many offerings are currently available and the offerings that will be available in the next period. The API returns a NotEligible error if the user is not permitted to invoke the operation. If you must be able to invoke this operation, contact aws-devicefarm-support@amazon.com.
", "GetProject": "Gets information about a project.
", "GetRemoteAccessSession": "Returns a link to a currently running remote access session.
", "GetRun": "Gets information about a run.
", "GetSuite": "Gets information about a suite.
", "GetTest": "Gets information about a test.
", + "GetTestGridProject": "Retrieves information about a Selenium testing project.
", + "GetTestGridSession": "A session is an instance of a browser created through a RemoteWebDriver
with the URL from CreateTestGridUrlResult$url. You can use the following to look up sessions: the session ARN (GetTestGridSessionRequest$sessionArn), or the project ARN and a session ID (GetTestGridSessionRequest$projectArn and GetTestGridSessionRequest$sessionId).
", "GetUpload": "Gets information about an upload.
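The two GetTestGridSession lookup forms map directly onto the request shape; a sketch with placeholder identifiers:

```go
// Look up a session directly by its ARN...
resp, err := svc.GetTestGridSessionRequest(&devicefarm.GetTestGridSessionInput{
	SessionArn: aws.String(sessionArn),
}).Send(context.TODO())

// ...or by the owning project's ARN plus the session ID.
resp, err = svc.GetTestGridSessionRequest(&devicefarm.GetTestGridSessionInput{
	ProjectArn: aws.String(projectArn),
	SessionId:  aws.String(sessionID),
}).Send(context.TODO())
if err != nil {
	log.Fatal(err)
}
fmt.Println("session status:", resp.TestGridSession.Status)
```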
", "GetVPCEConfiguration": "Returns information about the configuration settings for your Amazon Virtual Private Cloud (VPC) endpoint.
", "InstallToRemoteAccessSession": "Installs an application to the device in a remote access session. For Android applications, the file must be in .apk format. For iOS applications, the file must be in .ipa format.
", @@ -41,34 +46,39 @@ "ListInstanceProfiles": "Returns information about all the instance profiles in an AWS account.
", "ListJobs": "Gets information about jobs for a given test run.
", "ListNetworkProfiles": "Returns the list of available network profiles.
", - "ListOfferingPromotions": "Returns a list of offering promotions. Each offering promotion record contains the ID and description of the promotion. The API returns a NotEligible
error if the caller is not permitted to invoke the operation. Contact aws-devicefarm-support@amazon.com if you believe that you should be able to invoke this operation.
", - "ListOfferingTransactions": "Returns a list of all historical purchases, renewals, and system renewal transactions for an AWS account. The list is paginated and ordered by a descending timestamp (most recent transactions are first). The API returns a NotEligible error if the user is not permitted to invoke the operation. Please contact aws-devicefarm-support@amazon.com if you believe that you should be able to invoke this operation.
", - "ListOfferings": "Returns a list of products or offerings that the user can manage through the API. Each offering record indicates the recurring price per unit and the frequency for that offering. The API returns a NotEligible error if the user is not permitted to invoke the operation. Please contact aws-devicefarm-support@amazon.com if you believe that you should be able to invoke this operation.
", + "ListOfferingPromotions": "Returns a list of offering promotions. Each offering promotion record contains the ID and description of the promotion. The API returns a NotEligible error if the caller is not permitted to invoke the operation. Contact aws-devicefarm-support@amazon.com if you must be able to invoke this operation.
", + "ListOfferingTransactions": "Returns a list of all historical purchases, renewals, and system renewal transactions for an AWS account. The list is paginated and ordered by a descending timestamp (most recent transactions are first). The API returns a NotEligible error if the user is not permitted to invoke the operation. If you must be able to invoke this operation, contact aws-devicefarm-support@amazon.com.
", + "ListOfferings": "Returns a list of products or offerings that the user can manage through the API. Each offering record indicates the recurring price per unit and the frequency for that offering. The API returns a NotEligible error if the user is not permitted to invoke the operation. If you must be able to invoke this operation, contact aws-devicefarm-support@amazon.com.
", "ListProjects": "Gets information about projects.
", "ListRemoteAccessSessions": "Returns a list of all currently running remote access sessions.
", "ListRuns": "Gets information about runs, given an AWS Device Farm project ARN.
", "ListSamples": "Gets information about samples, given an AWS Device Farm job ARN.
", "ListSuites": "Gets information about test suites for a given job.
", "ListTagsForResource": "List the tags for an AWS Device Farm resource.
", + "ListTestGridProjects": "Gets a list of all Selenium testing projects in your account.
", + "ListTestGridSessionActions": "Returns a list of the actions taken in a TestGridSession.
", + "ListTestGridSessionArtifacts": "Retrieves a list of artifacts created during the session.
", + "ListTestGridSessions": "Retrieves a list of sessions for a TestGridProject.
", "ListTests": "Gets information about tests in a given test suite.
", - "ListUniqueProblems": "Gets information about unique problems.
", + "ListUniqueProblems": "Gets information about unique problems, such as exceptions or crashes.
Unique problems are defined as a single instance of an error across a run, job, or suite. For example, if a call in your application consistently raises an exception (OutOfBoundsException in MyActivity.java:386), ListUniqueProblems returns a single entry instead of many individual entries for that exception.
", "ListUploads": "Gets information about uploads, given an AWS Device Farm project ARN.
", "ListVPCEConfigurations": "Returns information about all Amazon Virtual Private Cloud (VPC) endpoint configurations in the AWS account.
", - "PurchaseOffering": "Immediately purchases offerings for an AWS account. Offerings renew with the latest total purchased quantity for an offering, unless the renewal was overridden. The API returns a NotEligible
error if the user is not permitted to invoke the operation. Please contact aws-devicefarm-support@amazon.com if you believe that you should be able to invoke this operation.
", - "RenewOffering": "Explicitly sets the quantity of devices to renew for an offering, starting from the effectiveDate of the next period. The API returns a NotEligible error if the user is not permitted to invoke the operation. Please contact aws-devicefarm-support@amazon.com if you believe that you should be able to invoke this operation.
", + "PurchaseOffering": "Immediately purchases offerings for an AWS account. Offerings renew with the latest total purchased quantity for an offering, unless the renewal was overridden. The API returns a NotEligible error if the user is not permitted to invoke the operation. If you must be able to invoke this operation, contact aws-devicefarm-support@amazon.com.
", + "RenewOffering": "Explicitly sets the quantity of devices to renew for an offering, starting from the effectiveDate of the next period. The API returns a NotEligible error if the user is not permitted to invoke the operation. If you must be able to invoke this operation, contact aws-devicefarm-support@amazon.com.
", "ScheduleRun": "Schedules a run.
", - "StopJob": "Initiates a stop request for the current job. AWS Device Farm will immediately stop the job on the device where tests have not started executing, and you will not be billed for this device. On the device where tests have started executing, Setup Suite and Teardown Suite tests will run to completion before stopping execution on the device. You will be billed for Setup, Teardown, and any tests that were in progress or already completed.
", + "StopJob": "Initiates a stop request for the current job. AWS Device Farm immediately stops the job on the device where tests have not started. You are not billed for this device. On the device where tests have started, setup suite and teardown suite tests run to completion on the device. You are billed for setup, teardown, and any tests that were in progress or already completed.
", "StopRemoteAccessSession": "Ends a specified remote access session.
", - "StopRun": "Initiates a stop request for the current test run. AWS Device Farm will immediately stop the run on devices where tests have not started executing, and you will not be billed for these devices. On devices where tests have started executing, Setup Suite and Teardown Suite tests will run to completion before stopping execution on those devices. You will be billed for Setup, Teardown, and any tests that were in progress or already completed.
", - "TagResource": "Associates the specified tags to a resource with the specified resourceArn
. If existing tags on a resource are not specified in the request parameters, they are not changed. When a resource is deleted, the tags associated with that resource are deleted as well.
Initiates a stop request for the current test run. AWS Device Farm immediately stops the run on devices where tests have not started. You are not billed for these devices. On devices where tests have started executing, setup suite and teardown suite tests run to completion on those devices. You are billed for setup, teardown, and any tests that were in progress or already completed.
", + "TagResource": "Associates the specified tags to a resource with the specified resourceArn
. If existing tags on a resource are not specified in the request parameters, they are not changed. When a resource is deleted, the tags associated with that resource are also deleted.
Deletes the specified tags from a resource.
", - "UpdateDeviceInstance": "Updates information about an existing private device instance.
", + "UpdateDeviceInstance": "Updates information about a private device instance.
", "UpdateDevicePool": "Modifies the name, description, and rules in a device pool given the attributes and the pool ARN. Rule updates are all-or-nothing, meaning they can only be updated as a whole (or not at all).
", "UpdateInstanceProfile": "Updates information about an existing private device instance profile.
", - "UpdateNetworkProfile": "Updates the network profile with specific settings.
", + "UpdateNetworkProfile": "Updates the network profile.
", "UpdateProject": "Modifies the specified project name, given the project ARN and a new name.
", - "UpdateUpload": "Update an uploaded test specification (test spec).
", - "UpdateVPCEConfiguration": "Updates information about an existing Amazon Virtual Private Cloud (VPC) endpoint configuration.
" + "UpdateTestGridProject": "Change details of a project.
", + "UpdateUpload": "Updates an uploaded test spec.
", + "UpdateVPCEConfiguration": "Updates information about an Amazon Virtual Private Cloud (VPC) endpoint configuration.
" }, "shapes": { "AWSAccountNumber": { @@ -78,7 +88,7 @@ } }, "AccountSettings": { - "base": "A container for account-level settings within AWS Device Farm.
", + "base": "A container for account-level settings in AWS Device Farm.
", "refs": { "GetAccountSettingsResult$accountSettings": "The account settings.
" } @@ -86,7 +96,7 @@ "AccountsCleanup": { "base": null, "refs": { - "ExecutionConfiguration$accountsCleanup": "True if account cleanup is enabled at the beginning of the test; otherwise, false.
" + "ExecutionConfiguration$accountsCleanup": "True if account cleanup is enabled at the beginning of the test. Otherwise, false.
" } }, "AmazonResourceName": { @@ -97,30 +107,30 @@ "CreateDevicePoolRequest$projectArn": "The ARN of the project for the device pool.
", "CreateNetworkProfileRequest$projectArn": "The Amazon Resource Name (ARN) of the project for which you want to create a network profile.
", "CreateRemoteAccessSessionRequest$projectArn": "The Amazon Resource Name (ARN) of the project for which you want to create a remote access session.
", - "CreateRemoteAccessSessionRequest$deviceArn": "The Amazon Resource Name (ARN) of the device for which you want to create a remote access session.
", + "CreateRemoteAccessSessionRequest$deviceArn": "The ARN of the device for which you want to create a remote access session.
", "CreateRemoteAccessSessionRequest$instanceArn": "The Amazon Resource Name (ARN) of the device instance for which you want to create a remote access session.
", "CreateRemoteAccessSessionRequest$remoteRecordAppArn": "The Amazon Resource Name (ARN) for the app to be recorded in the remote access session.
", "CreateUploadRequest$projectArn": "The ARN of the project for the upload.
", - "DeleteDevicePoolRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm device pool you wish to delete.
", + "DeleteDevicePoolRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm device pool to delete.
", "DeleteInstanceProfileRequest$arn": "The Amazon Resource Name (ARN) of the instance profile you are requesting to delete.
", - "DeleteNetworkProfileRequest$arn": "The Amazon Resource Name (ARN) of the network profile you want to delete.
", - "DeleteProjectRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm project you wish to delete.
", + "DeleteNetworkProfileRequest$arn": "The ARN of the network profile to delete.
", + "DeleteProjectRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm project to delete.
", "DeleteRemoteAccessSessionRequest$arn": "The Amazon Resource Name (ARN) of the session for which you want to delete remote access.
", - "DeleteRunRequest$arn": "The Amazon Resource Name (ARN) for the run you wish to delete.
", - "DeleteUploadRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm upload you wish to delete.
", + "DeleteRunRequest$arn": "The Amazon Resource Name (ARN) for the run to delete.
", + "DeleteUploadRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm upload to delete.
", "DeleteVPCEConfigurationRequest$arn": "The Amazon Resource Name (ARN) of the VPC endpoint configuration you want to delete.
", "Device$arn": "The device's ARN.
", "DeviceInstance$arn": "The Amazon Resource Name (ARN) of the device instance.
", - "DeviceInstance$deviceArn": "The Amazon Resource Name (ARN) of the device.
", + "DeviceInstance$deviceArn": "The ARN of the device.
", "DevicePool$arn": "The device pool's ARN.
", "GetDeviceInstanceRequest$arn": "The Amazon Resource Name (ARN) of the instance you're requesting information about.
", "GetDevicePoolCompatibilityRequest$devicePoolArn": "The device pool's ARN.
", "GetDevicePoolCompatibilityRequest$appArn": "The ARN of the app that is associated with the specified device pool.
", "GetDevicePoolRequest$arn": "The device pool's ARN.
", "GetDeviceRequest$arn": "The device type's ARN.
", - "GetInstanceProfileRequest$arn": "The Amazon Resource Name (ARN) of your instance profile.
", + "GetInstanceProfileRequest$arn": "The Amazon Resource Name (ARN) of an instance profile.
", "GetJobRequest$arn": "The job's ARN.
", - "GetNetworkProfileRequest$arn": "The Amazon Resource Name (ARN) of the network profile you want to return information about.
", + "GetNetworkProfileRequest$arn": "The ARN of the network profile to return information about.
", "GetProjectRequest$arn": "The project's ARN.
", "GetRemoteAccessSessionRequest$arn": "The Amazon Resource Name (ARN) of the remote access session about which you want to get session information.
", "GetRunRequest$arn": "The run's ARN.
", @@ -129,11 +139,11 @@ "GetUploadRequest$arn": "The upload's ARN.
", "GetVPCEConfigurationRequest$arn": "The Amazon Resource Name (ARN) of the VPC endpoint configuration you want to describe.
", "InstallToRemoteAccessSessionRequest$remoteAccessSessionArn": "The Amazon Resource Name (ARN) of the remote access session about which you are requesting information.
", - "InstallToRemoteAccessSessionRequest$appArn": "The Amazon Resource Name (ARN) of the app about which you are requesting information.
", + "InstallToRemoteAccessSessionRequest$appArn": "The ARN of the app about which you are requesting information.
", "InstanceProfile$arn": "The Amazon Resource Name (ARN) of the instance profile.
", "Job$arn": "The job's ARN.
", - "Job$instanceArn": "The Amazon Resource Name (ARN) of the instance.
", - "ListArtifactsRequest$arn": "The Run, Job, Suite, or Test ARN.
", + "Job$instanceArn": "The ARN of the instance.
", + "ListArtifactsRequest$arn": "The run, job, suite, or test ARN.
", "ListDevicePoolsRequest$arn": "The project ARN.
", "ListDevicesRequest$arn": "The Amazon Resource Name (ARN) of the project.
", "ListJobsRequest$arn": "The run's Amazon Resource Name (ARN).
", @@ -143,7 +153,6 @@ "ListRunsRequest$arn": "The Amazon Resource Name (ARN) of the project for which you want to list runs.
", "ListSamplesRequest$arn": "The Amazon Resource Name (ARN) of the job used to list samples.
", "ListSuitesRequest$arn": "The job's Amazon Resource Name (ARN).
", - "ListTagsForResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) of the resource(s) for which to list tags. You can associate tags with the following Device Farm resources: PROJECT
, RUN, NETWORK_PROFILE, INSTANCE_PROFILE, DEVICE_INSTANCE, SESSION, DEVICE_POOL, DEVICE, and VPCE_CONFIGURATION.
", "ListTestsRequest$arn": "The test suite's Amazon Resource Name (ARN).
", "ListUniqueProblemsRequest$arn": "The unique problems' ARNs.
", "ListUploadsRequest$arn": "The Amazon Resource Name (ARN) of the project for which you want to list uploads.
", @@ -151,36 +160,34 @@ "ProblemDetail$arn": "The problem detail's ARN.
", "Project$arn": "The project's ARN.
", "RemoteAccessSession$arn": "The Amazon Resource Name (ARN) of the remote access session.
", - "RemoteAccessSession$instanceArn": "The Amazon Resource Name (ARN) of the instance.
", - "RemoteAccessSession$remoteRecordAppArn": "The Amazon Resource Name (ARN) for the app to be recorded in the remote access session.
", + "RemoteAccessSession$instanceArn": "The ARN of the instance.
", + "RemoteAccessSession$remoteRecordAppArn": "The ARN for the app to be recorded in the remote access session.
", "Run$arn": "The run's ARN.
", "Run$appUpload": "An app to upload or that has been uploaded.
", "Run$devicePoolArn": "The ARN of the device pool for the run.
", "Run$testSpecArn": "The ARN of the YAML-formatted test specification for the run.
", "Sample$arn": "The sample's ARN.
", - "ScheduleRunConfiguration$extraDataPackageArn": "The ARN of the extra data for the run. The extra data is a .zip file that AWS Device Farm will extract to external data for Android or the app's sandbox for iOS.
", + "ScheduleRunConfiguration$extraDataPackageArn": "The ARN of the extra data for the run. The extra data is a .zip file that AWS Device Farm extracts to external data for Android or the app's sandbox for iOS.
", "ScheduleRunConfiguration$networkProfileArn": "Reserved for internal use.
", "ScheduleRunRequest$projectArn": "The ARN of the project for the run to be scheduled.
", - "ScheduleRunRequest$appArn": "The ARN of the app to schedule a run.
", + "ScheduleRunRequest$appArn": "The ARN of an application package to run tests against, created with CreateUpload. See ListUploads.
", "ScheduleRunRequest$devicePoolArn": "The ARN of the device pool for the run to be scheduled.
", - "ScheduleRunTest$testPackageArn": "The ARN of the uploaded test that will be run.
", + "ScheduleRunTest$testPackageArn": "The ARN of the uploaded test to be run.
", "ScheduleRunTest$testSpecArn": "The ARN of the YAML-formatted test specification.
", - "StopJobRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm job you wish to stop.
", - "StopRemoteAccessSessionRequest$arn": "The Amazon Resource Name (ARN) of the remote access session you wish to stop.
", - "StopRunRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm run you wish to stop.
", + "StopJobRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm job to stop.
", + "StopRemoteAccessSessionRequest$arn": "The Amazon Resource Name (ARN) of the remote access session to stop.
", + "StopRunRequest$arn": "Represents the Amazon Resource Name (ARN) of the Device Farm run to stop.
", "Suite$arn": "The suite's ARN.
", "TagOperationException$resourceName": null, "TagPolicyException$resourceName": null, - "TagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) of the resource(s) to which to add tags. You can associate tags with the following Device Farm resources: PROJECT
, RUN, NETWORK_PROFILE, INSTANCE_PROFILE, DEVICE_INSTANCE, SESSION, DEVICE_POOL, DEVICE, and VPCE_CONFIGURATION.
", "Test$arn": "The test's ARN.
", "TooManyTagsException$resourceName": null, - "UntagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) of the resource(s) from which to delete tags. You can associate tags with the following Device Farm resources: PROJECT
, RUN, NETWORK_PROFILE, INSTANCE_PROFILE, DEVICE_INSTANCE, SESSION, DEVICE_POOL, DEVICE, and VPCE_CONFIGURATION.
", "UpdateDeviceInstanceRequest$arn": "The Amazon Resource Name (ARN) of the device instance.
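Note that in this release ListTagsForResourceRequest$ResourceARN, TagResourceRequest$ResourceARN, and UntagResourceRequest$ResourceARN move from the generic AmazonResourceName shape to the stricter DeviceFarmArn shape, so only arn:aws:devicefarm:... values validate. A tagging sketch under that constraint (ARN and tag values are placeholders):

```go
_, err := svc.TagResourceRequest(&devicefarm.TagResourceInput{
	ResourceARN: aws.String(projectArn), // must now match ^arn:aws:devicefarm:.+
	Tags: []devicefarm.Tag{
		{Key: aws.String("team"), Value: aws.String("mobile-qa")}, // illustrative
	},
}).Send(context.TODO())
if err != nil {
	log.Fatal(err)
}
```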
", - "UpdateDeviceInstanceRequest$profileArn": "The Amazon Resource Name (ARN) of the profile that you want to associate with the device instance.
", - "UpdateDevicePoolRequest$arn": "The Amazon Resource Name (ARN) of the Device Farm device pool you wish to update.
", + "UpdateDeviceInstanceRequest$profileArn": "The ARN of the profile that you want to associate with the device instance.
", + "UpdateDevicePoolRequest$arn": "The Amazon Resource Name (ARN) of the Device Farm device pool to update.
", "UpdateInstanceProfileRequest$arn": "The Amazon Resource Name (ARN) of the instance profile.
", "UpdateNetworkProfileRequest$arn": "The Amazon Resource Name (ARN) of the project for which you want to update network profile settings.
", - "UpdateProjectRequest$arn": "The Amazon Resource Name (ARN) of the project whose name you wish to update.
", + "UpdateProjectRequest$arn": "The Amazon Resource Name (ARN) of the project whose name to update.
", "UpdateUploadRequest$arn": "The Amazon Resource Name (ARN) of the uploaded test spec.
", "UpdateVPCEConfigurationRequest$arn": "The Amazon Resource Name (ARN) of the VPC endpoint configuration you want to update.
", "Upload$arn": "The upload's ARN.
", @@ -190,21 +197,21 @@ "AmazonResourceNames": { "base": null, "refs": { - "CreateRemoteAccessSessionConfiguration$vpceConfigurationArns": "An array of Amazon Resource Names (ARNs) included in the VPC endpoint configuration.
", - "ScheduleRunConfiguration$vpceConfigurationArns": "An array of Amazon Resource Names (ARNs) for your VPC endpoint configurations.
", - "ScheduleRunConfiguration$auxiliaryApps": "A list of Upload ARNs for app packages that will be installed alongside your app.
" + "CreateRemoteAccessSessionConfiguration$vpceConfigurationArns": "An array of ARNs included in the VPC endpoint configuration.
", + "ScheduleRunConfiguration$vpceConfigurationArns": "An array of ARNs for your VPC endpoint configurations.
", + "ScheduleRunConfiguration$auxiliaryApps": "A list of upload ARNs for app packages to be installed with your app.
" } }, "AndroidPaths": { "base": null, "refs": { - "CustomerArtifactPaths$androidPaths": "Comma-separated list of paths on the Android device where the artifacts generated by the customer's tests will be pulled from.
" + "CustomerArtifactPaths$androidPaths": "Comma-separated list of paths on the Android device where the artifacts generated by the customer's tests are pulled from.
" } }, "AppPackagesCleanup": { "base": null, "refs": { - "ExecutionConfiguration$appPackagesCleanup": "True if app package cleanup is enabled at the beginning of the test; otherwise, false.
" + "ExecutionConfiguration$appPackagesCleanup": "True if app package cleanup is enabled at the beginning of the test. Otherwise, false.
" } }, "ArgumentException": { @@ -221,13 +228,13 @@ "ArtifactCategory": { "base": null, "refs": { - "ListArtifactsRequest$type": "The artifacts' type.
Allowed values include: FILE: The artifacts are files. LOG: The artifacts are logs. SCREENSHOT: The artifacts are screenshots.
", + "ListArtifactsRequest$type": "The artifacts' type. Allowed values include: FILE, LOG, SCREENSHOT.
" } }, "ArtifactType": { "base": null, "refs": { - "Artifact$type": "The artifact's type. Allowed values include the following: UNKNOWN: An unknown type. SCREENSHOT: The screenshot type. DEVICE_LOG: The device log type. MESSAGE_LOG: The message log type. VIDEO_LOG: The video log type. RESULT_LOG: The result log type. SERVICE_LOG: The service log type. WEBKIT_LOG: The web kit log type. INSTRUMENTATION_OUTPUT: The instrumentation type. EXERCISER_MONKEY_OUTPUT: For Android, the artifact (log) generated by an Android fuzz test. CALABASH_JSON_OUTPUT: The Calabash JSON output type. CALABASH_PRETTY_OUTPUT: The Calabash pretty output type. CALABASH_STANDARD_OUTPUT: The Calabash standard output type. CALABASH_JAVA_XML_OUTPUT: The Calabash Java XML output type. AUTOMATION_OUTPUT: The automation output type. APPIUM_SERVER_OUTPUT: The Appium server output type. APPIUM_JAVA_OUTPUT: The Appium Java output type. APPIUM_JAVA_XML_OUTPUT: The Appium Java XML output type. APPIUM_PYTHON_OUTPUT: The Appium Python output type. APPIUM_PYTHON_XML_OUTPUT: The Appium Python XML output type. EXPLORER_EVENT_LOG: The Explorer event log output type. EXPLORER_SUMMARY_LOG: The Explorer summary log output type. APPLICATION_CRASH_REPORT: The application crash report output type. XCTEST_LOG: The Xcode test output type. VIDEO: The Video output type. CUSTOMER_ARTIFACT: The Customer Artifact output type. CUSTOMER_ARTIFACT_LOG: The Customer Artifact Log output type. TESTSPEC_OUTPUT: The Test Spec Output type.
", + "Artifact$type": "The artifact's type. Allowed values include the following: UNKNOWN, SCREENSHOT, DEVICE_LOG, MESSAGE_LOG, VIDEO_LOG, RESULT_LOG, SERVICE_LOG, WEBKIT_LOG, INSTRUMENTATION_OUTPUT, EXERCISER_MONKEY_OUTPUT: the artifact (log) generated by an Android fuzz test, CALABASH_JSON_OUTPUT, CALABASH_PRETTY_OUTPUT, CALABASH_STANDARD_OUTPUT, CALABASH_JAVA_XML_OUTPUT, AUTOMATION_OUTPUT, APPIUM_SERVER_OUTPUT, APPIUM_JAVA_OUTPUT, APPIUM_JAVA_XML_OUTPUT, APPIUM_PYTHON_OUTPUT, APPIUM_PYTHON_XML_OUTPUT, EXPLORER_EVENT_LOG, EXPLORER_SUMMARY_LOG, APPLICATION_CRASH_REPORT, XCTEST_LOG, VIDEO, CUSTOMER_ARTIFACT, CUSTOMER_ARTIFACT_LOG, TESTSPEC_OUTPUT.
" } }, "BillingMethod": { "base": null, "refs": { "CreateRemoteAccessSessionRequest$billingMethod": "The billing method for the remote access session.
", - "RemoteAccessSession$billingMethod": "The billing method of the remote access session. Possible values include METERED
or UNMETERED. For more information about metered devices, see AWS Device Farm terminology.\"
", - "Run$billingMethod": "Specifies the billing method for a test run: metered or unmetered. If the parameter is not specified, the default value is metered.
", - "ScheduleRunConfiguration$billingMethod": "Specifies the billing method for a test run: metered or unmetered. If the parameter is not specified, the default value is metered.
", + "RemoteAccessSession$billingMethod": "The billing method of the remote access session. Possible values include METERED or UNMETERED. For more information about metered devices, see AWS Device Farm terminology.
", + "Run$billingMethod": "Specifies the billing method for a test run: metered or unmetered. If the parameter is not specified, the default value is metered. If you have unmetered device slots, you must set this to unmetered to use them. Otherwise, the run is counted toward metered device minutes.
", + "ScheduleRunConfiguration$billingMethod": "Specifies the billing method for a test run: metered or unmetered. If the parameter is not specified, the default value is metered. If you have purchased unmetered device slots, you must set this parameter to unmetered to make use of them. Otherwise, your run counts against your metered time.
" } }, "Boolean": { "base": null, "refs": { - "CreateInstanceProfileRequest$packageCleanup": "When set to true, Device Farm will remove app packages after a test run. The default value is false for private devices.
", - "CreateInstanceProfileRequest$rebootAfterUse": "When set to true, Device Farm will reboot the instance after a test run. The default value is true.
", - "CreateRemoteAccessSessionRequest$remoteDebugEnabled": "Set to true if you want to access devices remotely for debugging in your remote access session. Remote debugging is no longer supported.
", + "CreateInstanceProfileRequest$packageCleanup": "When set to true
, Device Farm removes app packages after a test run. The default value is false for private devices.
", + "CreateInstanceProfileRequest$rebootAfterUse": "When set to true, Device Farm reboots the instance after a test run. The default value is true.
", + "CreateRemoteAccessSessionRequest$remoteDebugEnabled": "Set to true if you want to access devices remotely for debugging in your remote access session. Remote debugging is no longer supported.
", "CreateRemoteAccessSessionRequest$remoteRecordEnabled": "Set to true
to enable remote recording for the remote access session.
", - "CreateRemoteAccessSessionRequest$skipAppResign": "When set to true, for private devices, Device Farm will not sign your app again. For public devices, Device Farm always signs your apps again and this parameter has no effect. For more information about how Device Farm re-signs your app(s), see Do you modify my app? in the AWS Device Farm FAQs.
", + "CreateRemoteAccessSessionRequest$skipAppResign": "When set to true
, for private devices, Device Farm does not sign your app again. For public devices, Device Farm always signs your apps again.
For more information on how Device Farm modifies your uploads during tests, see Do you modify my app?
", "Device$remoteAccessEnabled": "Specifies whether remote access has been enabled for the specified device.
", - "Device$remoteDebugEnabled": "This flag is set to true
if remote debugging is enabled for the device.
Remote debugging is no longer supported.
", + "Device$remoteDebugEnabled": "This flag is set to true
if remote debugging is enabled for the device.
Remote debugging is no longer supported.
", "DevicePoolCompatibilityResult$compatible": "Whether the result was compatible with the device pool.
", - "InstanceProfile$packageCleanup": "When set to true
, Device Farm will remove app packages after a test run. The default value is false
for private devices.
When set to true
, Device Farm will reboot the instance after a test run. The default value is true
.
True if Wi-Fi is enabled at the beginning of the test; otherwise, false.
", - "Radios$bluetooth": "True if Bluetooth is enabled at the beginning of the test; otherwise, false.
", - "Radios$nfc": "True if NFC is enabled at the beginning of the test; otherwise, false.
", - "Radios$gps": "True if GPS is enabled at the beginning of the test; otherwise, false.
", - "RemoteAccessSession$remoteDebugEnabled": "This flag is set to true
if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
", + "InstanceProfile$packageCleanup": "When set to true
, Device Farm removes app packages after a test run. The default value is false
for private devices.
When set to true
, Device Farm reboots the instance after a test run. The default value is true
.
True if Wi-Fi is enabled at the beginning of the test. Otherwise, false.
", + "Radios$bluetooth": "True if Bluetooth is enabled at the beginning of the test. Otherwise, false.
", + "Radios$nfc": "True if NFC is enabled at the beginning of the test. Otherwise, false.
", + "Radios$gps": "True if GPS is enabled at the beginning of the test. Otherwise, false.
", + "RemoteAccessSession$remoteDebugEnabled": "This flag is set to true
if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
", "RemoteAccessSession$remoteRecordEnabled": "This flag is set to true
if remote recording is enabled for the remote access session.
", - "UpdateDevicePoolRequest$clearMaxDevices": "Sets whether the maxDevices parameter applies to your device pool. If you set this parameter to true, the maxDevices parameter does not apply, and Device Farm does not limit the number of devices that it adds to your device pool. In this case, Device Farm adds all available devices that meet the criteria that are specified for the rules parameter. If you use this parameter in your request, you cannot use the maxDevices parameter in the same request.
", + "UpdateDevicePoolRequest$clearMaxDevices": "Sets whether the maxDevices parameter applies to your device pool. If you set this parameter to true, the maxDevices parameter does not apply, and Device Farm does not limit the number of devices that it adds to your device pool. In this case, Device Farm adds all available devices that meet the criteria specified in the rules parameter. If you use this parameter in your request, you cannot use the maxDevices parameter in the same request.
", "UpdateInstanceProfileRequest$packageCleanup": "The updated choice for whether you want to specify package cleanup. The default value is false for private devices.
", "UpdateInstanceProfileRequest$rebootAfterUse": "The updated choice for whether you want to reboot the device after use. The default value is true.
", - "UpdateUploadRequest$editContent": "Set to true if the YAML file has changed and needs to be updated; otherwise, set to false.
" + "UpdateUploadRequest$editContent": "Set to true if the YAML file has changed and must be updated. Otherwise, set to false.
" } }, "CPU": { - "base": "Represents the amount of CPU that an app is using on a physical device.
Note that this does not represent system-wide CPU usage.
", + "base": "Represents the amount of CPU that an app is using on a physical device. Does not represent system-wide CPU usage.
", "refs": { "Device$cpu": "Information about the device's CPU.
" } }, + "CannotDeleteException": { + "base": "The requested object could not be deleted.
", + "refs": { + } + }, "ClientId": { "base": null, "refs": { - "CreateRemoteAccessSessionRequest$clientId": "Unique identifier for the client. If you want access to multiple devices on the same client, you should pass the same clientId
value in each call to CreateRemoteAccessSession
. This is required only if remoteDebugEnabled
is set to true
.
Remote debugging is no longer supported.
", - "RemoteAccessSession$clientId": "Unique identifier of your client for the remote access session. Only returned if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
" + "CreateRemoteAccessSessionRequest$clientId": "Unique identifier for the client. If you want access to multiple devices on the same client, you should pass the same clientId
value in each call to CreateRemoteAccessSession
. This identifier is required only if remoteDebugEnabled
is set to true
.
Remote debugging is no longer supported.
", + "RemoteAccessSession$clientId": "Unique identifier of your client for the remote access session. Only returned if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
" } }, "ContentType": { "base": null, "refs": { - "CreateUploadRequest$contentType": "The upload's content type (for example, \"application/octet-stream\").
", - "UpdateUploadRequest$contentType": "The upload's content type (for example, \"application/x-yaml\").
", - "Upload$contentType": "The upload's content type (for example, \"application/octet-stream\").
" + "CreateUploadRequest$contentType": "The upload's content type (for example, application/octet-stream
).
", + "UpdateUploadRequest$contentType": "The upload's content type (for example, application/x-yaml).
", + "Upload$contentType": "The upload's content type (for example, application/octet-stream).
" } }, "CreateUploadRequest": { "base": "Represents a request to the create upload operation.
", "refs": { @@ -379,11 +411,11 @@ "CurrencyCode": { "base": null, "refs": { - "MonetaryAmount$currencyCode": "The currency code of a monetary amount. For example, USD
means \"U.S. dollars.\"
", + "MonetaryAmount$currencyCode": "The currency code of a monetary amount. For example, USD means U.S. dollars.
A JSON object specifying the paths where the artifacts generated by the customer's tests, on the device or in the test environment, will be pulled from.
Specify deviceHostPaths and optionally specify either iosPaths or androidPaths. For web app tests, you can specify both iosPaths and androidPaths.
", + "base": "A JSON object that specifies the paths where the artifacts generated by the customer's tests, on the device or in the test environment, are pulled from. Specify deviceHostPaths and optionally specify either iosPaths or androidPaths. For web app tests, you can specify both iosPaths and androidPaths.
", "refs": { "Run$customerArtifactPaths": "Output CustomerArtifactPaths object for the test run.
", "ScheduleRunConfiguration$customerArtifactPaths": "Input CustomerArtifactPaths object for the scheduled run configuration.
" } }, "DateTime": { "base": null, "refs": { + "CreateTestGridUrlResult$expires": "The number of seconds the URL from CreateTestGridUrlResult$url stays active.
", "Job$created": "When the job was created.
", "Job$started": "The job's start time.
", "Job$stopped": "The job's stop time.
", + "ListTestGridSessionsRequest$creationTimeAfter": "Return only sessions created after this time.
", + "ListTestGridSessionsRequest$creationTimeBefore": "Return only sessions created before this time.
", + "ListTestGridSessionsRequest$endTimeAfter": "Return only sessions that ended after this time.
", + "ListTestGridSessionsRequest$endTimeBefore": "Return only sessions that ended before this time.
", "OfferingStatus$effectiveOn": "The date on which the offering is effective.
", "OfferingTransaction$createdOn": "The date on which an offering transaction was created.
", "Project$created": "When the project was created.
", @@ -410,6 +447,10 @@ "Test$created": "When the test was created.
", "Test$started": "The test's start time.
", "Test$stopped": "The test's stop time.
", + "TestGridProject$created": "When the project was created.
", + "TestGridSession$created": "The time that the session was started.
", + "TestGridSession$ended": "The time the session ended.
", + "TestGridSessionAction$started": "The time that the session invoked the action.
", "Upload$created": "When the upload was created.
" } }, @@ -473,6 +514,16 @@ "refs": { } }, + "DeleteTestGridProjectRequest": { + "base": null, + "refs": { + } + }, + "DeleteTestGridProjectResult": { + "base": null, + "refs": { + } + }, "DeleteUploadRequest": { "base": "Represents a request to the delete upload operation.
", "refs": { @@ -496,9 +547,9 @@ "Device": { "base": "Represents a device type that an app is tested against.
", "refs": { - "DevicePoolCompatibilityResult$device": "The device (phone or tablet) that you wish to return information about.
", + "DevicePoolCompatibilityResult$device": "The device (phone or tablet) to return information about.
", "Devices$member": null, - "GetDeviceResult$device": "An object containing information about the requested device.
", + "GetDeviceResult$device": "An object that contains information about the requested device.
", "Job$device": "The device (phone or tablet).
", "Problem$device": "Information about the associated device.
", "RemoteAccessSession$device": "The device (phone or tablet) used in the remote access session.
" @@ -507,18 +558,37 @@ "DeviceAttribute": { "base": null, "refs": { - "IncompatibilityMessage$type": "The type of incompatibility.
Allowed values include: ARN: The ARN. FORM_FACTOR: The form factor (for example, phone or tablet). MANUFACTURER: The manufacturer. PLATFORM: The platform (for example, Android or iOS). REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. APPIUM_VERSION: The Appium version for the test.
", - "Rule$attribute": "The rule's stringified attribute. For example, specify the value as \"\\\"abc\\\"\". The supported operators for each attribute are provided in the following list. APPIUM_VERSION: The Appium version for the test. Supported operators: CONTAINS. ARN: The Amazon Resource Name (ARN) of the device. For example, \"arn:aws:devicefarm:us-west-2::device:12345Example\". Supported operators: EQUALS, IN, NOT_IN. AVAILABILITY: The current availability of the device. Valid values are \"AVAILABLE\", \"HIGHLY_AVAILABLE\", \"BUSY\", or \"TEMPORARY_NOT_AVAILABLE\". Supported operators: EQUALS. FLEET_TYPE: The fleet type. Valid values are \"PUBLIC\" or \"PRIVATE\". Supported operators: EQUALS. FORM_FACTOR: The device form factor. Valid values are \"PHONE\" or \"TABLET\". Supported operators: EQUALS, IN, NOT_IN. INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. Supported operators: IN, NOT_IN. INSTANCE_LABELS: The label of the device instance. Supported operators: CONTAINS. MANUFACTURER: The device manufacturer. For example, \"Apple\". Supported operators: EQUALS, IN, NOT_IN. MODEL: The device model, such as \"Apple iPad Air 2\" or \"Google Pixel\". Supported operators: CONTAINS, EQUALS, IN, NOT_IN. OS_VERSION: The operating system version. For example, \"10.3.2\". Supported operators: EQUALS, GREATER_THAN, GREATER_THAN_OR_EQUALS, IN, LESS_THAN, LESS_THAN_OR_EQUALS, NOT_IN. PLATFORM: The device platform. Valid values are \"ANDROID\" or \"IOS\". Supported operators: EQUALS, IN, NOT_IN. REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are \"TRUE\" or \"FALSE\". Supported operators: EQUALS. REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are \"TRUE\" or \"FALSE\". Supported operators: EQUALS. This filter will be ignored, as remote debugging is no longer supported.
", + "IncompatibilityMessage$type": "The type of incompatibility. Allowed values include: ARN, FORM_FACTOR (for example, phone or tablet), MANUFACTURER, PLATFORM (for example, Android or iOS), REMOTE_ACCESS_ENABLED, APPIUM_VERSION.
", + "Rule$attribute": "The rule's stringified attribute. For example, specify the value as \"\\\"abc\\\"\". The supported operators for each attribute are provided in the following list. APPIUM_VERSION: The Appium version for the test. Supported operators: CONTAINS. ARN: The Amazon Resource Name (ARN) of the device (for example, arn:aws:devicefarm:us-west-2::device:12345Example). Supported operators: EQUALS, IN, NOT_IN. AVAILABILITY: The current availability of the device. Valid values are AVAILABLE, HIGHLY_AVAILABLE, BUSY, or TEMPORARY_NOT_AVAILABLE. Supported operators: EQUALS. FLEET_TYPE: The fleet type. Valid values are PUBLIC or PRIVATE. Supported operators: EQUALS. FORM_FACTOR: The device form factor. Valid values are PHONE or TABLET. Supported operators: EQUALS, IN, NOT_IN. INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. Supported operators: IN, NOT_IN. INSTANCE_LABELS: The label of the device instance. Supported operators: CONTAINS. MANUFACTURER: The device manufacturer (for example, Apple). Supported operators: EQUALS, IN, NOT_IN. MODEL: The device model, such as Apple iPad Air 2 or Google Pixel. Supported operators: CONTAINS, EQUALS, IN, NOT_IN. OS_VERSION: The operating system version (for example, 10.3.2). Supported operators: EQUALS, GREATER_THAN, GREATER_THAN_OR_EQUALS, IN, LESS_THAN, LESS_THAN_OR_EQUALS, NOT_IN. PLATFORM: The device platform. Valid values are ANDROID or IOS. Supported operators: EQUALS, IN, NOT_IN. REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are TRUE or FALSE. Supported operators: EQUALS. REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are TRUE or FALSE. Supported operators: EQUALS. Because remote debugging is no longer supported, this filter is ignored.
" } }, "DeviceAvailability": { "base": null, "refs": { - "Device$availability": "Reflects how likely a device will be available for a test run. It is currently available in the ListDevices and GetDevice API methods.
" + "Device$availability": "Indicates how likely a device is available for a test run. Currently available in the ListDevices and GetDevice API methods.
" + } + }, + "DeviceFarmArn": { + "base": null, + "refs": { + "CreateTestGridUrlRequest$projectArn": "ARN (from CreateTestGridProject or ListTestGridProjects) to associate with the short-term URL.
", + "DeleteTestGridProjectRequest$projectArn": "The ARN of the project to delete, from CreateTestGridProject or ListTestGridProjects.
", + "GetTestGridProjectRequest$projectArn": "The ARN of the Selenium testing project, from either CreateTestGridProject or ListTestGridProjects.
", + "GetTestGridSessionRequest$projectArn": "The ARN for the project that this session belongs to. See CreateTestGridProject and ListTestGridProjects.
", + "GetTestGridSessionRequest$sessionArn": "An ARN that uniquely identifies a TestGridSession.
", + "ListTagsForResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) of the resource or resources for which to list tags. You can associate tags with the following Device Farm resources: PROJECT
, RUN, NETWORK_PROFILE, INSTANCE_PROFILE, DEVICE_INSTANCE, SESSION, DEVICE_POOL, DEVICE, and VPCE_CONFIGURATION.
", + "ListTestGridSessionActionsRequest$sessionArn": "The ARN of the session to retrieve.
", + "ListTestGridSessionArtifactsRequest$sessionArn": "The ARN of a TestGridSession.
", + "ListTestGridSessionsRequest$projectArn": "ARN of a TestGridProject.
", + "TagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) of the resource or resources to which to add tags. You can associate tags with the following Device Farm resources: PROJECT
, RUN, NETWORK_PROFILE, INSTANCE_PROFILE, DEVICE_INSTANCE, SESSION, DEVICE_POOL, DEVICE, and VPCE_CONFIGURATION.
", + "TestGridProject$arn": "The ARN for the project.
", + "TestGridSession$arn": "The ARN of the session.
", + "UntagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) of the resource or resources from which to delete tags. You can associate tags with the following Device Farm resources: PROJECT
, RUN, NETWORK_PROFILE, INSTANCE_PROFILE, DEVICE_INSTANCE, SESSION, DEVICE_POOL, DEVICE, and VPCE_CONFIGURATION.
", + "UpdateTestGridProjectRequest$projectArn": "ARN of the project to update.
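Since these members now reject non-Device Farm ARNs, a caller can mirror the DeviceFarmArn constraints (length 32-1011, pattern ^arn:aws:devicefarm:.+) client-side before making a request; a small illustrative helper:

```go
import "regexp"

var deviceFarmARN = regexp.MustCompile(`^arn:aws:devicefarm:.+`)

// isDeviceFarmARN mirrors the DeviceFarmArn shape constraints from the model.
func isDeviceFarmARN(arn string) bool {
	return len(arn) >= 32 && len(arn) <= 1011 && deviceFarmARN.MatchString(arn)
}
```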
" } }, "DeviceFilter": { - "base": "Represents a device filter used to select a set of devices to be included in a test run. This data structure is passed in as the deviceSelectionConfiguration
parameter to ScheduleRun. For an example of the JSON request syntax, see ScheduleRun.
It is also passed in as the filters parameter to ListDevices. For an example of the JSON request syntax, see ListDevices.
", + "base": "Represents a device filter used to select a set of devices to be included in a test run. This data structure is passed in as the deviceSelectionConfiguration parameter to ScheduleRun. For an example of the JSON request syntax, see ScheduleRun. It is also passed in as the filters parameter to ListDevices. For an example of the JSON request syntax, see ListDevices.
The aspect of a device such as platform or model used as the selection criteria in a device filter.
The supported operators for each attribute are provided in the following list. ARN: The Amazon Resource Name (ARN) of the device. For example, \"arn:aws:devicefarm:us-west-2::device:12345Example\". Supported operators: EQUALS, IN, NOT_IN. PLATFORM: The device platform. Valid values are \"ANDROID\" or \"IOS\". Supported operators: EQUALS. OS_VERSION: The operating system version. For example, \"10.3.2\". Supported operators: EQUALS, GREATER_THAN, GREATER_THAN_OR_EQUALS, IN, LESS_THAN, LESS_THAN_OR_EQUALS, NOT_IN. MODEL: The device model. For example, \"iPad 5th Gen\". Supported operators: CONTAINS, EQUALS, IN, NOT_IN. AVAILABILITY: The current availability of the device. Valid values are \"AVAILABLE\", \"HIGHLY_AVAILABLE\", \"BUSY\", or \"TEMPORARY_NOT_AVAILABLE\". Supported operators: EQUALS. FORM_FACTOR: The device form factor. Valid values are \"PHONE\" or \"TABLET\". Supported operators: EQUALS. MANUFACTURER: The device manufacturer. For example, \"Apple\". Supported operators: EQUALS, IN, NOT_IN. REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are \"TRUE\" or \"FALSE\". Supported operators: EQUALS. REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are \"TRUE\" or \"FALSE\". Supported operators: EQUALS. This filter will be ignored, as remote debugging is no longer supported. INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. Supported operators: EQUALS, IN, NOT_IN. INSTANCE_LABELS: The label of the device instance. Supported operators: CONTAINS. FLEET_TYPE: The fleet type. Valid values are \"PUBLIC\" or \"PRIVATE\". Supported operators: EQUALS.
", + "DeviceFilter$attribute": "The aspect of a device such as platform or model used as the selection criteria in a device filter. The supported operators for each attribute are provided in the following list. ARN: The Amazon Resource Name (ARN) of the device (for example, arn:aws:devicefarm:us-west-2::device:12345Example). Supported operators: EQUALS, IN, NOT_IN. PLATFORM: The device platform. Valid values are ANDROID or IOS. Supported operators: EQUALS. OS_VERSION: The operating system version (for example, 10.3.2). Supported operators: EQUALS, GREATER_THAN, GREATER_THAN_OR_EQUALS, IN, LESS_THAN, LESS_THAN_OR_EQUALS, NOT_IN. MODEL: The device model (for example, iPad 5th Gen). Supported operators: CONTAINS, EQUALS, IN, NOT_IN. AVAILABILITY: The current availability of the device. Valid values are AVAILABLE, HIGHLY_AVAILABLE, BUSY, or TEMPORARY_NOT_AVAILABLE. Supported operators: EQUALS. FORM_FACTOR: The device form factor. Valid values are PHONE or TABLET. Supported operators: EQUALS. MANUFACTURER: The device manufacturer (for example, Apple). Supported operators: EQUALS, IN, NOT_IN. REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are TRUE or FALSE. Supported operators: EQUALS. REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are TRUE or FALSE. Supported operators: EQUALS. Because remote debugging is no longer supported, this filter is ignored. INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. Supported operators: EQUALS, IN, NOT_IN. INSTANCE_LABELS: The label of the device instance. Supported operators: CONTAINS. FLEET_TYPE: The fleet type. Valid values are PUBLIC or PRIVATE. Supported operators: EQUALS.
An array of one or more filter values used in a device filter.
Operator Values: The IN and NOT_IN operators can take a values array that has more than one element. The other operators require an array with a single element. Attribute Values: The PLATFORM attribute can be set to \"ANDROID\" or \"IOS\". The AVAILABILITY attribute can be set to \"AVAILABLE\", \"HIGHLY_AVAILABLE\", \"BUSY\", or \"TEMPORARY_NOT_AVAILABLE\". The FORM_FACTOR attribute can be set to \"PHONE\" or \"TABLET\". The FLEET_TYPE attribute can be set to \"PUBLIC\" or \"PRIVATE\".
", + "DeviceFilter$values": "An array of one or more filter values used in a device filter. Operator Values: The IN and NOT_IN operators can take a values array that has more than one element. The other operators require an array with a single element. Attribute Values: The PLATFORM attribute can be set to ANDROID or IOS. The AVAILABILITY attribute can be set to AVAILABLE, HIGHLY_AVAILABLE, BUSY, or TEMPORARY_NOT_AVAILABLE. The FORM_FACTOR attribute can be set to PHONE or TABLET. The FLEET_TYPE attribute can be set to PUBLIC or PRIVATE.
" } }, "DeviceFilters": { "base": null, "refs": { - "DeviceSelectionConfiguration$filters": "Used to dynamically select a set of devices for a test run. A filter is made up of an attribute, an operator, and one or more values. Attribute: The aspect of a device such as platform or model used as the selection criteria in a device filter. Allowed values include: ARN: The Amazon Resource Name (ARN) of the device. For example, \"arn:aws:devicefarm:us-west-2::device:12345Example\". PLATFORM: The device platform. Valid values are \"ANDROID\" or \"IOS\". OS_VERSION: The operating system version. For example, \"10.3.2\". MODEL: The device model. For example, \"iPad 5th Gen\". AVAILABILITY: The current availability of the device. Valid values are \"AVAILABLE\", \"HIGHLY_AVAILABLE\", \"BUSY\", or \"TEMPORARY_NOT_AVAILABLE\". FORM_FACTOR: The device form factor. Valid values are \"PHONE\" or \"TABLET\". MANUFACTURER: The device manufacturer. For example, \"Apple\". REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are \"TRUE\" or \"FALSE\". REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are \"TRUE\" or \"FALSE\". This filter will be ignored, as remote debugging is no longer supported. INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance. INSTANCE_LABELS: The label of the device instance. FLEET_TYPE: The fleet type. Valid values are \"PUBLIC\" or \"PRIVATE\". Operator: The filter operator. The EQUALS operator is available for every attribute except INSTANCE_LABELS. The CONTAINS operator is available for the INSTANCE_LABELS and MODEL attributes. The IN and NOT_IN operators are available for the ARN, OS_VERSION, MODEL, MANUFACTURER, and INSTANCE_ARN attributes. The LESS_THAN, GREATER_THAN, LESS_THAN_OR_EQUALS, and GREATER_THAN_OR_EQUALS operators are also available for the OS_VERSION attribute. Values: An array of one or more filter values. Operator Values: The IN and NOT_IN operators can take a values array that has more than one element. The other operators require an array with a single element. Attribute Values: The PLATFORM attribute can be set to \"ANDROID\" or \"IOS\". The AVAILABILITY attribute can be set to \"AVAILABLE\", \"HIGHLY_AVAILABLE\", \"BUSY\", or \"TEMPORARY_NOT_AVAILABLE\". The FORM_FACTOR attribute can be set to \"PHONE\" or \"TABLET\". The FLEET_TYPE attribute can be set to \"PUBLIC\" or \"PRIVATE\".
Used to dynamically select a set of devices for a test run. A filter is made up of an attribute, an operator, and one or more values.
Attribute
The aspect of a device such as platform or model used as the selection criteria in a device filter.
Allowed values include:
ARN: The Amazon Resource Name (ARN) of the device (for example, arn:aws:devicefarm:us-west-2::device:12345Example).
PLATFORM: The device platform. Valid values are ANDROID or IOS.
OS_VERSION: The operating system version (for example, 10.3.2).
MODEL: The device model (for example, iPad 5th Gen).
AVAILABILITY: The current availability of the device. Valid values are AVAILABLE, HIGHLY_AVAILABLE, BUSY, or TEMPORARY_NOT_AVAILABLE.
FORM_FACTOR: The device form factor. Valid values are PHONE or TABLET.
MANUFACTURER: The device manufacturer (for example, Apple).
REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are TRUE or FALSE.
REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are TRUE or FALSE. Because remote debugging is no longer supported, this filter is ignored.
INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance.
INSTANCE_LABELS: The label of the device instance.
FLEET_TYPE: The fleet type. Valid values are PUBLIC or PRIVATE.
Operator
The filter operator.
The EQUALS operator is available for every attribute except INSTANCE_LABELS.
The CONTAINS operator is available for the INSTANCE_LABELS and MODEL attributes.
The IN and NOT_IN operators are available for the ARN, OS_VERSION, MODEL, MANUFACTURER, and INSTANCE_ARN attributes.
The LESS_THAN, GREATER_THAN, LESS_THAN_OR_EQUALS, and GREATER_THAN_OR_EQUALS operators are also available for the OS_VERSION attribute.
Values
An array of one or more filter values.
Operator Values
The IN and NOT_IN operators can take a values array that has more than one element.
The other operators require an array with a single element.
Attribute Values
The PLATFORM attribute can be set to ANDROID or IOS.
The AVAILABILITY attribute can be set to AVAILABLE, HIGHLY_AVAILABLE, BUSY, or TEMPORARY_NOT_AVAILABLE.
The FORM_FACTOR attribute can be set to PHONE or TABLET.
The FLEET_TYPE attribute can be set to PUBLIC or PRIVATE.
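The attribute/operator/values triple described above maps directly onto the SDK's generated types. Below is a minimal, hypothetical Go sketch of a ScheduleRun call that selects devices through a deviceSelectionConfiguration; the generated identifiers used here (DeviceSelectionConfiguration, DeviceFilterAttributeFleetType, RuleOperatorEquals, TestTypeBuiltinFuzz) and the ARNs are assumptions for illustration, not taken from this model file.

```go
// Sketch only: v0.19.0-era request/Send pattern; field and enum constant
// names are best guesses based on the SDK's code generation conventions.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := devicefarm.New(cfg)

	// One filter: FLEET_TYPE EQUALS "PUBLIC". EQUALS takes a single-element
	// values array; only IN and NOT_IN may carry more than one element.
	sel := &devicefarm.DeviceSelectionConfiguration{
		MaxDevices: aws.Int64(5), // caps the devices (and cost) for this run
		Filters: []devicefarm.DeviceFilter{{
			Attribute: devicefarm.DeviceFilterAttributeFleetType,
			Operator:  devicefarm.RuleOperatorEquals,
			Values:    []string{"PUBLIC"},
		}},
	}

	// Either devicePoolArn or deviceSelectionConfiguration must be set.
	req := client.ScheduleRunRequest(&devicefarm.ScheduleRunInput{
		ProjectArn:                   aws.String("arn:aws:devicefarm:us-west-2:111122223333:project:EXAMPLE"),
		AppArn:                       aws.String("arn:aws:devicefarm:us-west-2:111122223333:upload:EXAMPLE"),
		Test:                         &devicefarm.ScheduleRunTest{Type: devicefarm.TestTypeBuiltinFuzz},
		DeviceSelectionConfiguration: sel,
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("scheduled run %s", *resp.Run.Arn)
}
```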
The filters in a device selection result.
", - "ListDevicesRequest$filters": "Used to select a set of devices. A filter is made up of an attribute, an operator, and one or more values.
Attribute: The aspect of a device such as platform or model used as the selection criteria in a device filter.
Allowed values include:
ARN: The Amazon Resource Name (ARN) of the device. For example, \"arn:aws:devicefarm:us-west-2::device:12345Example\".
PLATFORM: The device platform. Valid values are \"ANDROID\" or \"IOS\".
OS_VERSION: The operating system version. For example, \"10.3.2\".
MODEL: The device model. For example, \"iPad 5th Gen\".
AVAILABILITY: The current availability of the device. Valid values are \"AVAILABLE\", \"HIGHLY_AVAILABLE\", \"BUSY\", or \"TEMPORARY_NOT_AVAILABLE\".
FORM_FACTOR: The device form factor. Valid values are \"PHONE\" or \"TABLET\".
MANUFACTURER: The device manufacturer. For example, \"Apple\".
REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are \"TRUE\" or \"FALSE\".
REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are \"TRUE\" or \"FALSE\". This attribute will be ignored, as remote debugging is no longer supported.
INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance.
INSTANCE_LABELS: The label of the device instance.
FLEET_TYPE: The fleet type. Valid values are \"PUBLIC\" or \"PRIVATE\".
Operator: The filter operator.
The EQUALS operator is available for every attribute except INSTANCE_LABELS.
The CONTAINS operator is available for the INSTANCE_LABELS and MODEL attributes.
The IN and NOT_IN operators are available for the ARN, OS_VERSION, MODEL, MANUFACTURER, and INSTANCE_ARN attributes.
The LESS_THAN, GREATER_THAN, LESS_THAN_OR_EQUALS, and GREATER_THAN_OR_EQUALS operators are also available for the OS_VERSION attribute.
Values: An array of one or more filter values.
The IN and NOT_IN operators take a values array that has one or more elements.
The other operators require an array with a single element.
In a request, the AVAILABILITY attribute takes \"AVAILABLE\", \"HIGHLY_AVAILABLE\", \"BUSY\", or \"TEMPORARY_NOT_AVAILABLE\" as values.
Used to select a set of devices. A filter is made up of an attribute, an operator, and one or more values.
Attribute: The aspect of a device such as platform or model used as the selection criteria in a device filter.
Allowed values include:
ARN: The Amazon Resource Name (ARN) of the device (for example, arn:aws:devicefarm:us-west-2::device:12345Example).
PLATFORM: The device platform. Valid values are ANDROID or IOS.
OS_VERSION: The operating system version (for example, 10.3.2).
MODEL: The device model (for example, iPad 5th Gen).
AVAILABILITY: The current availability of the device. Valid values are AVAILABLE, HIGHLY_AVAILABLE, BUSY, or TEMPORARY_NOT_AVAILABLE.
FORM_FACTOR: The device form factor. Valid values are PHONE or TABLET.
MANUFACTURER: The device manufacturer (for example, Apple).
REMOTE_ACCESS_ENABLED: Whether the device is enabled for remote access. Valid values are TRUE or FALSE.
REMOTE_DEBUG_ENABLED: Whether the device is enabled for remote debugging. Valid values are TRUE or FALSE. Because remote debugging is no longer supported, this attribute is ignored.
INSTANCE_ARN: The Amazon Resource Name (ARN) of the device instance.
INSTANCE_LABELS: The label of the device instance.
FLEET_TYPE: The fleet type. Valid values are PUBLIC or PRIVATE.
Operator: The filter operator.
The EQUALS operator is available for every attribute except INSTANCE_LABELS.
The CONTAINS operator is available for the INSTANCE_LABELS and MODEL attributes.
The IN and NOT_IN operators are available for the ARN, OS_VERSION, MODEL, MANUFACTURER, and INSTANCE_ARN attributes.
The LESS_THAN, GREATER_THAN, LESS_THAN_OR_EQUALS, and GREATER_THAN_OR_EQUALS operators are also available for the OS_VERSION attribute.
Values: An array of one or more filter values.
The IN and NOT_IN operators take a values array that has one or more elements.
The other operators require an array with a single element.
In a request, the AVAILABILITY attribute takes the following values: AVAILABLE, HIGHLY_AVAILABLE, BUSY, or TEMPORARY_NOT_AVAILABLE.
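The same filter shape drives device listing. A short sketch, under the same naming assumptions (ListDevicesInput.Filters, DeviceFilterAttributeOsVersion, RuleOperatorGreaterThanOrEquals), of querying for Android devices at OS 10.3.2 or later:

```go
// Sketch: assumes a configured *devicefarm.Client; constant names are
// assumptions from the SDK's enum generation, not verbatim from this diff.
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

func ListRecentAndroidDevices(ctx context.Context, client *devicefarm.Client) ([]devicefarm.Device, error) {
	resp, err := client.ListDevicesRequest(&devicefarm.ListDevicesInput{
		Filters: []devicefarm.DeviceFilter{
			{
				Attribute: devicefarm.DeviceFilterAttributePlatform,
				Operator:  devicefarm.RuleOperatorEquals,
				Values:    []string{"ANDROID"}, // EQUALS: single-element array
			},
			{
				Attribute: devicefarm.DeviceFilterAttributeOsVersion,
				Operator:  devicefarm.RuleOperatorGreaterThanOrEquals,
				Values:    []string{"10.3.2"},
			},
		},
	}).Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.Devices, nil
}
```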
The device's form factor.
Allowed values include:
PHONE: The phone form factor.
TABLET: The tablet form factor.
The device's form factor.
Allowed values include:
PHONE
TABLET
Comma-separated list of paths in the test execution environment where the artifacts generated by the customer's tests will be pulled from.
" + "CustomerArtifactPaths$deviceHostPaths": "Comma-separated list of paths in the test execution environment where the artifacts generated by the customer's tests are pulled from.
" } }, "DeviceInstance": { "base": "Represents the device instance.
", "refs": { "DeviceInstances$member": null, - "GetDeviceInstanceResult$deviceInstance": "An object containing information about your device instance.
", - "UpdateDeviceInstanceResult$deviceInstance": "An object containing information about your device instance.
" + "GetDeviceInstanceResult$deviceInstance": "An object that contains information about your device instance.
", + "UpdateDeviceInstanceResult$deviceInstance": "An object that contains information about your device instance.
" } }, "DeviceInstances": { "base": null, "refs": { - "Device$instances": "The instances belonging to this device.
", - "ListDeviceInstancesResult$deviceInstances": "An object containing information about your device instances.
" + "Device$instances": "The instances that belong to this device.
", + "ListDeviceInstancesResult$deviceInstances": "An object that contains information about your device instances.
" } }, "DeviceMinutes": { @@ -583,10 +653,10 @@ "DevicePlatform": { "base": null, "refs": { - "Device$platform": "The device's platform.
Allowed values include:
ANDROID: The Android platform.
IOS: The iOS platform.
The platform of the device (e.g., ANDROID or IOS).
", + "Device$platform": "The device's platform.
Allowed values include:
ANDROID
IOS
The platform of the device (for example, ANDROID or IOS).
The run's platform.
Allowed values include:
ANDROID: The Android platform.
IOS: The iOS platform.
The run's platform.
Allowed values include:
ANDROID
IOS
The newly created device pool.
", "DevicePools$member": null, - "GetDevicePoolResult$devicePool": "An object containing information about the requested device pool.
", + "GetDevicePoolResult$devicePool": "An object that contains information about the requested device pool.
", "UpdateDevicePoolResult$devicePool": "The device pool you just updated.
" } }, @@ -625,13 +695,13 @@ } }, "DeviceSelectionConfiguration": { - "base": "Represents the device filters used in a test run as well as the maximum number of devices to be included in the run. It is passed in as the deviceSelectionConfiguration
request parameter in ScheduleRun.
Represents the device filters used in a test run and the maximum number of devices to be included in the run. It is passed in as the deviceSelectionConfiguration request parameter in ScheduleRun.
The filter criteria used to dynamically select a set of devices for a test run, as well as the maximum number of devices to be included in the run.
Either devicePoolArn or deviceSelectionConfiguration is required in a request.
The filter criteria used to dynamically select a set of devices for a test run and the maximum number of devices to be included in the run.
Either devicePoolArn or deviceSelectionConfiguration is required in a request.
Contains the run results requested by the device selection configuration as well as how many devices were returned. For an example of the JSON response syntax, see ScheduleRun.
", + "base": "Contains the run results requested by the device selection configuration and how many devices were returned. For an example of the JSON response syntax, see ScheduleRun.
", "refs": { "Run$deviceSelectionResult": "The results of a device filter used to select the devices for a test run.
" } @@ -652,6 +722,7 @@ "Location$latitude": "The latitude.
", "Location$longitude": "The longitude.
", "MonetaryAmount$amount": "The numerical amount of an offering or transaction.
", + "TestGridSession$billingMinutes": "The number of billed minutes that were used for this session.
", "TrialMinutes$total": "The total number of free trial minutes that the account started with.
", "TrialMinutes$remaining": "The number of free trial minutes remaining in the account.
" } @@ -673,12 +744,12 @@ "ExecutionResult": { "base": null, "refs": { - "Job$result": "The job's result.
Allowed values include:
PENDING: A pending condition.
PASSED: A passing condition.
WARNED: A warning condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
ERRORED: An error condition.
STOPPED: A stopped condition.
The problem's result.
Allowed values include:
PENDING: A pending condition.
PASSED: A passing condition.
WARNED: A warning condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
ERRORED: An error condition.
STOPPED: A stopped condition.
The result of the remote access session. Can be any of the following:
PENDING: A pending condition.
PASSED: A passing condition.
WARNED: A warning condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
ERRORED: An error condition.
STOPPED: A stopped condition.
The run's result.
Allowed values include:
PENDING: A pending condition.
PASSED: A passing condition.
WARNED: A warning condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
ERRORED: An error condition.
STOPPED: A stopped condition.
The suite's result.
Allowed values include:
PENDING: A pending condition.
PASSED: A passing condition.
WARNED: A warning condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
ERRORED: An error condition.
STOPPED: A stopped condition.
The test's result.
Allowed values include:
PENDING: A pending condition.
PASSED: A passing condition.
WARNED: A warning condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
ERRORED: An error condition.
STOPPED: A stopped condition.
The job's result.
Allowed values include:
PENDING
PASSED
WARNED
FAILED
SKIPPED
ERRORED
STOPPED
The problem's result.
Allowed values include:
PENDING
PASSED
WARNED
FAILED
SKIPPED
ERRORED
STOPPED
The result of the remote access session. Can be any of the following:
PENDING
PASSED
WARNED
FAILED
SKIPPED
ERRORED
STOPPED
The run's result.
Allowed values include:
PENDING
PASSED
WARNED
FAILED
SKIPPED
ERRORED
STOPPED
The suite's result.
Allowed values include:
PENDING
PASSED
WARNED
FAILED
SKIPPED
ERRORED
STOPPED
The test's result.
Allowed values include:
PENDING
PASSED
WARNED
FAILED
SKIPPED
ERRORED
STOPPED
The job's status.
Allowed values include:
PENDING: A pending status.
PENDING_CONCURRENCY: A pending concurrency status.
PENDING_DEVICE: A pending device status.
PROCESSING: A processing status.
SCHEDULING: A scheduling status.
PREPARING: A preparing status.
RUNNING: A running status.
COMPLETED: A completed status.
STOPPING: A stopping status.
The status of the remote access session. Can be any of the following:
PENDING: A pending status.
PENDING_CONCURRENCY: A pending concurrency status.
PENDING_DEVICE: A pending device status.
PROCESSING: A processing status.
SCHEDULING: A scheduling status.
PREPARING: A preparing status.
RUNNING: A running status.
COMPLETED: A completed status.
STOPPING: A stopping status.
The run's status.
Allowed values include:
PENDING: A pending status.
PENDING_CONCURRENCY: A pending concurrency status.
PENDING_DEVICE: A pending device status.
PROCESSING: A processing status.
SCHEDULING: A scheduling status.
PREPARING: A preparing status.
RUNNING: A running status.
COMPLETED: A completed status.
STOPPING: A stopping status.
The suite's status.
Allowed values include:
PENDING: A pending status.
PENDING_CONCURRENCY: A pending concurrency status.
PENDING_DEVICE: A pending device status.
PROCESSING: A processing status.
SCHEDULING: A scheduling status.
PREPARING: A preparing status.
RUNNING: A running status.
COMPLETED: A completed status.
STOPPING: A stopping status.
The test's status.
Allowed values include:
PENDING: A pending status.
PENDING_CONCURRENCY: A pending concurrency status.
PENDING_DEVICE: A pending device status.
PROCESSING: A processing status.
SCHEDULING: A scheduling status.
PREPARING: A preparing status.
RUNNING: A running status.
COMPLETED: A completed status.
STOPPING: A stopping status.
The job's status.
Allowed values include:
PENDING
PENDING_CONCURRENCY
PENDING_DEVICE
PROCESSING
SCHEDULING
PREPARING
RUNNING
COMPLETED
STOPPING
The status of the remote access session. Can be any of the following:
PENDING
PENDING_CONCURRENCY
PENDING_DEVICE
PROCESSING
SCHEDULING
PREPARING
RUNNING
COMPLETED
STOPPING
The run's status.
Allowed values include:
PENDING
PENDING_CONCURRENCY
PENDING_DEVICE
PROCESSING
SCHEDULING
PREPARING
RUNNING
COMPLETED
STOPPING
The suite's status.
Allowed values include:
PENDING
PENDING_CONCURRENCY
PENDING_DEVICE
PROCESSING
SCHEDULING
PREPARING
RUNNING
COMPLETED
STOPPING
The test's status.
Allowed values include:
PENDING
PENDING_CONCURRENCY
PENDING_DEVICE
PROCESSING
SCHEDULING
PREPARING
RUNNING
COMPLETED
STOPPING
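Taken together, the status values above describe a run's lifecycle and the result values its outcome. A hedged sketch of the usual polling loop follows; constant names such as ExecutionStatusCompleted and ExecutionResultPassed are assumed from the SDK's enum generation.

```go
package example

import (
	"context"
	"fmt"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

// WaitForRun polls GetRun until the run reaches COMPLETED, then reports
// whether the result was PASSED. Identifier names are assumptions.
func WaitForRun(ctx context.Context, client *devicefarm.Client, runArn string) error {
	for {
		resp, err := client.GetRunRequest(&devicefarm.GetRunInput{
			Arn: aws.String(runArn),
		}).Send(ctx)
		if err != nil {
			return err
		}
		// Status moves through PENDING, PENDING_CONCURRENCY, PENDING_DEVICE,
		// PROCESSING, SCHEDULING, PREPARING, RUNNING, COMPLETED, STOPPING.
		if resp.Run.Status == devicefarm.ExecutionStatusCompleted {
			if resp.Run.Result != devicefarm.ExecutionResultPassed {
				return fmt.Errorf("run ended with result %s", resp.Run.Result)
			}
			return nil
		}
		time.Sleep(30 * time.Second)
	}
}
```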
Represents a request to the get test operation.
", "refs": { @@ -867,7 +958,7 @@ "HostAddress": { "base": null, "refs": { - "RemoteAccessSession$hostAddress": "IP address of the EC2 host where you need to connect to remotely debug devices. Only returned if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
" + "RemoteAccessSession$hostAddress": "IP address of the EC2 host where you need to connect to remotely debug devices. Only returned if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
" } }, "IdempotencyException": { @@ -900,30 +991,30 @@ "InstanceLabels": { "base": null, "refs": { - "DeviceInstance$labels": "An array of strings describing the device instance.
", + "DeviceInstance$labels": "An array of strings that describe the device instance.
", "UpdateDeviceInstanceRequest$labels": "An array of strings that you want to associate with the device instance.
" } }, "InstanceProfile": { "base": "Represents the instance profile.
", "refs": { - "CreateInstanceProfileResult$instanceProfile": "An object containing information about your instance profile.
", - "DeviceInstance$instanceProfile": "A object containing information about the instance profile.
", - "GetInstanceProfileResult$instanceProfile": "An object containing information about your instance profile.
", + "CreateInstanceProfileResult$instanceProfile": "An object that contains information about your instance profile.
", + "DeviceInstance$instanceProfile": "A object that contains information about the instance profile.
", + "GetInstanceProfileResult$instanceProfile": "An object that contains information about an instance profile.
", "InstanceProfiles$member": null, - "UpdateInstanceProfileResult$instanceProfile": "An object containing information about your instance profile.
" + "UpdateInstanceProfileResult$instanceProfile": "An object that contains information about your instance profile.
" } }, "InstanceProfiles": { "base": null, "refs": { - "ListInstanceProfilesResult$instanceProfiles": "An object containing information about your instance profiles.
" + "ListInstanceProfilesResult$instanceProfiles": "An object that contains information about your instance profiles.
" } }, "InstanceStatus": { "base": null, "refs": { - "DeviceInstance$status": "The status of the device instance. Valid values are listed below.
" + "DeviceInstance$status": "The status of the device instance. Valid values are listed here.
" } }, "Integer": { @@ -936,17 +1027,17 @@ "Counters$errored": "The number of errored entities.
", "Counters$stopped": "The number of stopped entities.
", "Counters$skipped": "The number of skipped entities.
", - "CreateDevicePoolRequest$maxDevices": "The number of devices that Device Farm can add to your device pool. Device Farm adds devices that are available and that meet the criteria that you assign for the rules
parameter. Depending on how many devices meet these constraints, your device pool might contain fewer devices than the value for this parameter.
By specifying the maximum number of devices, you can control the costs that you incur by running tests.
", - "DevicePool$maxDevices": "The number of devices that Device Farm can add to your device pool. Device Farm adds devices that are available and that meet the criteria that you assign for the rules
parameter. Depending on how many devices meet these constraints, your device pool might contain fewer devices than the value for this parameter.
By specifying the maximum number of devices, you can control the costs that you incur by running tests.
", + "CreateDevicePoolRequest$maxDevices": "The number of devices that Device Farm can add to your device pool. Device Farm adds devices that are available and meet the criteria that you assign for the rules
parameter. Depending on how many devices meet these constraints, your device pool might contain fewer devices than the value for this parameter.
By specifying the maximum number of devices, you can control the costs that you incur by running tests.
", + "DevicePool$maxDevices": "The number of devices that Device Farm can add to your device pool. Device Farm adds devices that are available and meet the criteria that you assign for the rules
parameter. Depending on how many devices meet these constraints, your device pool might contain fewer devices than the value for this parameter.
By specifying the maximum number of devices, you can control the costs that you incur by running tests.
", "DeviceSelectionConfiguration$maxDevices": "The maximum number of devices to be included in a test run.
", "DeviceSelectionResult$matchedDevicesCount": "The number of devices that matched the device filter selection criteria.
", "DeviceSelectionResult$maxDevices": "The maximum number of devices to be selected by a device filter and included in a test run.
", - "ListDeviceInstancesRequest$maxResults": "An integer specifying the maximum number of items you want to return in the API response.
", - "ListInstanceProfilesRequest$maxResults": "An integer specifying the maximum number of items you want to return in the API response.
", - "ListVPCEConfigurationsRequest$maxResults": "An integer specifying the maximum number of items you want to return in the API response.
", + "ListDeviceInstancesRequest$maxResults": "An integer that specifies the maximum number of items you want to return in the API response.
", + "ListInstanceProfilesRequest$maxResults": "An integer that specifies the maximum number of items you want to return in the API response.
", + "ListVPCEConfigurationsRequest$maxResults": "An integer that specifies the maximum number of items you want to return in the API response.
", "MaxSlotMap$value": null, "OfferingStatus$quantity": "The number of available devices in the offering.
", - "PurchaseOfferingRequest$quantity": "The number of device slots you wish to purchase in an offering request.
", + "PurchaseOfferingRequest$quantity": "The number of device slots to purchase in an offering request.
", "PurchasedDevicesMap$value": null, "RenewOfferingRequest$quantity": "The quantity requested in an offering renewal.
", "Resolution$width": "The screen resolution's width, expressed in pixels.
", @@ -961,8 +1052,13 @@ "InteractionMode": { "base": null, "refs": { - "CreateRemoteAccessSessionRequest$interactionMode": "The interaction mode of the remote access session. Valid values are:
INTERACTIVE: You can interact with the iOS device by viewing, touching, and rotating the screen. You cannot run XCUITest framework-based tests in this mode.
NO_VIDEO: You are connected to the device but cannot interact with it or view the screen. This mode has the fastest test execution speed. You can run XCUITest framework-based tests in this mode.
VIDEO_ONLY: You can view the screen but cannot touch or rotate it. You can run XCUITest framework-based tests and watch the screen in this mode.
The interaction mode of the remote access session. Valid values are:
INTERACTIVE: You can interact with the iOS device by viewing, touching, and rotating the screen. You cannot run XCUITest framework-based tests in this mode.
NO_VIDEO: You are connected to the device but cannot interact with it or view the screen. This mode has the fastest test execution speed. You can run XCUITest framework-based tests in this mode.
VIDEO_ONLY: You can view the screen but cannot touch or rotate it. You can run XCUITest framework-based tests and watch the screen in this mode.
The interaction mode of the remote access session. Valid values are:
INTERACTIVE: You can interact with the iOS device by viewing, touching, and rotating the screen. You cannot run XCUITest framework-based tests in this mode.
NO_VIDEO: You are connected to the device, but cannot interact with it or view the screen. This mode has the fastest test execution speed. You can run XCUITest framework-based tests in this mode.
VIDEO_ONLY: You can view the screen, but cannot touch or rotate it. You can run XCUITest framework-based tests and watch the screen in this mode.
The interaction mode of the remote access session. Valid values are:
INTERACTIVE: You can interact with the iOS device by viewing, touching, and rotating the screen. You cannot run XCUITest framework-based tests in this mode.
NO_VIDEO: You are connected to the device, but cannot interact with it or view the screen. This mode has the fastest test execution speed. You can run XCUITest framework-based tests in this mode.
VIDEO_ONLY: You can view the screen, but cannot touch or rotate it. You can run XCUITest framework-based tests and watch the screen in this mode.
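A brief, assumption-laden sketch of requesting one of these modes on a new remote access session; InteractionModeVideoOnly and the input field names are guesses at the generated identifiers.

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

// StartVideoOnlySession requests a VIDEO_ONLY remote access session: the
// screen is visible but not touchable, and XCUITest-based tests can run
// while you watch.
func StartVideoOnlySession(ctx context.Context, client *devicefarm.Client, projectArn, deviceArn string) (*devicefarm.RemoteAccessSession, error) {
	resp, err := client.CreateRemoteAccessSessionRequest(&devicefarm.CreateRemoteAccessSessionInput{
		ProjectArn:      aws.String(projectArn),
		DeviceArn:       aws.String(deviceArn),
		InteractionMode: devicefarm.InteractionModeVideoOnly,
	}).Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.RemoteAccessSession, nil
}
```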
An internal exception was raised in the service. Contact aws-devicefarm-support@amazon.com if you see this error.
", + "refs": { } }, "InvalidOperationException": { @@ -973,13 +1069,13 @@ "IosPaths": { "base": null, "refs": { - "CustomerArtifactPaths$iosPaths": "Comma-separated list of paths on the iOS device where the artifacts generated by the customer's tests will be pulled from.
" + "CustomerArtifactPaths$iosPaths": "Comma-separated list of paths on the iOS device where the artifacts generated by the customer's tests are pulled from.
" } }, "Job": { "base": "Represents a device.
", "refs": { - "GetJobResult$job": "An object containing information about the requested job.
", + "GetJobResult$job": "An object that contains information about the requested job.
", "Jobs$member": null, "StopJobResult$job": "The job that was stopped.
" } @@ -987,13 +1083,13 @@ "JobTimeoutMinutes": { "base": null, "refs": { - "AccountSettings$maxJobTimeoutMinutes": "The maximum number of minutes a test run will execute before it times out.
", - "AccountSettings$defaultJobTimeoutMinutes": "The default number of minutes (at the account level) a test run will execute before it times out. The default value is 150 minutes.
", - "CreateProjectRequest$defaultJobTimeoutMinutes": "Sets the execution timeout value (in minutes) for a project. All test runs in this project will use the specified execution timeout value unless overridden when scheduling a run.
", - "ExecutionConfiguration$jobTimeoutMinutes": "The number of minutes a test run will execute before it times out.
", - "Project$defaultJobTimeoutMinutes": "The default number of minutes (at the project level) a test run will execute before it times out. The default value is 150 minutes.
", - "Run$jobTimeoutMinutes": "The number of minutes the job will execute before it times out.
", - "UpdateProjectRequest$defaultJobTimeoutMinutes": "The number of minutes a test run in the project will execute before it times out.
" + "AccountSettings$maxJobTimeoutMinutes": "The maximum number of minutes a test run executes before it times out.
", + "AccountSettings$defaultJobTimeoutMinutes": "The default number of minutes (at the account level) a test run executes before it times out. The default value is 150 minutes.
", + "CreateProjectRequest$defaultJobTimeoutMinutes": "Sets the execution timeout value (in minutes) for a project. All test runs in this project use the specified execution timeout value unless overridden when scheduling a run.
", + "ExecutionConfiguration$jobTimeoutMinutes": "The number of minutes a test run executes before it times out.
", + "Project$defaultJobTimeoutMinutes": "The default number of minutes (at the project level) a test run executes before it times out. The default value is 150 minutes.
", + "Run$jobTimeoutMinutes": "The number of minutes the job executes before it times out.
", + "UpdateProjectRequest$defaultJobTimeoutMinutes": "The number of minutes a test run in the project executes before it times out.
" } }, "Jobs": { @@ -1167,6 +1263,46 @@ "refs": { } }, + "ListTestGridProjectsRequest": { + "base": null, + "refs": { + } + }, + "ListTestGridProjectsResult": { + "base": null, + "refs": { + } + }, + "ListTestGridSessionActionsRequest": { + "base": null, + "refs": { + } + }, + "ListTestGridSessionActionsResult": { + "base": null, + "refs": { + } + }, + "ListTestGridSessionArtifactsRequest": { + "base": null, + "refs": { + } + }, + "ListTestGridSessionArtifactsResult": { + "base": null, + "refs": { + } + }, + "ListTestGridSessionsRequest": { + "base": null, + "refs": { + } + }, + "ListTestGridSessionsResult": { + "base": null, + "refs": { + } + }, "ListTestsRequest": { "base": "Represents a request to the list tests operation.
", "refs": { @@ -1208,7 +1344,7 @@ } }, "Location": { - "base": "Represents a latitude and longitude pair, expressed in geographic coordinate system degrees (for example 47.6204, -122.3491).
Elevation is currently not supported.
", + "base": "Represents a latitude and longitude pair, expressed in geographic coordinate system degrees (for example, 47.6204, -122.3491).
Elevation is currently not supported.
", "refs": { "Run$location": "Information about the location that is used for the run.
", "ScheduleRunConfiguration$location": "Information about the location that is used for the run.
" @@ -1231,6 +1367,7 @@ "NetworkProfile$downlinkDelayMs": "Delay time for all packets to destination in milliseconds as an integer from 0 to 2000.
", "NetworkProfile$uplinkJitterMs": "Time variation in the delay of received packets in milliseconds as an integer from 0 to 2000.
", "NetworkProfile$downlinkJitterMs": "Time variation in the delay of received packets in milliseconds as an integer from 0 to 2000.
", + "TestGridSessionAction$duration": "The time, in milliseconds, that the action took to complete in the browser.
", "UpdateNetworkProfileRequest$uplinkBandwidthBits": "The data throughput rate in bits per second, as an integer from 0 to 104857600.
", "UpdateNetworkProfileRequest$downlinkBandwidthBits": "The data throughput rate in bits per second, as an integer from 0 to 104857600.
", "UpdateNetworkProfileRequest$uplinkDelayMs": "Delay time for all packets to destination in milliseconds as an integer from 0 to 2000.
", @@ -1239,6 +1376,15 @@ "UpdateNetworkProfileRequest$downlinkJitterMs": "Time variation in the delay of received packets in milliseconds as an integer from 0 to 2000.
" } }, + "MaxPageSize": { + "base": null, + "refs": { + "ListTestGridProjectsRequest$maxResult": "Return no more than this number of results.
", + "ListTestGridSessionActionsRequest$maxResult": "The maximum number of sessions to return per response.
", + "ListTestGridSessionArtifactsRequest$maxResult": "The maximum number of results to be returned by a request.
", + "ListTestGridSessionsRequest$maxResult": "Return only this many results at a time.
" + } + }, "MaxSlotMap": { "base": null, "refs": { @@ -1249,6 +1395,7 @@ "base": null, "refs": { "ArgumentException$message": "Any additional information about the exception.
", + "CannotDeleteException$message": null, "CreateDevicePoolRequest$description": "The device pool's description.
", "CreateInstanceProfileRequest$description": "The description of your instance profile.
", "CreateNetworkProfileRequest$description": "The description of the network profile.
", @@ -1256,14 +1403,15 @@ "IdempotencyException$message": "Any additional information about the exception.
", "IncompatibilityMessage$message": "A message about the incompatibility.
", "InstanceProfile$description": "The description of the instance profile.
", + "InternalServiceException$message": null, "InvalidOperationException$message": null, "Job$message": "A message about the job's result.
", "LimitExceededException$message": "Any additional information about the exception.
", "NetworkProfile$description": "The description of the network profile.
", "NotEligibleException$message": "The HTTP response code of a Not Eligible exception.
", "NotFoundException$message": "Any additional information about the exception.
", - "Offering$description": "A string describing the offering.
", - "OfferingPromotion$description": "A string describing the offering promotion.
", + "Offering$description": "A string that describes the offering.
", + "OfferingPromotion$description": "A string that describes the offering promotion.
", "Problem$message": "A message about the problem's result.
", "RemoteAccessSession$message": "A message about the remote access session.
", "Run$message": "A message about the run's result.
", @@ -1271,7 +1419,7 @@ "Suite$message": "A message about the suite's result.
", "Test$message": "A message about the test's result.
", "UniqueProblem$message": "A message about the unique problems' result.
", - "UpdateDevicePoolRequest$description": "A description of the device pool you wish to update.
", + "UpdateDevicePoolRequest$description": "A description of the device pool to update.
", "UpdateInstanceProfileRequest$description": "The updated description for your instance profile.
", "UpdateNetworkProfileRequest$description": "The description of the network profile about which you are returning information.
", "Upload$message": "A message about the upload's result.
" @@ -1284,7 +1432,7 @@ } }, "MonetaryAmount": { - "base": "A number representing the monetary amount for an offering or transaction.
", + "base": "A number that represents the monetary amount for an offering or transaction.
", "refs": { "OfferingTransaction$cost": "The cost of an offering transaction.
", "RecurringCharge$cost": "The cost of the recurring charge.
" @@ -1296,10 +1444,10 @@ "Artifact$name": "The artifact's name.
", "CreateDevicePoolRequest$name": "The device pool's name.
", "CreateInstanceProfileRequest$name": "The name of your instance profile.
", - "CreateNetworkProfileRequest$name": "The name you wish to specify for the new network profile.
", + "CreateNetworkProfileRequest$name": "The name for the new network profile.
", "CreateProjectRequest$name": "The project's name.
", - "CreateRemoteAccessSessionRequest$name": "The name of the remote access session that you wish to create.
", - "CreateUploadRequest$name": "The upload's file name. The name should not contain the '/' character. If uploading an iOS app, the file name needs to end with the .ipa
extension. If uploading an Android app, the file name needs to end with the .apk
extension. For all others, the file name must end with the .zip
file extension.
The name of the remote access session to create.
", + "CreateUploadRequest$name": "The upload's file name. The name should not contain any forward slashes (/
). If you are uploading an iOS app, the file name must end with the .ipa
extension. If you are uploading an Android app, the file name must end with the .apk
extension. For all others, the file name must end with the .zip
file extension.
The device's display name.
", "DevicePool$name": "The device pool's name.
", "InstanceProfile$name": "The name of the instance profile.
", @@ -1312,11 +1460,11 @@ "ScheduleRunRequest$name": "The name for the run to be scheduled.
", "Suite$name": "The suite's name.
", "Test$name": "The test's name.
", - "UpdateDevicePoolRequest$name": "A string representing the name of the device pool you wish to update.
", + "UpdateDevicePoolRequest$name": "A string that represents the name of the device pool to update.
", "UpdateInstanceProfileRequest$name": "The updated name for your instance profile.
", "UpdateNetworkProfileRequest$name": "The name of the network profile about which you are returning information.
", - "UpdateProjectRequest$name": "A string representing the new name of the project that you are updating.
", - "UpdateUploadRequest$name": "The upload's test spec file name. The name should not contain the '/' character. The test spec file name must end with the .yaml
or .yml
file extension.
A string that represents the new name of the project that you are updating.
", + "UpdateUploadRequest$name": "The upload's test spec file name. The name must not contain any forward slashes (/). The test spec file name must end with the .yaml
or .yml
file extension.
The upload's file name.
" } }, @@ -1333,10 +1481,10 @@ "NetworkProfileType": { "base": null, "refs": { - "CreateNetworkProfileRequest$type": "The type of network profile you wish to create. Valid values are listed below.
", - "ListNetworkProfilesRequest$type": "The type of network profile you wish to return information about. Valid values are listed below.
", - "NetworkProfile$type": "The type of network profile. Valid values are listed below.
", - "UpdateNetworkProfileRequest$type": "The type of network profile you wish to return information about. Valid values are listed below.
" + "CreateNetworkProfileRequest$type": "The type of network profile to create. Valid values are listed here.
", + "ListNetworkProfilesRequest$type": "The type of network profile to return information about. Valid values are listed here.
", + "NetworkProfile$type": "The type of network profile. Valid values are listed here.
", + "UpdateNetworkProfileRequest$type": "The type of network profile to return information about. Valid values are listed here.
" } }, "NetworkProfiles": { @@ -1428,21 +1576,21 @@ "OfferingType": { "base": null, "refs": { - "Offering$type": "The type of offering (e.g., \"RECURRING\") for a device.
" + "Offering$type": "The type of offering (for example, RECURRING
) for a device.
A value representing the list offering results.
" + "ListOfferingsResult$offerings": "A value that represents the list offering results.
" } }, "PackageIds": { "base": null, "refs": { - "CreateInstanceProfileRequest$excludeAppPackagesFromCleanup": "An array of strings specifying the list of app packages that should not be cleaned up from the device after a test run is over.
The list of packages is only considered if you set packageCleanup to true.
An array of strings specifying the list of app packages that should not be cleaned up from the device after a test run is over.
The list of packages is only considered if you set packageCleanup to true.
An array of strings specifying the list of app packages that should not be cleaned up from the device after a test run is over.
The list of packages is only considered if you set packageCleanup to true.
An array of strings that specifies the list of app packages that should not be cleaned up from the device after a test run.
The list of packages is considered only if you set packageCleanup to true.
An array of strings containing the list of app packages that should not be cleaned up from the device after a test run completes.
The list of packages is considered only if you set packageCleanup to true.
An array of strings that specifies the list of app packages that should not be cleaned up from the device after a test run is over.
The list of packages is only considered if you set packageCleanup to true.
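A hypothetical sketch of wiring these fields together on CreateInstanceProfile follows; the package IDs are invented, and the exclude list takes effect only because packageCleanup is true, per the description above.

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

// CreateKeepHelperProfile creates an instance profile that cleans up after a
// run but leaves two (hypothetical) packages installed on the device.
func CreateKeepHelperProfile(ctx context.Context, client *devicefarm.Client) (*devicefarm.InstanceProfile, error) {
	resp, err := client.CreateInstanceProfileRequest(&devicefarm.CreateInstanceProfileInput{
		Name:           aws.String("keep-helper-apps"),
		PackageCleanup: aws.Bool(true),
		ExcludeAppPackagesFromCleanup: []string{
			"com.example.helper", // hypothetical package IDs, for illustration
			"com.example.metrics",
		},
	}).Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.InstanceProfile, nil
}
```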
An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "GetOfferingStatusResult$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListArtifactsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListArtifactsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListArtifactsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListDeviceInstancesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListDeviceInstancesResult$nextToken": "An identifier that can be used in the next call to this operation to return the next set of items in the list.
", "ListDevicePoolsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListDevicePoolsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListDevicePoolsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListDevicesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListDevicesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListDevicesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListInstanceProfilesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListInstanceProfilesResult$nextToken": "An identifier that can be used in the next call to this operation to return the next set of items in the list.
", "ListJobsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListJobsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListJobsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListNetworkProfilesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListNetworkProfilesResult$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListOfferingPromotionsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", @@ -1471,21 +1619,29 @@ "ListOfferingsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListOfferingsResult$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListProjectsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListProjectsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListProjectsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListRemoteAccessSessionsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListRemoteAccessSessionsResult$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListRunsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListRunsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListRunsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListSamplesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListSamplesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListSamplesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListSuitesRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListSuitesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListSuitesResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListTestGridProjectsRequest$nextToken": "From a response, used to continue a paginated listing.
", + "ListTestGridProjectsResult$nextToken": "Used for pagination. Pass into ListTestGridProjects to get more results in a paginated request.
", + "ListTestGridSessionActionsRequest$nextToken": "Pagination token.
", + "ListTestGridSessionActionsResult$nextToken": "Pagination token.
", + "ListTestGridSessionArtifactsRequest$nextToken": "Pagination token.
", + "ListTestGridSessionArtifactsResult$nextToken": "Pagination token.
", + "ListTestGridSessionsRequest$nextToken": "Pagination token.
", + "ListTestGridSessionsResult$nextToken": "Pagination token.
", "ListTestsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListTestsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListTestsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListUniqueProblemsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListUniqueProblemsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListUniqueProblemsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListUploadsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", - "ListUploadsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned, which can be used in a subsequent call to this operation to return the next set of items in the list.
", + "ListUploadsResult$nextToken": "If the number of items that are returned is significantly large, this is an identifier that is also returned. It can be used in a subsequent call to this operation to return the next set of items in the list.
", "ListVPCEConfigurationsRequest$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
", "ListVPCEConfigurationsResult$nextToken": "An identifier that was returned from the previous call to this operation, which can be used to return the next set of items in the list.
" } @@ -1526,9 +1682,9 @@ "base": "Represents an operating-system neutral workspace for running and managing tests.
", "refs": { "CreateProjectResult$project": "The newly created project.
", - "GetProjectResult$project": "The project you wish to get information about.
", + "GetProjectResult$project": "The project to get information about.
", "Projects$member": null, - "UpdateProjectResult$project": "The project you wish to update.
" + "UpdateProjectResult$project": "The project to update.
" } }, "Projects": { @@ -1543,7 +1699,7 @@ } }, "PurchaseOfferingResult": { - "base": "The result of the purchase offering (e.g., success or failure).
", + "base": "The result of the purchase offering (for example, success or failure).
", "refs": { } }, @@ -1562,7 +1718,7 @@ } }, "RecurringCharge": { - "base": "Specifies whether charges for devices will be recurring.
", + "base": "Specifies whether charges for devices are recurring.
", "refs": { "RecurringCharges$member": null } @@ -1570,7 +1726,7 @@ "RecurringChargeFrequency": { "base": null, "refs": { - "RecurringCharge$frequency": "The frequency in which charges will recur.
" + "RecurringCharge$frequency": "The frequency in which charges recur.
" } }, "RecurringCharges": { @@ -1585,17 +1741,17 @@ "CreateRemoteAccessSessionResult$remoteAccessSession": "A container that describes the remote access session when the request to create a remote access session is sent.
", "GetRemoteAccessSessionResult$remoteAccessSession": "A container that lists detailed information about the remote access session.
", "RemoteAccessSessions$member": null, - "StopRemoteAccessSessionResult$remoteAccessSession": "A container representing the metadata from the service about the remote access session you are stopping.
" + "StopRemoteAccessSessionResult$remoteAccessSession": "A container that represents the metadata from the service about the remote access session you are stopping.
" } }, "RemoteAccessSessions": { "base": null, "refs": { - "ListRemoteAccessSessionsResult$remoteAccessSessions": "A container representing the metadata from the service about each remote access session you are requesting.
" + "ListRemoteAccessSessionsResult$remoteAccessSessions": "A container that represents the metadata from the service about each remote access session you are requesting.
" } }, "RenewOfferingRequest": { - "base": "A request representing an offering renewal.
", + "base": "A request that represents an offering renewal.
", "refs": { } }, @@ -1610,6 +1766,26 @@ "Device$resolution": "The resolution of the device.
" } }, + "ResourceDescription": { + "base": null, + "refs": { + "CreateTestGridProjectRequest$description": "Human-readable description of the project.
", + "UpdateTestGridProjectRequest$description": "Human-readable description for the project.
" + } + }, + "ResourceId": { + "base": null, + "refs": { + "GetTestGridSessionRequest$sessionId": "An ID associated with this session.
" + } + }, + "ResourceName": { + "base": null, + "refs": { + "CreateTestGridProjectRequest$name": "Human-readable name of the Selenium testing project.
", + "UpdateTestGridProjectRequest$name": "Human-readable name for the project.
" + } + }, "Rule": { "base": "Represents a condition for a device pool.
", "refs": { @@ -1619,7 +1795,7 @@ "RuleOperator": { "base": null, "refs": { - "DeviceFilter$operator": "Specifies how Device Farm compares the filter's attribute to the value. For the operators that are supported by each attribute, see the attribute descriptions.
", + "DeviceFilter$operator": "Specifies how Device Farm compares the filter's attribute to the value. See the attribute descriptions.
", "Rule$operator": "Specifies how Device Farm compares the rule's attribute to the value. For the operators that are supported by each attribute, see the attribute descriptions.
" } }, @@ -1628,13 +1804,13 @@ "refs": { "CreateDevicePoolRequest$rules": "The device pool's rules.
", "DevicePool$rules": "Information about the device pool's rules.
", - "UpdateDevicePoolRequest$rules": "Represents the rules you wish to modify for the device pool. Updating rules is optional; however, if you choose to update rules for your request, the update will replace the existing rules.
" + "UpdateDevicePoolRequest$rules": "Represents the rules to modify for the device pool. Updating rules is optional. If you update rules for your request, the update replaces the existing rules.
" } }, "Run": { - "base": "Represents a test run on a set of devices with a given app package, test parameters, etc.
", + "base": "Represents a test run on a set of devices with a given app package, test parameters, and so on.
", "refs": { - "GetRunResult$run": "The run you wish to get results from.
", + "GetRunResult$run": "The run to get results from.
", "Runs$member": null, "ScheduleRunResult$run": "Information about the scheduled run.
", "StopRunResult$run": "The run that was stopped.
" @@ -1667,7 +1843,7 @@ "ScheduleRunConfiguration": { "base": "Represents the settings for a run. Includes things like location, radio states, auxiliary apps, and network profiles.
", "refs": { - "GetDevicePoolCompatibilityRequest$configuration": "An object containing information about the settings for a run.
", + "GetDevicePoolCompatibilityRequest$configuration": "An object that contains information about the settings for a run.
", "ScheduleRunRequest$configuration": "Information about the settings for the run to be scheduled.
" } }, @@ -1682,7 +1858,7 @@ } }, "ScheduleRunTest": { - "base": "Represents test settings. This data structure is passed in as the \"test\" parameter to ScheduleRun. For an example of the JSON request syntax, see ScheduleRun.
", + "base": "Represents test settings. This data structure is passed in as the test parameter to ScheduleRun. For an example of the JSON request syntax, see ScheduleRun.
", "refs": { "GetDevicePoolCompatibilityRequest$test": "Information about the uploaded test to be run against the device pool.
", "ScheduleRunRequest$test": "Information about the test for the run to be scheduled.
" @@ -1697,23 +1873,23 @@ "base": null, "refs": { "CreateVPCEConfigurationRequest$serviceDnsName": "The DNS name of the service running in your VPC that you want Device Farm to test.
", - "UpdateVPCEConfigurationRequest$serviceDnsName": "The DNS (domain) name used to connect to your private service in your Amazon VPC. The DNS name must not already be in use on the Internet.
", + "UpdateVPCEConfigurationRequest$serviceDnsName": "The DNS (domain) name used to connect to your private service in your VPC. The DNS name must not already be in use on the internet.
", "VPCEConfiguration$serviceDnsName": "The DNS name that maps to the private IP address of the service you want to access.
" } }, "SkipAppResign": { "base": null, "refs": { - "AccountSettings$skipAppResign": "When set to true
, for private devices, Device Farm will not sign your app again. For public devices, Device Farm always signs your apps again and this parameter has no effect.
For more information about how Device Farm re-signs your app(s), see Do you modify my app? in the AWS Device Farm FAQs.
", - "ExecutionConfiguration$skipAppResign": "When set to true
, for private devices, Device Farm will not sign your app again. For public devices, Device Farm always signs your apps again and this parameter has no effect.
For more information about how Device Farm re-signs your app(s), see Do you modify my app? in the AWS Device Farm FAQs.
", - "RemoteAccessSession$skipAppResign": "When set to true
, for private devices, Device Farm will not sign your app again. For public devices, Device Farm always signs your apps again and this parameter has no effect.
For more information about how Device Farm re-signs your app(s), see Do you modify my app? in the AWS Device Farm FAQs.
", - "Run$skipAppResign": "When set to true
, for private devices, Device Farm will not sign your app again. For public devices, Device Farm always signs your apps again and this parameter has no effect.
For more information about how Device Farm re-signs your app(s), see Do you modify my app? in the AWS Device Farm FAQs.
" + "AccountSettings$skipAppResign": "When set to true
, for private devices, Device Farm does not sign your app again. For public devices, Device Farm always signs your apps again.
For more information about how Device Farm re-signs your apps, see Do you modify my app? in the AWS Device Farm FAQs.
", + "ExecutionConfiguration$skipAppResign": "When set to true
, for private devices, Device Farm does not sign your app again. For public devices, Device Farm always signs your apps again.
For more information about how Device Farm re-signs your apps, see Do you modify my app? in the AWS Device Farm FAQs.
", + "RemoteAccessSession$skipAppResign": "When set to true
, for private devices, Device Farm does not sign your app again. For public devices, Device Farm always signs your apps again.
For more information about how Device Farm re-signs your apps, see Do you modify my app? in the AWS Device Farm FAQs.
", + "Run$skipAppResign": "When set to true
, for private devices, Device Farm does not sign your app again. For public devices, Device Farm always signs your apps again.
For more information about how Device Farm re-signs your apps, see Do you modify my app? in the AWS Device Farm FAQs.
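As a hedged illustration of the skipAppResign flag (not part of this diff; the v0.x request/Send pattern and the ARNs are assumptions):

package main

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

// scheduleRun opts out of re-signing; the setting only matters on private devices.
func scheduleRun(ctx context.Context, svc *devicefarm.Client) error {
    req := svc.ScheduleRunRequest(&devicefarm.ScheduleRunInput{
        ProjectArn:    aws.String("arn:aws:devicefarm:us-west-2:123456789012:project:EXAMPLE"),    // hypothetical
        DevicePoolArn: aws.String("arn:aws:devicefarm:us-west-2:123456789012:devicepool:EXAMPLE"), // hypothetical
        Test:          &devicefarm.ScheduleRunTest{Type: devicefarm.TestTypeBuiltinFuzz},
        ExecutionConfiguration: &devicefarm.ExecutionConfiguration{
            SkipAppResign: aws.Bool(true), // public devices are always re-signed regardless
        },
    })
    _, err := req.Send(ctx)
    return err
}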
" } }, "SshPublicKey": { "base": null, "refs": { - "CreateRemoteAccessSessionRequest$sshPublicKey": " Ignored. The public key of the ssh
key pair you want to use for connecting to remote devices in your remote debugging session. This is only required if remoteDebugEnabled
is set to true
.
Remote debugging is no longer supported.
" + "CreateRemoteAccessSessionRequest$sshPublicKey": "Ignored. The public key of the ssh
key pair you want to use for connecting to remote devices in your remote debugging session. This key is required only if remoteDebugEnabled
is set to true
.
Remote debugging is no longer supported.
" } }, "StopJobRequest": { @@ -1752,7 +1928,8 @@ "AndroidPaths$member": null, "Artifact$extension": "The artifact's file extension.
", "CPU$frequency": "The CPU's frequency.
", - "CPU$architecture": "The CPU's architecture, for example x86 or ARM.
", + "CPU$architecture": "The CPU's architecture (for example, x86 or ARM).
", + "CreateTestGridUrlResult$url": "A signed URL, expiring in CreateTestGridUrlRequest$expiresInSeconds seconds, to be passed to a RemoteWebDriver
.
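A short Go sketch of how the signed URL is obtained and handed to Selenium (a hedged example, not from this diff; the v0.x client and the field names generated from the model above are assumed, and the project ARN is hypothetical):

package main

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

// newGridURL returns a signed URL to use as the RemoteWebDriver endpoint before it expires.
func newGridURL(ctx context.Context, svc *devicefarm.Client) (string, error) {
    req := svc.CreateTestGridUrlRequest(&devicefarm.CreateTestGridUrlInput{
        ProjectArn:       aws.String("arn:aws:devicefarm:us-west-2:123456789012:testgrid-project:EXAMPLE"), // hypothetical
        ExpiresInSeconds: aws.Int64(300),
    })
    resp, err := req.Send(ctx)
    if err != nil {
        return "", err
    }
    return *resp.Url, nil
}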
", "Device$manufacturer": "The device's manufacturer name.
", "Device$model": "The device's model name.
", "Device$modelId": "The device's model ID.
", @@ -1760,7 +1937,7 @@ "Device$image": "The device's image name.
", "Device$carrier": "The device's carrier.
", "Device$radio": "The device's radio.
", - "Device$fleetType": "The type of fleet to which this device belongs. Possible values for fleet type are PRIVATE and PUBLIC.
", + "Device$fleetType": "The type of fleet to which this device belongs. Possible values are PRIVATE and PUBLIC.
", "Device$fleetName": "The name of the fleet to which this device belongs.
", "DeviceFilterValues$member": null, "DeviceHostPaths$member": null, @@ -1771,12 +1948,20 @@ "MaxSlotMap$key": null, "PackageIds$member": null, "RemoteAccessSession$endpoint": "The endpoint for the remote access sesssion.
", - "RemoteAccessSession$deviceUdid": "Unique device identifier for the remote device. Only returned if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
", + "RemoteAccessSession$deviceUdid": "Unique device identifier for the remote device. Only returned if remote debugging is enabled for the remote access session.
Remote debugging is no longer supported.
", "Rule$value": "The rule's value.
", - "Run$parsingResultUrl": "Read-only URL for an object in S3 bucket where you can get the parsing results of the test package. If the test package doesn't parse, the reason why it doesn't parse appears in the file that this URL points to.
", + "Run$parsingResultUrl": "Read-only URL for an object in an S3 bucket where you can get the parsing results of the test package. If the test package doesn't parse, the reason why it doesn't parse appears in the file that this URL points to.
", "Run$locale": "Information about the locale that is used for the run.
", "Run$webUrl": "The Device Farm console URL for the recording of the run.
", "ScheduleRunConfiguration$locale": "Information about the locale that is used for the run.
", + "TestGridProject$name": "A human-readable name for the project.
", + "TestGridProject$description": "A human-readable description for the project.
", + "TestGridSession$seleniumProperties": "A JSON object of options and parameters passed to the Selenium WebDriver.
", + "TestGridSessionAction$action": "The action taken by the session.
", + "TestGridSessionAction$statusCode": "HTTP status code returned to the browser when the action was taken.
", + "TestGridSessionAction$requestMethod": "HTTP method that the browser used to make the request.
", + "TestGridSessionArtifact$filename": "The file name of the artifact.
", + "TestGridSessionArtifact$url": "A semi-stable URL to the content of the object.
", "TestParameters$key": null, "TestParameters$value": null } @@ -1795,7 +1980,7 @@ } }, "Tag": { - "base": "The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
", + "base": "The metadata that you apply to a resource to help you categorize and organize it. Each tag consists of a key and an optional value, both of which you define. Tag keys can have a maximum character length of 128 characters. Tag values can have a maximum length of 256 characters.
", "refs": { "TagList$member": null } @@ -1803,7 +1988,7 @@ "TagKey": { "base": null, "refs": { - "Tag$Key": "One part of a key-value pair that make up a tag. A key
is a general label that acts like a category for more specific tag values.
", + "Tag$Key": "One part of a key-value pair that makes up a tag. A key
is a general label that acts like a category for more specific tag values.
", - "ListTagsForResourceResponse$Tags": "The tags to add to the resource. A tag is an array of key-value pairs. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
", - "TagResourceRequest$Tags": "The tags to add to the resource. A tag is an array of key-value pairs. Tag keys can have a maximum character length of 128 characters, and tag values can have a maximum length of 256 characters.
" + "ListTagsForResourceResponse$Tags": "The tags to add to the resource. A tag is an array of key-value pairs. Tag keys can have a maximum character length of 128 characters. Tag values can have a maximum length of 256 characters.
", + "TagResourceRequest$Tags": "The tags to add to the resource. A tag is an array of key-value pairs. Tag keys can have a maximum character length of 128 characters. Tag values can have a maximum length of 256 characters.
" } }, "TagOperationException": { @@ -1843,7 +2028,7 @@ "TagValue": { "base": null, "refs": { - "Tag$Value": "The optional part of a key-value pair that make up a tag. A value
acts as a descriptor within a tag category (key).
", + "Tag$Value": "The optional part of a key-value pair that makes up a tag. A value
acts as a descriptor in a tag category (key).
" } }, + "TestGridProject": { + "base": "A Selenium testing project. Projects are used to collect and collate sessions.
", + "refs": { + "CreateTestGridProjectResult$testGridProject": "ARN of the Selenium testing project that was created.
", + "GetTestGridProjectResult$testGridProject": "", + "TestGridProjects$member": null, + "UpdateTestGridProjectResult$testGridProject": "The project, including updated information.
" + } + }, + "TestGridProjects": { + "base": null, + "refs": { + "ListTestGridProjectsResult$testGridProjects": "The list of TestGridProjects, based on a ListTestGridProjectsRequest.
" + } + }, + "TestGridSession": { + "base": "A TestGridSession is a single instance of a browser launched from the URL provided by a call to CreateTestGridUrl.
", + "refs": { + "GetTestGridSessionResult$testGridSession": "The TestGridSession that was requested.
", + "TestGridSessions$member": null + } + }, + "TestGridSessionAction": { + "base": "An action taken by a TestGridSession browser instance.
", + "refs": { + "TestGridSessionActions$member": null + } + }, + "TestGridSessionActions": { + "base": null, + "refs": { + "ListTestGridSessionActionsResult$actions": "The action taken by the session.
" + } + }, + "TestGridSessionArtifact": { + "base": "Artifacts are video and other files that are produced in the process of running a browser in an automated context.
Video elements might be broken up into multiple artifacts as they grow in size during creation.
Limit results to a specified type of artifact.
" + } + }, + "TestGridSessionArtifactType": { + "base": null, + "refs": { + "TestGridSessionArtifact$type": "The kind of artifact.
" + } + }, + "TestGridSessionArtifacts": { + "base": null, + "refs": { + "ListTestGridSessionArtifactsResult$artifacts": "A list of test grid session artifacts for a TestGridSession.
" + } + }, + "TestGridSessionStatus": { + "base": null, + "refs": { + "ListTestGridSessionsRequest$status": "Return only sessions in this state.
", + "TestGridSession$status": "The state of the session.
" + } + }, + "TestGridSessions": { + "base": null, + "refs": { + "ListTestGridSessionsResult$testGridSessions": "The sessions that match the criteria in a ListTestGridSessionsRequest.
" + } + }, + "TestGridUrlExpiresInSecondsInput": { + "base": null, + "refs": { + "CreateTestGridUrlRequest$expiresInSeconds": "Lifetime, in seconds, of the URL.
" + } + }, "TestParameters": { "base": null, "refs": { - "ScheduleRunTest$parameters": "The test's parameters, such as test framework parameters and fixture settings. Parameters are represented by name-value pairs of strings.
For all tests:
app_performance_monitoring: Performance monitoring is enabled by default. Set this parameter to \"false\" to disable it.
For Calabash tests:
profile: A cucumber profile, for example, \"my_profile_name\".
tags: You can limit execution to features or scenarios that have (or don't have) certain tags, for example, \"@smoke\" or \"@smoke,~@wip\".
For Appium tests (all types):
appium_version: The Appium version. Currently supported values are \"1.6.5\" (and higher), \"latest\", and \"default\".
“latest” will run the latest Appium version supported by Device Farm (1.9.1).
For “default”, Device Farm will choose a compatible version of Appium for the device. The current behavior is to run 1.7.2 on Android devices and iOS 9 and earlier, 1.7.2 for iOS 10 and later.
This behavior is subject to change.
For Fuzz tests (Android only):
event_count: The number of events, between 1 and 10000, that the UI fuzz test should perform.
throttle: The time, in ms, between 0 and 1000, that the UI fuzz test should wait between events.
seed: A seed to use for randomizing the UI fuzz test. Using the same seed value between tests ensures identical event sequences.
For Explorer tests:
username: A username to use if the Explorer encounters a login form. If not supplied, no username will be inserted.
password: A password to use if the Explorer encounters a login form. If not supplied, no password will be inserted.
For Instrumentation:
filter: A test filter string. Examples:
Running a single test case: \"com.android.abc.Test1\"
Running a single test: \"com.android.abc.Test1#smoke\"
Running multiple tests: \"com.android.abc.Test1,com.android.abc.Test2\"
For XCTest and XCTestUI:
filter: A test filter string. Examples:
Running a single test class: \"LoginTests\"
Running a multiple test classes: \"LoginTests,SmokeTests\"
Running a single test: \"LoginTests/testValid\"
Running multiple tests: \"LoginTests/testValid,LoginTests/testInvalid\"
For UIAutomator:
filter: A test filter string. Examples:
Running a single test case: \"com.android.abc.Test1\"
Running a single test: \"com.android.abc.Test1#smoke\"
Running multiple tests: \"com.android.abc.Test1,com.android.abc.Test2\"
", + "ScheduleRunTest$parameters": "The test's parameters, such as test framework parameters and fixture settings. Parameters are represented by name-value pairs of strings.
For all tests:
app_performance_monitoring
: Performance monitoring is enabled by default. Set this parameter to false to disable it.
For Calabash tests:
profile: A cucumber profile (for example, my_profile_name
).
tags: You can limit execution to features or scenarios that have (or don't have) certain tags (for example, @smoke or @smoke,~@wip).
For Appium tests (all types):
appium_version: The Appium version. Currently supported values are 1.6.5 (and later), latest, and default.
latest runs the latest Appium version supported by Device Farm (1.9.1).
For default, Device Farm selects a compatible version of Appium for the device. The current behavior is to run 1.7.2 on Android devices and iOS 9 and earlier and 1.7.2 for iOS 10 and later.
This behavior is subject to change.
For fuzz tests (Android only):
event_count: The number of events, between 1 and 10000, that the UI fuzz test should perform.
throttle: The time, in ms, between 0 and 1000, that the UI fuzz test should wait between events.
seed: A seed to use for randomizing the UI fuzz test. Using the same seed value between tests ensures identical event sequences.
For Explorer tests:
username: A user name to use if the Explorer encounters a login form. If not supplied, no user name is inserted.
password: A password to use if the Explorer encounters a login form. If not supplied, no password is inserted.
For Instrumentation:
filter: A test filter string. Examples:
Running a single test case: com.android.abc.Test1
Running a single test: com.android.abc.Test1#smoke
Running multiple tests: com.android.abc.Test1,com.android.abc.Test2
For XCTest and XCTestUI:
filter: A test filter string. Examples:
Running a single test class: LoginTests
Running a multiple test classes: LoginTests,SmokeTests
Running a single test: LoginTests/testValid
Running multiple tests: LoginTests/testValid,LoginTests/testInvalid
For UIAutomator:
filter: A test filter string. Examples:
Running a single test case: com.android.abc.Test1
Running a single test: com.android.abc.Test1#smoke
Running multiple tests: com.android.abc.Test1,com.android.abc.Test2
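To make the parameter list above concrete, a hedged Go fragment building a ScheduleRunTest (the test package ARN is hypothetical; Parameters is a plain string map in the generated client):

package main

import (
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/devicefarm"
)

// exampleTest configures an Appium Java JUnit run using the documented keys.
var exampleTest = devicefarm.ScheduleRunTest{
    Type:           devicefarm.TestTypeAppiumJavaJunit,
    TestPackageArn: aws.String("arn:aws:devicefarm:us-west-2:123456789012:upload:EXAMPLE"), // hypothetical
    Parameters: map[string]string{
        "appium_version":             "latest", // newest Appium version Device Farm supports
        "app_performance_monitoring": "false",  // monitoring is on by default
    },
}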
" } }, "TestType": { "base": null, "refs": { - "GetDevicePoolCompatibilityRequest$testType": "The test type for the specified device pool.
Allowed values include the following:
BUILTIN_FUZZ: The built-in fuzz type.
BUILTIN_EXPLORER: For Android, an app explorer that will traverse an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
APPIUM_PYTHON: The Appium Python type.
APPIUM_NODE: The Appium Node.js type.
APPIUM_RUBY: The Appium Ruby type.
APPIUM_WEB_JAVA_JUNIT: The Appium Java JUnit type for web apps.
APPIUM_WEB_JAVA_TESTNG: The Appium Java TestNG type for web apps.
APPIUM_WEB_PYTHON: The Appium Python type for web apps.
APPIUM_WEB_NODE: The Appium Node.js type for web apps.
APPIUM_WEB_RUBY: The Appium Ruby type for web apps.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATION: The uiautomation type.
UIAUTOMATOR: The uiautomator type.
XCTEST: The Xcode test type.
XCTEST_UI: The Xcode UI test type.
", - "Job$type": "The job's type.
Allowed values include the following:
BUILTIN_FUZZ: The built-in fuzz type.
BUILTIN_EXPLORER: For Android, an app explorer that will traverse an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
APPIUM_PYTHON: The Appium Python type.
APPIUM_NODE: The Appium Node.js type.
APPIUM_RUBY: The Appium Ruby type.
APPIUM_WEB_JAVA_JUNIT: The Appium Java JUnit type for web apps.
APPIUM_WEB_JAVA_TESTNG: The Appium Java TestNG type for web apps.
APPIUM_WEB_PYTHON: The Appium Python type for web apps.
APPIUM_WEB_NODE: The Appium Node.js type for web apps.
APPIUM_WEB_RUBY: The Appium Ruby test type for web apps.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATION: The uiautomation type.
UIAUTOMATOR: The uiautomator type.
XCTEST: The Xcode test type.
XCTEST_UI: The Xcode UI test type.
", - "Run$type": "The run's type.
Must be one of the following values:
BUILTIN_FUZZ: The built-in fuzz type.
BUILTIN_EXPLORER: For Android, an app explorer that will traverse an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
APPIUM_PYTHON: The Appium Python type.
APPIUM_NODE: The Appium Node.js type.
APPIUM_RUBY: The Appium Ruby type.
APPIUM_WEB_JAVA_JUNIT: The Appium Java JUnit type for web apps.
APPIUM_WEB_JAVA_TESTNG: The Appium Java TestNG type for web apps.
APPIUM_WEB_PYTHON: The Appium Python type for web apps.
APPIUM_WEB_NODE: The Appium Node.js type for web apps.
APPIUM_WEB_RUBY: The Appium Ruby type for web apps.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATION: The uiautomation type.
UIAUTOMATOR: The uiautomator type.
XCTEST: The Xcode test type.
XCTEST_UI: The Xcode UI test type.
", - "ScheduleRunTest$type": "The test's type.
Must be one of the following values:
BUILTIN_FUZZ: The built-in fuzz type.
BUILTIN_EXPLORER: For Android, an app explorer that will traverse an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
APPIUM_PYTHON: The Appium Python type.
APPIUM_NODE: The Appium Node.js type.
APPIUM_RUBY: The Appium Ruby type.
APPIUM_WEB_JAVA_JUNIT: The Appium Java JUnit type for web apps.
APPIUM_WEB_JAVA_TESTNG: The Appium Java TestNG type for web apps.
APPIUM_WEB_PYTHON: The Appium Python type for web apps.
APPIUM_WEB_NODE: The Appium Node.js type for web apps.
APPIUM_WEB_RUBY: The Appium Ruby type for web apps.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATION: The uiautomation type.
UIAUTOMATOR: The uiautomator type.
XCTEST: The Xcode test type.
XCTEST_UI: The Xcode UI test type.
", - "Suite$type": "The suite's type.
Must be one of the following values:
BUILTIN_FUZZ: The built-in fuzz type.
BUILTIN_EXPLORER: For Android, an app explorer that will traverse an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
APPIUM_PYTHON: The Appium Python type.
APPIUM_NODE: The Appium Node.js type.
APPIUM_RUBY: The Appium Ruby type.
APPIUM_WEB_JAVA_JUNIT: The Appium Java JUnit type for web apps.
APPIUM_WEB_JAVA_TESTNG: The Appium Java TestNG type for web apps.
APPIUM_WEB_PYTHON: The Appium Python type for web apps.
APPIUM_WEB_NODE: The Appium Node.js type for web apps.
APPIUM_WEB_RUBY: The Appium Ruby type for web apps.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATION: The uiautomation type.
UIAUTOMATOR: The uiautomator type.
XCTEST: The Xcode test type.
XCTEST_UI: The Xcode UI test type.
", - "Test$type": "The test's type.
Must be one of the following values:
BUILTIN_FUZZ: The built-in fuzz type.
BUILTIN_EXPLORER: For Android, an app explorer that will traverse an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT: The Appium Java JUnit type.
APPIUM_JAVA_TESTNG: The Appium Java TestNG type.
APPIUM_PYTHON: The Appium Python type.
APPIUM_NODE: The Appium Node.js type.
APPIUM_RUBY: The Appium Ruby type.
APPIUM_WEB_JAVA_JUNIT: The Appium Java JUnit type for web apps.
APPIUM_WEB_JAVA_TESTNG: The Appium Java TestNG type for web apps.
APPIUM_WEB_PYTHON: The Appium Python type for web apps.
APPIUM_WEB_NODE: The Appium Node.js type for web apps.
APPIUM_WEB_RUBY: The Appium Ruby type for web apps.
CALABASH: The Calabash type.
INSTRUMENTATION: The Instrumentation type.
UIAUTOMATION: The uiautomation type.
UIAUTOMATOR: The uiautomator type.
XCTEST: The Xcode test type.
XCTEST_UI: The Xcode UI test type.
" + "GetDevicePoolCompatibilityRequest$testType": "The test type for the specified device pool.
Allowed values include the following:
BUILTIN_FUZZ.
BUILTIN_EXPLORER. For Android, an app explorer that traverses an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT.
APPIUM_JAVA_TESTNG.
APPIUM_PYTHON.
APPIUM_NODE.
APPIUM_RUBY.
APPIUM_WEB_JAVA_JUNIT.
APPIUM_WEB_JAVA_TESTNG.
APPIUM_WEB_PYTHON.
APPIUM_WEB_NODE.
APPIUM_WEB_RUBY.
CALABASH.
INSTRUMENTATION.
UIAUTOMATION.
UIAUTOMATOR.
XCTEST.
XCTEST_UI.
", + "Job$type": "The job's type.
Allowed values include the following:
BUILTIN_FUZZ
BUILTIN_EXPLORER. For Android, an app explorer that traverses an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT
APPIUM_JAVA_TESTNG
APPIUM_PYTHON
APPIUM_NODE
APPIUM_RUBY
APPIUM_WEB_JAVA_JUNIT
APPIUM_WEB_JAVA_TESTNG
APPIUM_WEB_PYTHON
APPIUM_WEB_NODE
APPIUM_WEB_RUBY
CALABASH
INSTRUMENTATION
UIAUTOMATION
UIAUTOMATOR
XCTEST
XCTEST_UI
", + "Run$type": "The run's type.
Must be one of the following values:
BUILTIN_FUZZ
BUILTIN_EXPLORER
For Android, an app explorer that traverses an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT
APPIUM_JAVA_TESTNG
APPIUM_PYTHON
APPIUM_NODE
APPIUM_RUBY
APPIUM_WEB_JAVA_JUNIT
APPIUM_WEB_JAVA_TESTNG
APPIUM_WEB_PYTHON
APPIUM_WEB_NODE
APPIUM_WEB_RUBY
CALABASH
INSTRUMENTATION
UIAUTOMATION
UIAUTOMATOR
XCTEST
XCTEST_UI
", + "ScheduleRunTest$type": "The test's type.
Must be one of the following values:
BUILTIN_FUZZ
BUILTIN_EXPLORER. For Android, an app explorer that traverses an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT
APPIUM_JAVA_TESTNG
APPIUM_PYTHON
APPIUM_NODE
APPIUM_RUBY
APPIUM_WEB_JAVA_JUNIT
APPIUM_WEB_JAVA_TESTNG
APPIUM_WEB_PYTHON
APPIUM_WEB_NODE
APPIUM_WEB_RUBY
CALABASH
INSTRUMENTATION
UIAUTOMATION
UIAUTOMATOR
XCTEST
XCTEST_UI
", + "Suite$type": "The suite's type.
Must be one of the following values:
BUILTIN_FUZZ
BUILTIN_EXPLORER
Only available for Android; an app explorer that traverses an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT
APPIUM_JAVA_TESTNG
APPIUM_PYTHON
APPIUM_NODE
APPIUM_RUBY
APPIUM_WEB_JAVA_JUNIT
APPIUM_WEB_JAVA_TESTNG
APPIUM_WEB_PYTHON
APPIUM_WEB_NODE
APPIUM_WEB_RUBY
CALABASH
INSTRUMENTATION
UIAUTOMATION
UIAUTOMATOR
XCTEST
XCTEST_UI
", + "Test$type": "The test's type.
Must be one of the following values:
BUILTIN_FUZZ
BUILTIN_EXPLORER
For Android, an app explorer that traverses an Android app, interacting with it and capturing screenshots at the same time.
APPIUM_JAVA_JUNIT
APPIUM_JAVA_TESTNG
APPIUM_PYTHON
APPIUM_NODE
APPIUM_RUBY
APPIUM_WEB_JAVA_JUNIT
APPIUM_WEB_JAVA_TESTNG
APPIUM_WEB_PYTHON
APPIUM_WEB_NODE
APPIUM_WEB_RUBY
CALABASH
INSTRUMENTATION
UIAUTOMATION
UIAUTOMATOR
XCTEST
XCTEST_UI
" } }, "URL": { "base": null, "refs": { - "Artifact$url": "The pre-signed Amazon S3 URL that can be used with a corresponding GET request to download the artifact's file.
", - "Sample$url": "The pre-signed Amazon S3 URL that can be used with a corresponding GET request to download the sample's file.
", - "Upload$url": "The pre-signed Amazon S3 URL that was used to store a file through a corresponding PUT request.
" + "Artifact$url": "The presigned Amazon S3 URL that can be used with a GET request to download the artifact's file.
", + "Sample$url": "The presigned Amazon S3 URL that can be used with a GET request to download the sample's file.
", + "Upload$url": "The presigned Amazon S3 URL that was used to store a file using a PUT request.
" } }, "UniqueProblem": { @@ -1916,7 +2178,7 @@ "UniqueProblemsByExecutionResultMap": { "base": null, "refs": { - "ListUniqueProblemsResult$uniqueProblems": "Information about the unique problems.
Allowed values include:
PENDING: A pending condition.
PASSED: A passing condition.
WARNED: A warning condition.
FAILED: A failed condition.
SKIPPED: A skipped condition.
ERRORED: An error condition.
STOPPED: A stopped condition.
", + "ListUniqueProblemsResult$uniqueProblems": "Information about the unique problems.
Allowed values include:
PENDING
PASSED
WARNED
FAILED
SKIPPED
ERRORED
STOPPED
" } }, "UploadStatus": { "base": null, "refs": { - "Upload$status": "The upload's status.
Must be one of the following values:
FAILED: A failed status.
INITIALIZED: An initialized status.
PROCESSING: A processing status.
SUCCEEDED: A succeeded status.
", + "Upload$status": "The upload's status.
Must be one of the following values:
FAILED
INITIALIZED
PROCESSING
SUCCEEDED
" } }, "UploadType": { "base": null, "refs": { - "CreateUploadRequest$type": "The upload's upload type.
Must be one of the following values:
ANDROID_APP: An Android upload.
IOS_APP: An iOS upload.
WEB_APP: A web application upload.
EXTERNAL_DATA: An external data upload.
APPIUM_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload.
APPIUM_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload.
APPIUM_PYTHON_TEST_PACKAGE: An Appium Python test package upload.
APPIUM_NODE_TEST_PACKAGE: An Appium Node.js test package upload.
APPIUM_RUBY_TEST_PACKAGE: An Appium Ruby test package upload.
APPIUM_WEB_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload for a web app.
APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload for a web app.
APPIUM_WEB_PYTHON_TEST_PACKAGE: An Appium Python test package upload for a web app.
APPIUM_WEB_NODE_TEST_PACKAGE: An Appium Node.js test package upload for a web app.
APPIUM_WEB_RUBY_TEST_PACKAGE: An Appium Ruby test package upload for a web app.
CALABASH_TEST_PACKAGE: A Calabash test package upload.
INSTRUMENTATION_TEST_PACKAGE: An instrumentation upload.
UIAUTOMATION_TEST_PACKAGE: A uiautomation test package upload.
UIAUTOMATOR_TEST_PACKAGE: A uiautomator test package upload.
XCTEST_TEST_PACKAGE: An Xcode test package upload.
XCTEST_UI_TEST_PACKAGE: An Xcode UI test package upload.
APPIUM_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload.
APPIUM_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload.
APPIUM_PYTHON_TEST_SPEC: An Appium Python test spec upload.
APPIUM_NODE_TEST_SPEC: An Appium Node.js test spec upload.
APPIUM_RUBY_TEST_SPEC: An Appium Ruby test spec upload.
APPIUM_WEB_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload for a web app.
APPIUM_WEB_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload for a web app.
APPIUM_WEB_PYTHON_TEST_SPEC: An Appium Python test spec upload for a web app.
APPIUM_WEB_NODE_TEST_SPEC: An Appium Node.js test spec upload for a web app.
APPIUM_WEB_RUBY_TEST_SPEC: An Appium Ruby test spec upload for a web app.
INSTRUMENTATION_TEST_SPEC: An instrumentation test spec upload.
XCTEST_UI_TEST_SPEC: An Xcode UI test spec upload.
Note: If you call CreateUpload
with WEB_APP
specified, AWS Device Farm throws an ArgumentException
error.
", - "ListUploadsRequest$type": "The type of upload.
Must be one of the following values:
ANDROID_APP: An Android upload.
IOS_APP: An iOS upload.
WEB_APP: A web application upload.
EXTERNAL_DATA: An external data upload.
APPIUM_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload.
APPIUM_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload.
APPIUM_PYTHON_TEST_PACKAGE: An Appium Python test package upload.
APPIUM_NODE_TEST_PACKAGE: An Appium Node.js test package upload.
APPIUM_RUBY_TEST_PACKAGE: An Appium Ruby test package upload.
APPIUM_WEB_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload for a web app.
APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload for a web app.
APPIUM_WEB_PYTHON_TEST_PACKAGE: An Appium Python test package upload for a web app.
APPIUM_WEB_NODE_TEST_PACKAGE: An Appium Node.js test package upload for a web app.
APPIUM_WEB_RUBY_TEST_PACKAGE: An Appium Ruby test package upload for a web app.
CALABASH_TEST_PACKAGE: A Calabash test package upload.
INSTRUMENTATION_TEST_PACKAGE: An instrumentation upload.
UIAUTOMATION_TEST_PACKAGE: A uiautomation test package upload.
UIAUTOMATOR_TEST_PACKAGE: A uiautomator test package upload.
XCTEST_TEST_PACKAGE: An Xcode test package upload.
XCTEST_UI_TEST_PACKAGE: An Xcode UI test package upload.
APPIUM_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload.
APPIUM_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload.
APPIUM_PYTHON_TEST_SPEC: An Appium Python test spec upload.
APPIUM_NODE_TEST_SPEC: An Appium Node.js test spec upload.
APPIUM_RUBY_TEST_SPEC: An Appium Ruby test spec upload.
APPIUM_WEB_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload for a web app.
APPIUM_WEB_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload for a web app.
APPIUM_WEB_PYTHON_TEST_SPEC: An Appium Python test spec upload for a web app.
APPIUM_WEB_NODE_TEST_SPEC: An Appium Node.js test spec upload for a web app.
APPIUM_WEB_RUBY_TEST_SPEC: An Appium Ruby test spec upload for a web app.
INSTRUMENTATION_TEST_SPEC: An instrumentation test spec upload.
XCTEST_UI_TEST_SPEC: An Xcode UI test spec upload.
", - "Upload$type": "The upload's type.
Must be one of the following values:
ANDROID_APP: An Android upload.
IOS_APP: An iOS upload.
WEB_APP: A web application upload.
EXTERNAL_DATA: An external data upload.
APPIUM_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload.
APPIUM_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload.
APPIUM_PYTHON_TEST_PACKAGE: An Appium Python test package upload.
APPIUM_NODE_TEST_PACKAGE: An Appium Node.js test package upload.
APPIUM_RUBY_TEST_PACKAGE: An Appium Ruby test package upload.
APPIUM_WEB_JAVA_JUNIT_TEST_PACKAGE: An Appium Java JUnit test package upload for web apps.
APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE: An Appium Java TestNG test package upload for web apps.
APPIUM_WEB_PYTHON_TEST_PACKAGE: An Appium Python test package upload for web apps.
APPIUM_WEB_NODE_TEST_PACKAGE: An Appium Node.js test package upload for web apps.
APPIUM_WEB_RUBY_TEST_PACKAGE: An Appium Ruby test package upload for web apps.
CALABASH_TEST_PACKAGE: A Calabash test package upload.
INSTRUMENTATION_TEST_PACKAGE: An instrumentation upload.
UIAUTOMATION_TEST_PACKAGE: A uiautomation test package upload.
UIAUTOMATOR_TEST_PACKAGE: A uiautomator test package upload.
XCTEST_TEST_PACKAGE: An Xcode test package upload.
XCTEST_UI_TEST_PACKAGE: An Xcode UI test package upload.
APPIUM_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload.
APPIUM_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload.
APPIUM_PYTHON_TEST_SPEC: An Appium Python test spec upload.
APPIUM_NODE_TEST_SPEC: An Appium Node.js test spec upload.
APPIUM_RUBY_TEST_SPEC: An Appium Ruby test spec upload.
APPIUM_WEB_JAVA_JUNIT_TEST_SPEC: An Appium Java JUnit test spec upload for a web app.
APPIUM_WEB_JAVA_TESTNG_TEST_SPEC: An Appium Java TestNG test spec upload for a web app.
APPIUM_WEB_PYTHON_TEST_SPEC: An Appium Python test spec upload for a web app.
APPIUM_WEB_NODE_TEST_SPEC: An Appium Node.js test spec upload for a web app.
APPIUM_WEB_RUBY_TEST_SPEC: An Appium Ruby test spec upload for a web app.
INSTRUMENTATION_TEST_SPEC: An instrumentation test spec upload.
XCTEST_UI_TEST_SPEC: An Xcode UI test spec upload.
" + "CreateUploadRequest$type": "The upload's upload type.
Must be one of the following values:
ANDROID_APP
IOS_APP
WEB_APP
EXTERNAL_DATA
APPIUM_JAVA_JUNIT_TEST_PACKAGE
APPIUM_JAVA_TESTNG_TEST_PACKAGE
APPIUM_PYTHON_TEST_PACKAGE
APPIUM_NODE_TEST_PACKAGE
APPIUM_RUBY_TEST_PACKAGE
APPIUM_WEB_JAVA_JUNIT_TEST_PACKAGE
APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE
APPIUM_WEB_PYTHON_TEST_PACKAGE
APPIUM_WEB_NODE_TEST_PACKAGE
APPIUM_WEB_RUBY_TEST_PACKAGE
CALABASH_TEST_PACKAGE
INSTRUMENTATION_TEST_PACKAGE
UIAUTOMATION_TEST_PACKAGE
UIAUTOMATOR_TEST_PACKAGE
XCTEST_TEST_PACKAGE
XCTEST_UI_TEST_PACKAGE
APPIUM_JAVA_JUNIT_TEST_SPEC
APPIUM_JAVA_TESTNG_TEST_SPEC
APPIUM_PYTHON_TEST_SPEC
APPIUM_NODE_TEST_SPEC
APPIUM_RUBY_TEST_SPEC
APPIUM_WEB_JAVA_JUNIT_TEST_SPEC
APPIUM_WEB_JAVA_TESTNG_TEST_SPEC
APPIUM_WEB_PYTHON_TEST_SPEC
APPIUM_WEB_NODE_TEST_SPEC
APPIUM_WEB_RUBY_TEST_SPEC
INSTRUMENTATION_TEST_SPEC
XCTEST_UI_TEST_SPEC
If you call CreateUpload
with WEB_APP
specified, AWS Device Farm throws an ArgumentException
error.
", + "ListUploadsRequest$type": "The type of upload.
Must be one of the following values:
ANDROID_APP
IOS_APP
WEB_APP
EXTERNAL_DATA
APPIUM_JAVA_JUNIT_TEST_PACKAGE
APPIUM_JAVA_TESTNG_TEST_PACKAGE
APPIUM_PYTHON_TEST_PACKAGE
APPIUM_NODE_TEST_PACKAGE
APPIUM_RUBY_TEST_PACKAGE
APPIUM_WEB_JAVA_JUNIT_TEST_PACKAGE
APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE
APPIUM_WEB_PYTHON_TEST_PACKAGE
APPIUM_WEB_NODE_TEST_PACKAGE
APPIUM_WEB_RUBY_TEST_PACKAGE
CALABASH_TEST_PACKAGE
INSTRUMENTATION_TEST_PACKAGE
UIAUTOMATION_TEST_PACKAGE
UIAUTOMATOR_TEST_PACKAGE
XCTEST_TEST_PACKAGE
XCTEST_UI_TEST_PACKAGE
APPIUM_JAVA_JUNIT_TEST_SPEC
APPIUM_JAVA_TESTNG_TEST_SPEC
APPIUM_PYTHON_TEST_SPEC
APPIUM_NODE_TEST_SPEC
APPIUM_RUBY_TEST_SPEC
APPIUM_WEB_JAVA_JUNIT_TEST_SPEC
APPIUM_WEB_JAVA_TESTNG_TEST_SPEC
APPIUM_WEB_PYTHON_TEST_SPEC
APPIUM_WEB_NODE_TEST_SPEC
APPIUM_WEB_RUBY_TEST_SPEC
INSTRUMENTATION_TEST_SPEC
XCTEST_UI_TEST_SPEC
", + "Upload$type": "The upload's type.
Must be one of the following values:
ANDROID_APP
IOS_APP
WEB_APP
EXTERNAL_DATA
APPIUM_JAVA_JUNIT_TEST_PACKAGE
APPIUM_JAVA_TESTNG_TEST_PACKAGE
APPIUM_PYTHON_TEST_PACKAGE
APPIUM_NODE_TEST_PACKAGE
APPIUM_RUBY_TEST_PACKAGE
APPIUM_WEB_JAVA_JUNIT_TEST_PACKAGE
APPIUM_WEB_JAVA_TESTNG_TEST_PACKAGE
APPIUM_WEB_PYTHON_TEST_PACKAGE
APPIUM_WEB_NODE_TEST_PACKAGE
APPIUM_WEB_RUBY_TEST_PACKAGE
CALABASH_TEST_PACKAGE
INSTRUMENTATION_TEST_PACKAGE
UIAUTOMATION_TEST_PACKAGE
UIAUTOMATOR_TEST_PACKAGE
XCTEST_TEST_PACKAGE
XCTEST_UI_TEST_PACKAGE
APPIUM_JAVA_JUNIT_TEST_SPEC
APPIUM_JAVA_TESTNG_TEST_SPEC
APPIUM_PYTHON_TEST_SPEC
APPIUM_NODE_TEST_SPEC
APPIUM_RUBY_TEST_SPEC
APPIUM_WEB_JAVA_JUNIT_TEST_SPEC
APPIUM_WEB_JAVA_TESTNG_TEST_SPEC
APPIUM_WEB_PYTHON_TEST_SPEC
APPIUM_WEB_NODE_TEST_SPEC
APPIUM_WEB_RUBY_TEST_SPEC
INSTRUMENTATION_TEST_SPEC
XCTEST_UI_TEST_SPEC
" } }, "VPCEConfiguration": { "base": "Represents an Amazon Virtual Private Cloud (VPC) endpoint configuration.
", "refs": { - "CreateVPCEConfigurationResult$vpceConfiguration": "An object containing information about your VPC endpoint configuration.
", - "GetVPCEConfigurationResult$vpceConfiguration": "An object containing information about your VPC endpoint configuration.
", - "UpdateVPCEConfigurationResult$vpceConfiguration": "An object containing information about your VPC endpoint configuration.
", + "CreateVPCEConfigurationResult$vpceConfiguration": "An object that contains information about your VPC endpoint configuration.
", + "GetVPCEConfigurationResult$vpceConfiguration": "An object that contains information about your VPC endpoint configuration.
", + "UpdateVPCEConfigurationResult$vpceConfiguration": "An object that contains information about your VPC endpoint configuration.
", "VPCEConfigurations$member": null } }, "VPCEConfigurationDescription": { "base": null, "refs": { - "CreateVPCEConfigurationRequest$vpceConfigurationDescription": "An optional description, providing more details about your VPC endpoint configuration.
", - "UpdateVPCEConfigurationRequest$vpceConfigurationDescription": "An optional description, providing more details about your VPC endpoint configuration.
", - "VPCEConfiguration$vpceConfigurationDescription": "An optional description, providing more details about your VPC endpoint configuration.
" + "CreateVPCEConfigurationRequest$vpceConfigurationDescription": "An optional description that provides details about your VPC endpoint configuration.
", + "UpdateVPCEConfigurationRequest$vpceConfigurationDescription": "An optional description that provides details about your VPC endpoint configuration.
", + "VPCEConfiguration$vpceConfigurationDescription": "An optional description that provides details about your VPC endpoint configuration.
" } }, "VPCEConfigurationName": { "base": null, "refs": { "CreateVPCEConfigurationRequest$vpceConfigurationName": "The friendly name you give to your VPC endpoint configuration, to manage your configurations more easily.
", - "UpdateVPCEConfigurationRequest$vpceConfigurationName": "The friendly name you give to your VPC endpoint configuration, to manage your configurations more easily.
", - "VPCEConfiguration$vpceConfigurationName": "The friendly name you give to your VPC endpoint configuration, to manage your configurations more easily.
" + "UpdateVPCEConfigurationRequest$vpceConfigurationName": "The friendly name you give to your VPC endpoint configuration to manage your configurations more easily.
", + "VPCEConfiguration$vpceConfigurationName": "The friendly name you give to your VPC endpoint configuration to manage your configurations more easily.
" } }, "VPCEConfigurations": { "base": null, "refs": { - "ListVPCEConfigurationsResult$vpceConfigurations": "An array of VPCEConfiguration
objects containing information about your VPC endpoint configuration.
An array of VPCEConfiguration
objects that contain information about your VPC endpoint configuration.
The name of the VPC endpoint service running inside your AWS account that you want Device Farm to test.
", - "UpdateVPCEConfigurationRequest$vpceServiceName": "The name of the VPC endpoint service running inside your AWS account that you want Device Farm to test.
", - "VPCEConfiguration$vpceServiceName": "The name of the VPC endpoint service running inside your AWS account that you want Device Farm to test.
" + "CreateVPCEConfigurationRequest$vpceServiceName": "The name of the VPC endpoint service running in your AWS account that you want Device Farm to test.
", + "UpdateVPCEConfigurationRequest$vpceServiceName": "The name of the VPC endpoint service running in your AWS account that you want Device Farm to test.
", + "VPCEConfiguration$vpceServiceName": "The name of the VPC endpoint service running in your AWS account that you want Device Farm to test.
" } }, "VideoCapture": { "base": null, "refs": { - "ExecutionConfiguration$videoCapture": "Set to true to enable video capture; otherwise, set to false. The default is true.
", - "Job$videoCapture": "This value is set to true if video capture is enabled; otherwise, it is set to false.
" + "ExecutionConfiguration$videoCapture": "Set to true to enable video capture. Otherwise, set to false. The default is true.
", + "Job$videoCapture": "This value is set to true if video capture is enabled. Otherwise, it is set to false.
" } } } diff --git a/models/apis/devicefarm/2015-06-23/paginators-1.json b/models/apis/devicefarm/2015-06-23/paginators-1.json index 7f9e88c8ff1..92e2263422e 100644 --- a/models/apis/devicefarm/2015-06-23/paginators-1.json +++ b/models/apis/devicefarm/2015-06-23/paginators-1.json @@ -58,6 +58,26 @@ "output_token": "nextToken", "result_key": "suites" }, + "ListTestGridProjects": { + "input_token": "nextToken", + "limit_key": "maxResult", + "output_token": "nextToken" + }, + "ListTestGridSessionActions": { + "input_token": "nextToken", + "limit_key": "maxResult", + "output_token": "nextToken" + }, + "ListTestGridSessionArtifacts": { + "input_token": "nextToken", + "limit_key": "maxResult", + "output_token": "nextToken" + }, + "ListTestGridSessions": { + "input_token": "nextToken", + "limit_key": "maxResult", + "output_token": "nextToken" + }, "ListTests": { "input_token": "nextToken", "output_token": "nextToken", diff --git a/models/apis/discovery/2015-11-01/docs-2.json b/models/apis/discovery/2015-11-01/docs-2.json index aef504ac6e5..fef3fab89fd 100644 --- a/models/apis/discovery/2015-11-01/docs-2.json +++ b/models/apis/discovery/2015-11-01/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "AWS Application Discovery Service helps you plan application migration projects by automatically identifying servers, virtual machines (VMs), software, and software dependencies running in your on-premises data centers. Application Discovery Service also collects application performance data, which can help you assess the outcome of your migration. The data collected by Application Discovery Service is securely retained in an AWS-hosted and managed database in the cloud. You can export the data as a CSV or XML file into your preferred visualization tool or cloud-migration solution to plan your migration. For more information, see AWS Application Discovery Service FAQ.
Application Discovery Service offers two modes of operation:
Agentless discovery mode is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment. Agentless discovery doesn't collect information about software and software dependencies. It also doesn't work in non-VMware environments.
Agent-based discovery mode collects a richer set of data than agentless discovery by using the AWS Application Discovery Agent, which you install on one or more hosts in your data center. The agent captures infrastructure and application information, including an inventory of installed software applications, system and process performance, resource utilization, and network dependencies between workloads. The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud.
We recommend that you use agent-based discovery for non-VMware environments and to collect information about software and software dependencies. You can also run agent-based and agentless discovery simultaneously. Use agentless discovery to quickly complete the initial infrastructure assessment and then install agents on select hosts.
Application Discovery Service integrates with application discovery solutions from AWS Partner Network (APN) partners. Third-party application discovery tools can query Application Discovery Service and write to the Application Discovery Service database using a public API. You can then import the data into either a visualization tool or cloud-migration solution.
Application Discovery Service doesn't gather sensitive information. All data is handled according to the AWS Privacy Policy. You can operate Application Discovery Service offline to inspect collected data before it is shared with the service.
This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs.
This guide is intended for use with the AWS Application Discovery Service User Guide .
Remember that you must set your AWS Migration Hub home region before you call any of these APIs, or a HomeRegionNotSetException
error will be returned. Also, you must make the API calls while in your home region.
", + "service": "AWS Application Discovery Service helps you plan application migration projects. It automatically identifies servers, virtual machines (VMs), and network dependencies in your on-premises data centers. For more information, see the AWS Application Discovery Service FAQ. Application Discovery Service offers three ways of performing discovery and collecting data about your on-premises servers:
Agentless discovery is recommended for environments that use VMware vCenter Server. This mode doesn't require you to install an agent on each host. It does not work in non-VMware environments.
Agentless discovery gathers server information regardless of the operating systems, which minimizes the time required for initial on-premises infrastructure assessment.
Agentless discovery doesn't collect information about network dependencies, only agent-based discovery collects that information.
Agent-based discovery collects a richer set of data than agentless discovery by using the AWS Application Discovery Agent, which you install on one or more hosts in your data center.
The agent captures infrastructure and application information, including an inventory of running processes, system performance information, resource utilization, and network dependencies.
The information collected by agents is secured at rest and in transit to the Application Discovery Service database in the cloud.
AWS Partner Network (APN) solutions integrate with Application Discovery Service, enabling you to import details of your on-premises environment directly into Migration Hub without using the discovery connector or discovery agent.
Third-party application discovery tools can query AWS Application Discovery Service, and they can write to the Application Discovery Service database using the public API.
In this way, you can import data into Migration Hub and view it, so that you can associate applications with servers and track migrations.
Recommendations
We recommend that you use agent-based discovery for non-VMware environments, and whenever you want to collect information about network dependencies. You can run agent-based and agentless discovery simultaneously. Use agentless discovery to complete the initial infrastructure assessment quickly, and then install agents on select hosts to collect additional information.
Working With This Guide
This API reference provides descriptions, syntax, and usage examples for each of the actions and data types for Application Discovery Service. The topic for each action shows the API request parameters and the response. Alternatively, you can use one of the AWS SDKs to access an API that is tailored to the programming language or platform that you're using. For more information, see AWS SDKs.
Remember that you must set your Migration Hub home region before you call any of these APIs.
You must make API calls for write actions (create, notify, associate, disassociate, import, or put) while in your home region, or a HomeRegionNotSetException
error is returned.
API calls for read actions (list, describe, stop, and delete) are permitted outside of your home region.
Although it is unlikely, the Migration Hub home region could change. If you call APIs outside the home region, an InvalidInputException
is returned.
You must call GetHomeRegion
to obtain the latest Migration Hub home region.
This guide is intended for use with the AWS Application Discovery Service User Guide.
All data is handled according to the AWS Privacy Policy. You can operate Application Discovery Service offline to inspect collected data before it is shared with the service.
Associates one or more configuration items with an application.
", "BatchDeleteImportData": "Deletes one or more import tasks, each identified by their import ID. Each import task has a number of records that can identify servers or applications.
AWS Application Discovery Service has built-in matching logic that identifies when discovered servers match existing entries that you've previously discovered; in that case, the information for the already-existing discovered server is updated. When you delete an import task that contains records that were used to match, the information in those matched records that comes from the deleted records is also deleted.
", @@ -9,7 +9,7 @@ "DeleteApplications": "Deletes a list of applications and their associations with configuration items.
", "DeleteTags": "Deletes the association between configuration items and one or more tags. This API accepts a list of multiple configuration items.
", "DescribeAgents": "Lists agents or connectors as specified by ID or other filters. All agents/connectors associated with your user account can be listed if you call DescribeAgents
as is without passing any parameters.
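A minimal parameterless call, sketched in Go under the same v0.x assumptions as the other examples here:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws/external"
    "github.com/aws/aws-sdk-go-v2/service/discovery"
)

func main() {
    cfg, err := external.LoadDefaultAWSConfig()
    if err != nil {
        panic(err)
    }
    svc := discovery.New(cfg)
    // No filters: returns every agent/connector associated with the account.
    resp, err := svc.DescribeAgentsRequest(&discovery.DescribeAgentsInput{}).Send(context.TODO())
    if err != nil {
        panic(err)
    }
    fmt.Println(len(resp.AgentsInfo), "agents/connectors found")
}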
Retrieves attributes for a list of configuration item IDs.
All of the supplied IDs must be for the same asset type from one of the following:
server
application
process
connection
Output fields are specific to the asset type specified. For example, the output for a server configuration item includes a list of attributes about the server, such as host name, operating system, number of network cards, etc.
For a complete list of outputs for each asset type, see Using the DescribeConfigurations Action.
", + "DescribeConfigurations": "Retrieves attributes for a list of configuration item IDs.
All of the supplied IDs must be for the same asset type from one of the following:
server
application
process
connection
Output fields are specific to the asset type specified. For example, the output for a server configuration item includes a list of attributes about the server, such as host name, operating system, number of network cards, etc.
For a complete list of outputs for each asset type, see Using the DescribeConfigurations Action in the AWS Application Discovery Service User Guide.
Lists exports as specified by ID. All continuous exports associated with your user account can be listed if you call DescribeContinuousExports
as is without passing any parameters.
DescribeExportConfigurations
is deprecated. Use DescribeImportTasks instead.
Retrieve status of one or more export tasks. You can retrieve the status of up to 100 export tasks.
", @@ -18,12 +18,12 @@ "DisassociateConfigurationItemsFromApplication": "Disassociates one or more configuration items from an application.
", "ExportConfigurations": "Deprecated. Use StartExportTask
instead.
Exports all discovered configuration data to an Amazon S3 bucket or an application that enables you to view and evaluate the data. Data includes tags and tag associations, processes, connections, servers, and system performance. This API returns an export ID that you can query using the DescribeExportConfigurations API. The system imposes a limit of two configuration exports in six hours.
", "GetDiscoverySummary": "Retrieves a short summary of discovered assets.
This API operation takes no request parameters and is called as is at the command prompt as shown in the example.
", - "ListConfigurations": "Retrieves a list of configuration items as specified by the value passed to the required paramater configurationType
. Optional filtering may be applied to refine search results.
Retrieves a list of configuration items as specified by the value passed to the required parameter configurationType
. Optional filtering may be applied to refine search results.
Retrieves a list of servers that are one network hop away from a specified server.
", "StartContinuousExport": "Start the continuous flow of agent's discovered data into Amazon Athena.
", "StartDataCollectionByAgentIds": "Instructs the specified agents or connectors to start collecting data.
", "StartExportTask": "Begins the export of discovered data to an S3 bucket.
If you specify agentIds
in a filter, the task exports up to 72 hours of detailed data collected by the identified Application Discovery Agent, including network, process, and performance details. A time range for exported agent data may be set by using startTime
and endTime
. Export of detailed agent data is limited to five concurrently running exports.
If you do not include an agentIds
filter, summary data is exported that includes both AWS Agentless Discovery Connector data and summary data from AWS Discovery Agents. Export of summary data is limited to two exports per day.
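A hedged Go sketch of the agentIds-filtered export described above (the condition string and agent ID are illustrative, not taken from this diff):

package main

import (
    "context"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/discovery"
)

// exportAgentData requests up to the last 72 hours of detailed data for one agent.
func exportAgentData(ctx context.Context, svc *discovery.Client) error {
    req := svc.StartExportTaskRequest(&discovery.StartExportTaskInput{
        ExportDataFormat: []discovery.ExportDataFormat{discovery.ExportDataFormatCsv},
        Filters: []discovery.ExportFilter{{
            Name:      aws.String("agentIds"),
            Values:    []string{"agent-id-0123456789"}, // hypothetical agent ID
            Condition: aws.String("EQUALS"),
        }},
        StartTime: aws.Time(time.Now().Add(-72 * time.Hour)),
        EndTime:   aws.Time(time.Now()),
    })
    _, err := req.Send(ctx)
    return err
}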
Starts an import task, which allows you to import details of your on-premises environment directly into AWS without having to use the Application Discovery Service (ADS) tools such as the Discovery Connector or Discovery Agent. This gives you the option to perform migration assessment and planning directly from your imported data, including the ability to group your devices as applications and track their migration status.
To start an import request, do this:
Download the specially formatted comma separated value (CSV) import template, which you can find here: https://s3-us-west-2.amazonaws.com/templates-7cffcf56-bd96-4b1c-b45b-a5b42f282e46/import_template.csv.
Fill out the template with your server and application data.
Upload your import file to an Amazon S3 bucket, and make a note of its Object URL. Your import file must be in the CSV format.
Use the console or the StartImportTask
command with the AWS CLI or one of the AWS SDKs to import the records from your file.
For more information, including step-by-step procedures, see Migration Hub Import in the AWS Application Discovery Service User Guide.
There are limits to the number of import tasks you can create (and delete) in an AWS account. For more information, see AWS Application Discovery Service Limits in the AWS Application Discovery Service User Guide.
" + "StartImportTask": "Starts an import task, which allows you to import details of your on-premises environment directly into AWS Migration Hub without having to use the Application Discovery Service (ADS) tools such as the Discovery Connector or Discovery Agent. This gives you the option to perform migration assessment and planning directly from your imported data, including the ability to group your devices as applications and track their migration status.
To start an import request, do this:
Download the specially formatted comma separated value (CSV) import template, which you can find here: https://s3-us-west-2.amazonaws.com/templates-7cffcf56-bd96-4b1c-b45b-a5b42f282e46/import_template.csv.
Fill out the template with your server and application data.
Upload your import file to an Amazon S3 bucket, and make a note of its Object URL. Your import file must be in the CSV format.
Use the console or the StartImportTask
command with the AWS CLI or one of the AWS SDKs to import the records from your file.
For more information, including step-by-step procedures, see Migration Hub Import in the AWS Application Discovery Service User Guide.
There are limits to the number of import tasks you can create (and delete) in an AWS account. For more information, see AWS Application Discovery Service Limits in the AWS Application Discovery Service User Guide.
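The steps above reduce to one API call once the CSV is in S3; a hedged Go sketch (the bucket URL and task name are hypothetical):

package main

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/discovery"
)

// startImport points Migration Hub import at the CSV uploaded in the steps above.
func startImport(ctx context.Context, svc *discovery.Client) error {
    req := svc.StartImportTaskRequest(&discovery.StartImportTaskInput{
        Name:      aws.String("my-first-import"),
        ImportUrl: aws.String("https://my-bucket.s3-us-west-2.amazonaws.com/import_template.csv"), // the noted Object URL
    })
    _, err := req.Send(ctx)
    return err
}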
Stop the continuous flow of agent's discovered data into Amazon Athena.
", "StopDataCollectionByAgentIds": "Instructs the specified agents or connectors to stop collecting data.
", "UpdateApplication": "Updates metadata about an application.
" @@ -523,7 +523,7 @@ } }, "Filter": { - "base": "A filter that can use conditional operators.
For more information about filters, see Querying Discovered Configuration Items.
", + "base": "A filter that can use conditional operators.
For more information about filters, see Querying Discovered Configuration Items in the AWS Application Discovery Service User Guide.
", "refs": { "Filters$member": null } @@ -553,7 +553,7 @@ "base": null, "refs": { "DescribeAgentsRequest$filters": "You can filter the request using various logical operators and a key-value format. For example:
{\"key\": \"collectionStatus\", \"value\": \"STARTED\"}
", - "ListConfigurationsRequest$filters": "You can filter the request using various logical operators and a key-value format. For example:
{\"key\": \"serverType\", \"value\": \"webServer\"}
For a complete list of filter options and guidance about using them with this action, see Querying Discovered Configuration Items.
" + "ListConfigurationsRequest$filters": "You can filter the request using various logical operators and a key-value format. For example:
{\"key\": \"serverType\", \"value\": \"webServer\"}
For a complete list of filter options and guidance about using them with this action, see Using the ListConfigurations Action in the AWS Application Discovery Service User Guide.
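The key-value filter format above maps onto the generated Filter type roughly as follows (a sketch only; the name and value mirror the documentation's example):

package main

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/discovery"
)

// listWebServers lists server configuration items filtered to web servers.
func listWebServers(ctx context.Context, svc *discovery.Client) error {
    req := svc.ListConfigurationsRequest(&discovery.ListConfigurationsInput{
        ConfigurationType: discovery.ConfigurationItemTypeServer,
        Filters: []discovery.Filter{{
            Name:      aws.String("serverType"), // the documented example key
            Values:    []string{"webServer"},
            Condition: aws.String("EQUALS"),
        }},
    })
    _, err := req.Send(ctx)
    return err
}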
" } }, "GetDiscoverySummaryRequest": { @@ -765,7 +765,7 @@ "OrderByList": { "base": null, "refs": { - "ListConfigurationsRequest$orderBy": "Certain filter criteria return output that can be sorted in ascending or descending order. For a list of output characteristics for each filter, see Using the ListConfigurations Action.
" + "ListConfigurationsRequest$orderBy": "Certain filter criteria return output that can be sorted in ascending or descending order. For a list of output characteristics for each filter, see Using the ListConfigurations Action in the AWS Application Discovery Service User Guide.
" } }, "ResourceInUseException": { diff --git a/models/apis/discovery/2015-11-01/smoke.json b/models/apis/discovery/2015-11-01/smoke.json index f93d4959864..626b803f575 100644 --- a/models/apis/discovery/2015-11-01/smoke.json +++ b/models/apis/discovery/2015-11-01/smoke.json @@ -1,11 +1,11 @@ { - "version": 1, - "defaultRegion": "us-west-2", - "testCases": [ - { - "operationName": "DescribeAgents", - "input": {}, - "errorExpectedFromService": false - } - ] + "version": 1, + "defaultRegion": "us-west-2", + "testCases": [ + { + "operationName": "DescribeAgents", + "input": {}, + "errorExpectedFromService": false + } + ] } diff --git a/models/apis/dlm/2018-01-12/api-2.json b/models/apis/dlm/2018-01-12/api-2.json index c02a383ef53..5cd16f677d5 100644 --- a/models/apis/dlm/2018-01-12/api-2.json +++ b/models/apis/dlm/2018-01-12/api-2.json @@ -141,7 +141,14 @@ "max":10, "min":1 }, + "CmkArn":{ + "type":"string", + "max":2048, + "min":0, + "pattern":"arn:aws(-[a-z]{1,3}){0,2}:kms:([a-z]+-){2,3}\\d:\\d+:key/.*" + }, "CopyTags":{"type":"boolean"}, + "CopyTagsNullable":{"type":"boolean"}, "Count":{ "type":"integer", "max":1000, @@ -181,6 +188,33 @@ "Times":{"shape":"TimesList"} } }, + "CrossRegionCopyRetainRule":{ + "type":"structure", + "members":{ + "Interval":{"shape":"Interval"}, + "IntervalUnit":{"shape":"RetentionIntervalUnitValues"} + } + }, + "CrossRegionCopyRule":{ + "type":"structure", + "required":[ + "TargetRegion", + "Encrypted" + ], + "members":{ + "TargetRegion":{"shape":"TargetRegion"}, + "Encrypted":{"shape":"Encrypted"}, + "CmkArn":{"shape":"CmkArn"}, + "CopyTags":{"shape":"CopyTagsNullable"}, + "RetainRule":{"shape":"CrossRegionCopyRetainRule"} + } + }, + "CrossRegionCopyRules":{ + "type":"list", + "member":{"shape":"CrossRegionCopyRule"}, + "max":3, + "min":0 + }, "DeleteLifecyclePolicyRequest":{ "type":"structure", "required":["PolicyId"], @@ -197,6 +231,7 @@ "members":{ } }, + "Encrypted":{"type":"boolean"}, "ErrorCode":{"type":"string"}, "ErrorMessage":{"type":"string"}, "ExcludeBootVolume":{"type":"boolean"}, @@ -373,6 +408,8 @@ }, "PolicyArn":{ "type":"string", + "max":2048, + "min":0, "pattern":"^arn:aws:dlm:[A-Za-z0-9_/.-]{0,63}:\\d+:policy/[0-9A-Za-z_-]{1,128}$" }, "PolicyDescription":{ @@ -455,7 +492,8 @@ "VariableTags":{"shape":"VariableTagsList"}, "CreateRule":{"shape":"CreateRule"}, "RetainRule":{"shape":"RetainRule"}, - "FastRestoreRule":{"shape":"FastRestoreRule"} + "FastRestoreRule":{"shape":"FastRestoreRule"}, + "CrossRegionCopyRules":{"shape":"CrossRegionCopyRules"} } }, "ScheduleList":{ @@ -547,7 +585,8 @@ }, "TagValue":{ "type":"string", - "max":256 + "max":256, + "pattern":"[\\p{all}]*" }, "TagsToAddFilterList":{ "type":"list", @@ -561,6 +600,12 @@ "max":50, "min":0 }, + "TargetRegion":{ + "type":"string", + "max":16, + "min":0, + "pattern":"([a-z]+-){2,3}\\d" + }, "TargetTagList":{ "type":"list", "member":{"shape":"Tag"}, diff --git a/models/apis/dlm/2018-01-12/docs-2.json b/models/apis/dlm/2018-01-12/docs-2.json index ed472a975cc..5c349600135 100644 --- a/models/apis/dlm/2018-01-12/docs-2.json +++ b/models/apis/dlm/2018-01-12/docs-2.json @@ -24,12 +24,24 @@ "FastRestoreRule$AvailabilityZones": "The Availability Zones in which to enable fast snapshot restore.
" } }, + "CmkArn": { + "base": null, + "refs": { + "CrossRegionCopyRule$CmkArn": "The Amazon Resource Name (ARN) of the AWS KMS customer master key (CMK) to use for EBS encryption. If this parameter is not specified, your AWS managed CMK for EBS is used.
" + } + }, "CopyTags": { "base": null, "refs": { "Schedule$CopyTags": "Copy all user-defined tags on a source volume to snapshots of the volume created by this policy.
" } }, + "CopyTagsNullable": { + "base": null, + "refs": { + "CrossRegionCopyRule$CopyTags": "Copy all user-defined tags from the source snapshot to the copied snapshot.
" + } + }, "Count": { "base": null, "refs": { @@ -50,7 +62,25 @@ "CreateRule": { "base": "Specifies when to create snapshots of EBS volumes.
", "refs": { - "Schedule$CreateRule": "The create rule.
" + "Schedule$CreateRule": "The creation rule.
" + } + }, + "CrossRegionCopyRetainRule": { + "base": "Specifies the retention rule for cross-Region snapshot copies.
", + "refs": { + "CrossRegionCopyRule$RetainRule": "The retention rule.
" + } + }, + "CrossRegionCopyRule": { + "base": "Specifies a rule for cross-Region snapshot copies.
", + "refs": { + "CrossRegionCopyRules$member": null + } + }, + "CrossRegionCopyRules": { + "base": null, + "refs": { + "Schedule$CrossRegionCopyRules": "The rule for cross-Region snapshot copies.
" } }, "DeleteLifecyclePolicyRequest": { @@ -63,6 +93,12 @@ "refs": { } }, + "Encrypted": { + "base": null, + "refs": { + "CrossRegionCopyRule$Encrypted": "To encrypt a copy of an unencrypted snapshot if encryption by default is not enabled, enable encryption using this parameter. Copies of encrypted snapshots are encrypted, even if this parameter is false or if encryption by default is not enabled.
" + } + }, "ErrorCode": { "base": null, "refs": { @@ -98,7 +134,7 @@ "FastRestoreRule": { "base": "Specifies a rule for enabling fast snapshot restore. You can enable fast snapshot restore based on either a count or a time interval.
", "refs": { - "Schedule$FastRestoreRule": "Enable fast snapshot restore.
" + "Schedule$FastRestoreRule": "The rule for enabling fast snapshot restore.
" } }, "GetLifecyclePoliciesRequest": { @@ -138,6 +174,7 @@ "base": null, "refs": { "CreateRule$Interval": "The interval between snapshots. The supported values are 2, 3, 4, 6, 8, 12, and 24.
", + "CrossRegionCopyRetainRule$Interval": "The amount of time to retain each snapshot. The maximum is 100 years. This is equivalent to 1200 months, 5200 weeks, or 36500 days.
", "FastRestoreRule$Interval": "The amount of time to enable fast snapshot restore. The maximum is 100 years. This is equivalent to 1200 months, 5200 weeks, or 36500 days.
", "RetainRule$Interval": "The amount of time to retain each snapshot. The maximum is 100 years. This is equivalent to 1200 months, 5200 weeks, or 36500 days.
" } @@ -283,6 +320,7 @@ "RetentionIntervalUnitValues": { "base": null, "refs": { + "CrossRegionCopyRetainRule$IntervalUnit": "The unit of time for time-based retention.
", "FastRestoreRule$IntervalUnit": "The unit of time for enabling fast snapshot restore.
", "RetainRule$IntervalUnit": "The unit of time for time-based retention.
" } @@ -393,6 +431,12 @@ "Schedule$TagsToAdd": "The tags to apply to policy-created resources. These user-defined tags are in addition to the AWS-added lifecycle tags.
" } }, + "TargetRegion": { + "base": null, + "refs": { + "CrossRegionCopyRule$TargetRegion": "The target Region.
" + } + }, "TargetTagList": { "base": null, "refs": { diff --git a/models/apis/ds/2015-04-16/api-2.json b/models/apis/ds/2015-04-16/api-2.json index dcdc4e9e9c9..6c1d4387909 100644 --- a/models/apis/ds/2015-04-16/api-2.json +++ b/models/apis/ds/2015-04-16/api-2.json @@ -1826,7 +1826,10 @@ }, "DisableLDAPSRequest":{ "type":"structure", - "required":["DirectoryId"], + "required":[ + "DirectoryId", + "Type" + ], "members":{ "DirectoryId":{"shape":"DirectoryId"}, "Type":{"shape":"LDAPSType"} @@ -1917,7 +1920,10 @@ }, "EnableLDAPSRequest":{ "type":"structure", - "required":["DirectoryId"], + "required":[ + "DirectoryId", + "Type" + ], "members":{ "DirectoryId":{"shape":"DirectoryId"}, "Type":{"shape":"LDAPSType"} diff --git a/models/apis/ec2/2016-11-15/api-2.json b/models/apis/ec2/2016-11-15/api-2.json index 27f0d299cb5..624999148bb 100755 --- a/models/apis/ec2/2016-11-15/api-2.json +++ b/models/apis/ec2/2016-11-15/api-2.json @@ -1691,6 +1691,15 @@ "input":{"shape":"DescribeInternetGatewaysRequest"}, "output":{"shape":"DescribeInternetGatewaysResult"} }, + "DescribeIpv6Pools":{ + "name":"DescribeIpv6Pools", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeIpv6PoolsRequest"}, + "output":{"shape":"DescribeIpv6PoolsResult"} + }, "DescribeKeyPairs":{ "name":"DescribeKeyPairs", "http":{ @@ -2547,6 +2556,15 @@ "input":{"shape":"ExportTransitGatewayRoutesRequest"}, "output":{"shape":"ExportTransitGatewayRoutesResult"} }, + "GetAssociatedIpv6PoolCidrs":{ + "name":"GetAssociatedIpv6PoolCidrs", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetAssociatedIpv6PoolCidrsRequest"}, + "output":{"shape":"GetAssociatedIpv6PoolCidrsResult"} + }, "GetCapacityReservationUsage":{ "name":"GetCapacityReservationUsage", "http":{ @@ -3435,6 +3453,15 @@ "input":{"shape":"StartInstancesRequest"}, "output":{"shape":"StartInstancesResult"} }, + "StartVpcEndpointServicePrivateDnsVerification":{ + "name":"StartVpcEndpointServicePrivateDnsVerification", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartVpcEndpointServicePrivateDnsVerificationRequest"}, + "output":{"shape":"StartVpcEndpointServicePrivateDnsVerificationResult"} + }, "StopInstances":{ "name":"StopInstances", "http":{ @@ -4141,7 +4168,7 @@ "VpcId" ], "members":{ - "DhcpOptionsId":{"shape":"DhcpOptionsId"}, + "DhcpOptionsId":{"shape":"DefaultingDhcpOptionsId"}, "VpcId":{"shape":"VpcId"}, "DryRun":{ "shape":"Boolean", @@ -4213,7 +4240,7 @@ "locationName":"ipv6CidrBlock" }, "SubnetId":{ - "shape":"String", + "shape":"SubnetId", "locationName":"subnetId" } } @@ -4283,6 +4310,8 @@ "shape":"VpcId", "locationName":"vpcId" }, + "Ipv6Pool":{"shape":"String"}, + "Ipv6CidrBlock":{"shape":"String"}, "Ipv6CidrBlockNetworkBorderGroup":{"shape":"String"} } }, @@ -4489,7 +4518,6 @@ } } }, - "AttachmentId":{"type":"string"}, "AttachmentStatus":{ "type":"string", "enum":[ @@ -4826,6 +4854,7 @@ } }, "Boolean":{"type":"boolean"}, + "BundleId":{"type":"string"}, "BundleIdStringList":{ "type":"list", "member":{ @@ -4964,7 +4993,8 @@ "failed-provision", "pending-deprovision", "pending-provision", - "provisioned" + "provisioned", + "provisioned-not-publicly-advertisable" ] }, "CancelBatchErrorCode":{ @@ -4980,7 +5010,7 @@ "type":"structure", "required":["BundleId"], "members":{ - "BundleId":{"shape":"String"}, + "BundleId":{"shape":"BundleId"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -5330,6 +5360,21 @@ "Linux with SQL Server Enterprise" ] }, + "CapacityReservationOptions":{ + 
"type":"structure", + "members":{ + "UsageStrategy":{ + "shape":"FleetCapacityReservationUsageStrategy", + "locationName":"usageStrategy" + } + } + }, + "CapacityReservationOptionsRequest":{ + "type":"structure", + "members":{ + "UsageStrategy":{"shape":"FleetCapacityReservationUsageStrategy"} + } + }, "CapacityReservationPreference":{ "type":"string", "enum":[ @@ -5740,6 +5785,10 @@ "shape":"TransportProtocol", "locationName":"transportProtocol" }, + "VpnPort":{ + "shape":"Integer", + "locationName":"vpnPort" + }, "AssociatedTargetNetworks":{ "shape":"AssociatedTargetNetworkSet", "deprecated":true, @@ -6113,7 +6162,7 @@ "locationName":"encrypted" }, "KmsKeyId":{ - "shape":"String", + "shape":"KmsKeyId", "locationName":"kmsKeyId" }, "Name":{"shape":"String"}, @@ -6266,6 +6315,7 @@ "ConnectionLogOptions":{"shape":"ConnectionLogOptions"}, "DnsServers":{"shape":"ValueStringList"}, "TransportProtocol":{"shape":"TransportProtocol"}, + "VpnPort":{"shape":"Integer"}, "Description":{"shape":"String"}, "SplitTunnel":{"shape":"Boolean"}, "DryRun":{"shape":"Boolean"}, @@ -6306,7 +6356,7 @@ "members":{ "ClientVpnEndpointId":{"shape":"ClientVpnEndpointId"}, "DestinationCidrBlock":{"shape":"String"}, - "TargetVpcSubnetId":{"shape":"String"}, + "TargetVpcSubnetId":{"shape":"SubnetId"}, "Description":{"shape":"String"}, "ClientToken":{ "shape":"String", @@ -6622,7 +6672,7 @@ "locationName":"dryRun" }, "InstanceId":{ - "shape":"String", + "shape":"InstanceId", "locationName":"instanceId" }, "Name":{ @@ -7212,7 +7262,7 @@ "CidrBlock":{"shape":"String"}, "Ipv6CidrBlock":{"shape":"String"}, "OutpostArn":{"shape":"String"}, - "VpcId":{"shape":"String"}, + "VpcId":{"shape":"VpcId"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -7659,6 +7709,7 @@ "members":{ "DryRun":{"shape":"Boolean"}, "AcceptanceRequired":{"shape":"Boolean"}, + "PrivateDnsName":{"shape":"String"}, "NetworkLoadBalancerArns":{ "shape":"ValueStringList", "locationName":"NetworkLoadBalancerArn" @@ -7719,6 +7770,8 @@ "shape":"Boolean", "locationName":"amazonProvidedIpv6CidrBlock" }, + "Ipv6Pool":{"shape":"String"}, + "Ipv6CidrBlock":{"shape":"String"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -7894,6 +7947,7 @@ }, "DateTime":{"type":"timestamp"}, "DedicatedHostFlag":{"type":"boolean"}, + "DedicatedHostId":{"type":"string"}, "DefaultRouteTableAssociationValue":{ "type":"string", "enum":[ @@ -7915,6 +7969,7 @@ "on-demand" ] }, + "DefaultingDhcpOptionsId":{"type":"string"}, "DeleteClientVpnEndpointRequest":{ "type":"structure", "required":["ClientVpnEndpointId"], @@ -7940,7 +7995,7 @@ ], "members":{ "ClientVpnEndpointId":{"shape":"ClientVpnEndpointId"}, - "TargetVpcSubnetId":{"shape":"String"}, + "TargetVpcSubnetId":{"shape":"SubnetId"}, "DestinationCidrBlock":{"shape":"String"}, "DryRun":{"shape":"Boolean"} } @@ -8142,7 +8197,7 @@ "type":"structure", "required":["KeyName"], "members":{ - "KeyName":{"shape":"String"}, + "KeyName":{"shape":"KeyPairName"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -8509,7 +8564,7 @@ "type":"structure", "required":["SubnetId"], "members":{ - "SubnetId":{"shape":"String"}, + "SubnetId":{"shape":"SubnetId"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -8867,7 +8922,7 @@ "type":"structure", "required":["ImageId"], "members":{ - "ImageId":{"shape":"String"}, + "ImageId":{"shape":"ImageId"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -9440,7 +9495,11 @@ "locationName":"EgressOnlyInternetGatewayId" }, 
"MaxResults":{"shape":"DescribeEgressOnlyInternetGatewaysMaxResults"}, - "NextToken":{"shape":"String"} + "NextToken":{"shape":"String"}, + "Filters":{ + "shape":"FilterList", + "locationName":"Filter" + } } }, "DescribeEgressOnlyInternetGatewaysResult":{ @@ -9534,6 +9593,10 @@ "ExportTaskIds":{ "shape":"ExportTaskIdStringList", "locationName":"exportTaskId" + }, + "Filters":{ + "shape":"FilterList", + "locationName":"Filter" } } }, @@ -10355,6 +10418,35 @@ } } }, + "DescribeIpv6PoolsRequest":{ + "type":"structure", + "members":{ + "PoolIds":{ + "shape":"ValueStringList", + "locationName":"PoolId" + }, + "NextToken":{"shape":"NextToken"}, + "MaxResults":{"shape":"Ipv6PoolMaxResults"}, + "DryRun":{"shape":"Boolean"}, + "Filters":{ + "shape":"FilterList", + "locationName":"Filter" + } + } + }, + "DescribeIpv6PoolsResult":{ + "type":"structure", + "members":{ + "Ipv6Pools":{ + "shape":"Ipv6PoolSet", + "locationName":"ipv6PoolSet" + }, + "NextToken":{ + "shape":"NextToken", + "locationName":"nextToken" + } + } + }, "DescribeKeyPairsRequest":{ "type":"structure", "members":{ @@ -10366,6 +10458,10 @@ "shape":"KeyNameStringList", "locationName":"KeyName" }, + "KeyPairIds":{ + "shape":"KeyPairIdStringList", + "locationName":"KeyPairId" + }, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -10863,6 +10959,10 @@ "GroupNames":{ "shape":"PlacementGroupStringList", "locationName":"groupName" + }, + "GroupIds":{ + "shape":"PlacementGroupIdStringList", + "locationName":"GroupId" } } }, @@ -12530,7 +12630,7 @@ "required":["AttachmentId"], "members":{ "AttachmentId":{ - "shape":"AttachmentId", + "shape":"NetworkInterfaceAttachmentId", "locationName":"attachmentId" }, "DryRun":{ @@ -12885,7 +12985,7 @@ "DisassociateAddressRequest":{ "type":"structure", "members":{ - "AssociationId":{"shape":"String"}, + "AssociationId":{"shape":"ElasticIpAssociationId"}, "PublicIp":{"shape":"String"}, "DryRun":{ "shape":"Boolean", @@ -12953,7 +13053,7 @@ "required":["AssociationId"], "members":{ "AssociationId":{ - "shape":"String", + "shape":"SubnetCidrAssociationId", "locationName":"associationId" } } @@ -13166,6 +13266,14 @@ "locationName":"item" } }, + "DnsNameState":{ + "type":"string", + "enum":[ + "pendingVerification", + "verified", + "failed" + ] + }, "DnsServersOptionsModifyStructure":{ "type":"structure", "members":{ @@ -13211,11 +13319,11 @@ "shape":"VolumeType", "locationName":"volumeType" }, + "KmsKeyId":{"shape":"String"}, "Encrypted":{ "shape":"Boolean", "locationName":"encrypted" - }, - "KmsKeyId":{"shape":"String"} + } } }, "EbsEncryptionSupport":{ @@ -13267,7 +13375,7 @@ "locationName":"deleteOnTermination" }, "VolumeId":{ - "shape":"String", + "shape":"VolumeId", "locationName":"volumeId" } } @@ -13290,6 +13398,10 @@ "EgressOnlyInternetGatewayId":{ "shape":"EgressOnlyInternetGatewayId", "locationName":"egressOnlyInternetGatewayId" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -13433,6 +13545,10 @@ "InstanceId":{ "shape":"String", "locationName":"instanceId" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -13483,6 +13599,7 @@ "locationName":"item" } }, + "ElasticIpAssociationId":{"type":"string"}, "EnaSupport":{ "type":"string", "enum":[ @@ -13974,6 +14091,10 @@ "StatusMessage":{ "shape":"String", "locationName":"statusMessage" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -14145,6 +14266,10 @@ "fulfilled" ] }, + "FleetCapacityReservationUsageStrategy":{ + "type":"string", + "enum":["use-capacity-reservations-first"] + 
}, "FleetData":{ "type":"structure", "members":{ @@ -14668,6 +14793,29 @@ "type":"string", "enum":["ipsec.1"] }, + "GetAssociatedIpv6PoolCidrsRequest":{ + "type":"structure", + "required":["PoolId"], + "members":{ + "PoolId":{"shape":"String"}, + "NextToken":{"shape":"NextToken"}, + "MaxResults":{"shape":"Ipv6PoolMaxResults"}, + "DryRun":{"shape":"Boolean"} + } + }, + "GetAssociatedIpv6PoolCidrsResult":{ + "type":"structure", + "members":{ + "Ipv6CidrAssociations":{ + "shape":"Ipv6CidrAssociationSet", + "locationName":"ipv6CidrAssociationSet" + }, + "NextToken":{ + "shape":"String", + "locationName":"nextToken" + } + } + }, "GetCapacityReservationUsageRequest":{ "type":"structure", "required":["CapacityReservationId"], @@ -15324,7 +15472,6 @@ } } }, - "HostId":{"type":"string"}, "HostInstance":{ "type":"structure", "members":{ @@ -15924,7 +16071,7 @@ "DryRun":{"shape":"Boolean"}, "Encrypted":{"shape":"Boolean"}, "Hypervisor":{"shape":"String"}, - "KmsKeyId":{"shape":"String"}, + "KmsKeyId":{"shape":"KmsKeyId"}, "LicenseType":{"shape":"String"}, "Platform":{"shape":"String"}, "RoleName":{"shape":"String"}, @@ -16047,6 +16194,10 @@ "shape":"String", "locationName":"statusMessage" }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" + }, "LicenseSpecifications":{ "shape":"ImportImageLicenseSpecificationListResponse", "locationName":"licenseSpecifications" @@ -16217,7 +16368,7 @@ "locationName":"dryRun" }, "KeyName":{ - "shape":"String", + "shape":"KeyPairName", "locationName":"keyName" }, "PublicKeyMaterial":{ @@ -16283,6 +16434,10 @@ "SnapshotTaskDetail":{ "shape":"SnapshotTaskDetail", "locationName":"snapshotTaskDetail" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -17679,9 +17834,9 @@ "shape":"UsageClassTypeList", "locationName":"supportedUsageClasses" }, - "SupportedRootDevices":{ + "SupportedRootDeviceTypes":{ "shape":"RootDeviceTypeList", - "locationName":"supportedRootDevices" + "locationName":"supportedRootDeviceTypes" }, "BareMetal":{ "shape":"BareMetalFlag", @@ -17940,6 +18095,26 @@ "locationName":"item" } }, + "Ipv6CidrAssociation":{ + "type":"structure", + "members":{ + "Ipv6Cidr":{ + "shape":"String", + "locationName":"ipv6Cidr" + }, + "AssociatedResource":{ + "shape":"String", + "locationName":"associatedResource" + } + } + }, + "Ipv6CidrAssociationSet":{ + "type":"list", + "member":{ + "shape":"Ipv6CidrAssociation", + "locationName":"item" + } + }, "Ipv6CidrBlock":{ "type":"structure", "members":{ @@ -17957,6 +18132,39 @@ } }, "Ipv6Flag":{"type":"boolean"}, + "Ipv6Pool":{ + "type":"structure", + "members":{ + "PoolId":{ + "shape":"String", + "locationName":"poolId" + }, + "Description":{ + "shape":"String", + "locationName":"description" + }, + "PoolCidrBlocks":{ + "shape":"PoolCidrBlocksSet", + "locationName":"poolCidrBlockSet" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" + } + } + }, + "Ipv6PoolMaxResults":{ + "type":"integer", + "max":1000, + "min":1 + }, + "Ipv6PoolSet":{ + "type":"list", + "member":{ + "shape":"Ipv6Pool", + "locationName":"item" + } + }, "Ipv6Range":{ "type":"structure", "members":{ @@ -17984,6 +18192,7 @@ "disable" ] }, + "KernelId":{"type":"string"}, "KeyNameStringList":{ "type":"list", "member":{ @@ -18005,12 +18214,27 @@ "KeyName":{ "shape":"String", "locationName":"keyName" + }, + "KeyPairId":{ + "shape":"String", + "locationName":"keyPairId" } } }, + "KeyPairIdStringList":{ + "type":"list", + "member":{ + "shape":"String", + "locationName":"KeyPairId" + } + }, "KeyPairInfo":{ "type":"structure", 
"members":{ + "KeyPairId":{ + "shape":"String", + "locationName":"keyPairId" + }, "KeyFingerprint":{ "shape":"String", "locationName":"keyFingerprint" @@ -18018,6 +18242,10 @@ "KeyName":{ "shape":"String", "locationName":"keyName" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -18030,6 +18258,19 @@ }, "KeyPairName":{"type":"string"}, "KmsKeyId":{"type":"string"}, + "LastError":{ + "type":"structure", + "members":{ + "Message":{ + "shape":"String", + "locationName":"message" + }, + "Code":{ + "shape":"String", + "locationName":"code" + } + } + }, "LaunchPermission":{ "type":"structure", "members":{ @@ -18389,6 +18630,13 @@ "Configured":{"shape":"Boolean"} } }, + "LaunchTemplateHttpTokensState":{ + "type":"string", + "enum":[ + "optional", + "required" + ] + }, "LaunchTemplateIamInstanceProfileSpecification":{ "type":"structure", "members":{ @@ -18430,6 +18678,49 @@ "SpotOptions":{"shape":"LaunchTemplateSpotMarketOptionsRequest"} } }, + "LaunchTemplateInstanceMetadataEndpointState":{ + "type":"string", + "enum":[ + "disabled", + "enabled" + ] + }, + "LaunchTemplateInstanceMetadataOptions":{ + "type":"structure", + "members":{ + "State":{ + "shape":"LaunchTemplateInstanceMetadataOptionsState", + "locationName":"state" + }, + "HttpTokens":{ + "shape":"LaunchTemplateHttpTokensState", + "locationName":"httpTokens" + }, + "HttpPutResponseHopLimit":{ + "shape":"Integer", + "locationName":"httpPutResponseHopLimit" + }, + "HttpEndpoint":{ + "shape":"LaunchTemplateInstanceMetadataEndpointState", + "locationName":"httpEndpoint" + } + } + }, + "LaunchTemplateInstanceMetadataOptionsRequest":{ + "type":"structure", + "members":{ + "HttpTokens":{"shape":"LaunchTemplateHttpTokensState"}, + "HttpPutResponseHopLimit":{"shape":"Integer"}, + "HttpEndpoint":{"shape":"LaunchTemplateInstanceMetadataEndpointState"} + } + }, + "LaunchTemplateInstanceMetadataOptionsState":{ + "type":"string", + "enum":[ + "pending", + "applied" + ] + }, "LaunchTemplateInstanceNetworkInterfaceSpecification":{ "type":"structure", "members":{ @@ -18630,6 +18921,10 @@ "HostResourceGroupArn":{ "shape":"String", "locationName":"hostResourceGroupArn" + }, + "PartitionNumber":{ + "shape":"Integer", + "locationName":"partitionNumber" } } }, @@ -18642,7 +18937,8 @@ "HostId":{"shape":"String"}, "Tenancy":{"shape":"Tenancy"}, "SpreadDomain":{"shape":"String"}, - "HostResourceGroupArn":{"shape":"String"} + "HostResourceGroupArn":{"shape":"String"}, + "PartitionNumber":{"shape":"Integer"} } }, "LaunchTemplateSet":{ @@ -18910,6 +19206,10 @@ "State":{ "shape":"String", "locationName":"state" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -18985,6 +19285,10 @@ "State":{ "shape":"String", "locationName":"state" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -19024,6 +19328,10 @@ "State":{ "shape":"String", "locationName":"state" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -19063,6 +19371,10 @@ "State":{ "shape":"String", "locationName":"state" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -19124,6 +19436,10 @@ "PeerBgpAsn":{ "shape":"Integer", "locationName":"peerBgpAsn" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -19141,6 +19457,10 @@ "LocalGatewayId":{ "shape":"String", "locationName":"localGatewayId" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, @@ -19243,6 +19563,7 @@ "ServerCertificateArn":{"shape":"String"}, 
"ConnectionLogOptions":{"shape":"ConnectionLogOptions"}, "DnsServers":{"shape":"DnsServersOptionsModifyStructure"}, + "VpnPort":{"shape":"Integer"}, "Description":{"shape":"String"}, "SplitTunnel":{"shape":"Boolean"}, "DryRun":{"shape":"Boolean"} @@ -19420,7 +19741,7 @@ "members":{ "Attribute":{"shape":"String"}, "Description":{"shape":"AttributeValue"}, - "ImageId":{"shape":"String"}, + "ImageId":{"shape":"ImageId"}, "LaunchPermission":{"shape":"LaunchPermissionModifications"}, "OperationType":{"shape":"OperationType"}, "ProductCodes":{ @@ -19612,7 +19933,7 @@ }, "GroupName":{"shape":"PlacementGroupName"}, "HostId":{ - "shape":"HostId", + "shape":"DedicatedHostId", "locationName":"hostId" }, "InstanceId":{ @@ -19775,7 +20096,7 @@ "AssignIpv6AddressOnCreation":{"shape":"AttributeBooleanValue"}, "MapPublicIpOnLaunch":{"shape":"AttributeBooleanValue"}, "SubnetId":{ - "shape":"String", + "shape":"SubnetId", "locationName":"subnetId" } } @@ -20002,6 +20323,8 @@ "members":{ "DryRun":{"shape":"Boolean"}, "ServiceId":{"shape":"ServiceId"}, + "PrivateDnsName":{"shape":"String"}, + "RemovePrivateDnsName":{"shape":"Boolean"}, "AcceptanceRequired":{"shape":"Boolean"}, "AddNetworkLoadBalancerArns":{ "shape":"ValueStringList", @@ -20686,6 +21009,7 @@ } } }, + "NetworkInterfaceAttachmentId":{"type":"string"}, "NetworkInterfaceAttribute":{ "type":"string", "enum":[ @@ -20907,6 +21231,10 @@ "shape":"FleetOnDemandAllocationStrategy", "locationName":"allocationStrategy" }, + "CapacityReservationOptions":{ + "shape":"CapacityReservationOptions", + "locationName":"capacityReservationOptions" + }, "SingleInstanceType":{ "shape":"Boolean", "locationName":"singleInstanceType" @@ -20929,6 +21257,7 @@ "type":"structure", "members":{ "AllocationStrategy":{"shape":"FleetOnDemandAllocationStrategy"}, + "CapacityReservationOptions":{"shape":"CapacityReservationOptionsRequest"}, "SingleInstanceType":{"shape":"Boolean"}, "SingleAvailabilityZone":{"shape":"Boolean"}, "MinTargetCapacity":{"shape":"Integer"}, @@ -21254,9 +21583,24 @@ "PartitionCount":{ "shape":"Integer", "locationName":"partitionCount" + }, + "GroupId":{ + "shape":"String", + "locationName":"groupId" + }, + "Tags":{ + "shape":"TagList", + "locationName":"tagSet" } } }, + "PlacementGroupIdStringList":{ + "type":"list", + "member":{ + "shape":"String", + "locationName":"GroupId" + } + }, "PlacementGroupInfo":{ "type":"structure", "members":{ @@ -21323,6 +21667,22 @@ "type":"string", "enum":["Windows"] }, + "PoolCidrBlock":{ + "type":"structure", + "members":{ + "Cidr":{ + "shape":"String", + "locationName":"poolCidrBlock" + } + } + }, + "PoolCidrBlocksSet":{ + "type":"list", + "member":{ + "shape":"PoolCidrBlock", + "locationName":"item" + } + }, "PoolMaxResults":{ "type":"integer", "max":10, @@ -21495,6 +21855,27 @@ "Role" ] }, + "PrivateDnsNameConfiguration":{ + "type":"structure", + "members":{ + "State":{ + "shape":"DnsNameState", + "locationName":"state" + }, + "Type":{ + "shape":"String", + "locationName":"type" + }, + "Value":{ + "shape":"String", + "locationName":"value" + }, + "Name":{ + "shape":"String", + "locationName":"name" + } + } + }, "PrivateIpAddressConfigSet":{ "type":"list", "member":{ @@ -21603,6 +21984,7 @@ "members":{ "Cidr":{"shape":"String"}, "CidrAuthorizationContext":{"shape":"CidrAuthorizationContext"}, + "PubliclyAdvertisable":{"shape":"Boolean"}, "Description":{"shape":"String"}, "DryRun":{"shape":"Boolean"} } @@ -21879,6 +22261,7 @@ "Windows (Amazon VPC)" ] }, + "RamdiskId":{"type":"string"}, "ReasonCodesList":{ "type":"list", 
"member":{ @@ -21981,7 +22364,7 @@ "locationName":"enaSupport" }, "KernelId":{ - "shape":"String", + "shape":"KernelId", "locationName":"kernelId" }, "Name":{ @@ -21993,7 +22376,7 @@ "locationName":"BillingProduct" }, "RamdiskId":{ - "shape":"String", + "shape":"RamdiskId", "locationName":"ramdiskId" }, "RootDeviceName":{ @@ -22139,7 +22522,7 @@ "ReleaseAddressRequest":{ "type":"structure", "members":{ - "AllocationId":{"shape":"String"}, + "AllocationId":{"shape":"AllocationId"}, "PublicIp":{"shape":"String"}, "NetworkBorderGroup":{"shape":"String"}, "DryRun":{ @@ -22448,7 +22831,7 @@ "RequestHostIdSet":{ "type":"list", "member":{ - "shape":"HostId", + "shape":"DedicatedHostId", "locationName":"item" } }, @@ -22509,7 +22892,8 @@ "shape":"LaunchTemplateLicenseSpecificationListRequest", "locationName":"LicenseSpecification" }, - "HibernationOptions":{"shape":"LaunchTemplateHibernationOptionsRequest"} + "HibernationOptions":{"shape":"LaunchTemplateHibernationOptionsRequest"}, + "MetadataOptions":{"shape":"LaunchTemplateInstanceMetadataOptionsRequest"} } }, "RequestSpotFleetRequest":{ @@ -23161,7 +23545,7 @@ ], "members":{ "Attribute":{"shape":"ResetImageAttributeName"}, - "ImageId":{"shape":"String"}, + "ImageId":{"shape":"ImageId"}, "DryRun":{ "shape":"Boolean", "locationName":"dryRun" @@ -23247,10 +23631,12 @@ "image", "instance", "internet-gateway", + "key-pair", "launch-template", "natgateway", "network-acl", "network-interface", + "placement-group", "reserved-instances", "route-table", "security-group", @@ -23401,6 +23787,10 @@ "HibernationOptions":{ "shape":"LaunchTemplateHibernationOptions", "locationName":"hibernationOptions" + }, + "MetadataOptions":{ + "shape":"LaunchTemplateInstanceMetadataOptions", + "locationName":"metadataOptions" } } }, @@ -24428,6 +24818,10 @@ "shape":"String", "locationName":"privateDnsName" }, + "PrivateDnsNameConfiguration":{ + "shape":"PrivateDnsNameConfiguration", + "locationName":"privateDnsNameConfiguration" + }, "Tags":{ "shape":"TagList", "locationName":"tagSet" @@ -24487,6 +24881,10 @@ "Tags":{ "shape":"TagList", "locationName":"tagSet" + }, + "PrivateDnsNameVerificationState":{ + "shape":"DnsNameState", + "locationName":"privateDnsNameVerificationState" } } }, @@ -25431,6 +25829,23 @@ } } }, + "StartVpcEndpointServicePrivateDnsVerificationRequest":{ + "type":"structure", + "required":["ServiceId"], + "members":{ + "DryRun":{"shape":"Boolean"}, + "ServiceId":{"shape":"ServiceId"} + } + }, + "StartVpcEndpointServicePrivateDnsVerificationResult":{ + "type":"structure", + "members":{ + "ReturnValue":{ + "shape":"Boolean", + "locationName":"return" + } + } + }, "State":{ "type":"string", "enum":[ @@ -25605,6 +26020,7 @@ "locationName":"item" } }, + "SubnetCidrAssociationId":{"type":"string"}, "SubnetCidrBlockState":{ "type":"structure", "members":{ @@ -28181,6 +28597,10 @@ "OwnerId":{ "shape":"String", "locationName":"ownerId" + }, + "LastError":{ + "shape":"LastError", + "locationName":"lastError" } } }, @@ -28262,6 +28682,10 @@ "shape":"VpcCidrBlockState", "locationName":"ipv6CidrBlockState" }, + "Ipv6Pool":{ + "shape":"String", + "locationName":"ipv6Pool" + }, "NetworkBorderGroup":{ "shape":"String", "locationName":"networkBorderGroup" diff --git a/models/apis/ec2/2016-11-15/docs-2.json b/models/apis/ec2/2016-11-15/docs-2.json index c9150caaf6c..f9187dde0c0 100755 --- a/models/apis/ec2/2016-11-15/docs-2.json +++ b/models/apis/ec2/2016-11-15/docs-2.json @@ -7,13 +7,13 @@ "AcceptTransitGatewayVpcAttachment": "Accepts a request to attach a VPC to a 
transit gateway.
The VPC attachment must be in the pendingAcceptance
state. Use DescribeTransitGatewayVpcAttachments to view your pending VPC attachment requests. Use RejectTransitGatewayVpcAttachment to reject a VPC attachment request.
Accepts one or more interface VPC endpoint connection requests to your VPC endpoint service.
", "AcceptVpcPeeringConnection": "Accept a VPC peering connection request. To accept a request, the VPC peering connection must be in the pending-acceptance
state, and you must be the owner of the peer VPC. Use DescribeVpcPeeringConnections to view your outstanding VPC peering connection requests.
For an inter-Region VPC peering connection request, you must accept the VPC peering connection in the Region of the accepter VPC.
", - "AdvertiseByoipCidr": "Advertises an IPv4 address range that is provisioned for use with your AWS resources through bring your own IP addresses (BYOIP).
You can perform this operation at most once every 10 seconds, even if you specify different address ranges each time.
We recommend that you stop advertising the BYOIP CIDR from other locations when you advertise it from AWS. To minimize down time, you can configure your AWS resources to use an address from a BYOIP CIDR before it is advertised, and then simultaneously stop advertising it from the current location and start advertising it through AWS.
It can take a few minutes before traffic to the specified addresses starts routing to AWS because of BGP propagation delays.
To stop advertising the BYOIP CIDR, use WithdrawByoipCidr.
", + "AdvertiseByoipCidr": "Advertises an IPv4 or IPv6 address range that is provisioned for use with your AWS resources through bring your own IP addresses (BYOIP).
You can perform this operation at most once every 10 seconds, even if you specify different address ranges each time.
We recommend that you stop advertising the BYOIP CIDR from other locations when you advertise it from AWS. To minimize down time, you can configure your AWS resources to use an address from a BYOIP CIDR before it is advertised, and then simultaneously stop advertising it from the current location and start advertising it through AWS.
It can take a few minutes before traffic to the specified addresses starts routing to AWS because of BGP propagation delays.
To stop advertising the BYOIP CIDR, use WithdrawByoipCidr.
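A minimal sketch of the call itself, which now accepts IPv6 ranges as well; the CIDR below is a placeholder documentation range, and the Request/Send pattern is this SDK's standard one.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Advertise a BYOIP range that was previously provisioned with
	// ProvisionByoipCidr; IPv6 ranges are now accepted too.
	req := svc.AdvertiseByoipCidrRequest(&ec2.AdvertiseByoipCidrInput{
		Cidr: aws.String("2001:db8:1234::/48"), // placeholder range
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatalf("AdvertiseByoipCidr failed: %v", err)
	}
	log.Printf("byoip cidr state: %v", resp.ByoipCidr)
}
```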
", "AllocateAddress": "Allocates an Elastic IP address to your AWS account. After you allocate the Elastic IP address you can associate it with an instance or network interface. After you release an Elastic IP address, it is released to the IP address pool and can be allocated to a different AWS account.
You can allocate an Elastic IP address from an address pool owned by AWS or from an address pool created from a public IPv4 address range that you have brought to AWS for use with your AWS resources using bring your own IP addresses (BYOIP). For more information, see Bring Your Own IP Addresses (BYOIP) in the Amazon Elastic Compute Cloud User Guide.
[EC2-VPC] If you release an Elastic IP address, you might be able to recover it. You cannot recover an Elastic IP address that you released after it is allocated to another AWS account. You cannot recover an Elastic IP address for EC2-Classic. To attempt to recover an Elastic IP address that you released, specify it in this operation.
An Elastic IP address is for use either in the EC2-Classic platform or in a VPC. By default, you can allocate 5 Elastic IP addresses for EC2-Classic per Region and 5 Elastic IP addresses for EC2-VPC per Region.
For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide.
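For example, allocating a VPC Elastic IP drawn from a BYOIP-backed address pool looks roughly like this; a sketch, with a placeholder pool ID, and PublicIpv4Pool can be omitted to allocate from Amazon's pool instead.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Allocate a VPC Elastic IP; PublicIpv4Pool is optional and selects a
	// pool created from a BYOIP range instead of Amazon's pool.
	resp, err := svc.AllocateAddressRequest(&ec2.AllocateAddressInput{
		Domain:         ec2.DomainTypeVpc,
		PublicIpv4Pool: aws.String("ipv4pool-ec2-0123456789abcdef0"), // placeholder pool ID
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("AllocateAddress failed: %v", err)
	}
	log.Printf("allocated %s (%s)", aws.StringValue(resp.PublicIp), aws.StringValue(resp.AllocationId))
}
```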
", "AllocateHosts": "Allocates a Dedicated Host to your account. At a minimum, specify the supported instance type or instance family, the Availability Zone in which to allocate the host, and the number of hosts to allocate.
", "ApplySecurityGroupsToClientVpnTargetNetwork": "Applies a security group to the association between the target network and the Client VPN endpoint. This action replaces the existing security groups with the specified security groups.
", "AssignIpv6Addresses": "Assigns one or more IPv6 addresses to the specified network interface. You can specify one or more specific IPv6 addresses, or you can specify the number of IPv6 addresses to be automatically assigned from within the subnet's IPv6 CIDR block range. You can assign as many IPv6 addresses to a network interface as you can assign private IPv4 addresses, and the limit varies per instance type. For information, see IP Addresses Per Network Interface Per Instance Type in the Amazon Elastic Compute Cloud User Guide.
", "AssignPrivateIpAddresses": "Assigns one or more secondary private IP addresses to the specified network interface.
You can specify one or more specific secondary IP addresses, or you can specify the number of secondary IP addresses to be automatically assigned within the subnet's CIDR block range. The number of secondary IP addresses that you can assign to an instance varies by instance type. For information about instance types, see Instance Types in the Amazon Elastic Compute Cloud User Guide. For more information about Elastic IP addresses, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide.
When you move a secondary private IP address to another network interface, any Elastic IP address that is associated with the IP address is also moved.
Remapping an IP address is an asynchronous operation. When you move an IP address from one network interface to another, check network/interfaces/macs/mac/local-ipv4s
in the instance metadata to confirm that the remapping is complete.
Associates an Elastic IP address with an instance or a network interface. Before you can use an Elastic IP address, you must allocate it to your account.
An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide.
[EC2-Classic, VPC in an EC2-VPC-only account] If the Elastic IP address is already associated with a different instance, it is disassociated from that instance and associated with the specified instance. If you associate an Elastic IP address with an instance that has an existing Elastic IP address, the existing address is disassociated from the instance, but remains allocated to your account.
[VPC in an EC2-Classic account] If you don't specify a private IP address, the Elastic IP address is associated with the primary IP address. If the Elastic IP address is already associated with a different instance or a network interface, you get an error unless you allow reassociation. You cannot associate an Elastic IP address with an instance or network interface that has an existing Elastic IP address.
This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error, and you may be charged for each time the Elastic IP address is remapped to the same instance. For more information, see the Elastic IP Addresses section of Amazon EC2 Pricing.
Associates an Elastic IP address with an instance or a network interface. Before you can use an Elastic IP address, you must allocate it to your account.
An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide.
[EC2-Classic, VPC in an EC2-VPC-only account] If the Elastic IP address is already associated with a different instance, it is disassociated from that instance and associated with the specified instance. If you associate an Elastic IP address with an instance that has an existing Elastic IP address, the existing address is disassociated from the instance, but remains allocated to your account.
[VPC in an EC2-Classic account] If you don't specify a private IP address, the Elastic IP address is associated with the primary IP address. If the Elastic IP address is already associated with a different instance or a network interface, you get an error unless you allow reassociation. You cannot associate an Elastic IP address with an instance or network interface that has an existing Elastic IP address.
You cannot associate an Elastic IP address with an interface in a different network border group.
This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error, and you may be charged for each time the Elastic IP address is remapped to the same instance. For more information, see the Elastic IP Addresses section of Amazon EC2 Pricing.
Associates a target network with a Client VPN endpoint. A target network is a subnet in a VPC. You can associate multiple subnets from the same VPC with a Client VPN endpoint. You can associate only one subnet in each Availability Zone. We recommend that you associate at least two subnets to provide Availability Zone redundancy.
", "AssociateDhcpOptions": "Associates a set of DHCP options (that you've previously created) with the specified VPC, or associates no DHCP options with the VPC.
After you associate the options with the VPC, any existing instances and all new instances that you launch in that VPC use the options. You don't need to restart or relaunch the instances. They automatically pick up the changes within a few hours, depending on how frequently the instance renews its DHCP lease. You can explicitly renew the lease using the operating system on the instance.
For more information, see DHCP Options Sets in the Amazon Virtual Private Cloud User Guide.
", "AssociateIamInstanceProfile": "Associates an IAM instance profile with a running or stopped instance. You cannot associate more than one IAM instance profile with an instance.
", @@ -21,7 +21,7 @@ "AssociateSubnetCidrBlock": "Associates a CIDR block with your subnet. You can only associate a single IPv6 CIDR block with your subnet. An IPv6 CIDR block must have a prefix length of /64.
", "AssociateTransitGatewayMulticastDomain": "Associates the specified subnets and transit gateway attachments with the specified transit gateway multicast domain.
The transit gateway attachment must be in the available state before you can add a resource. Use DescribeTransitGatewayAttachments to see the state of the attachment.
", "AssociateTransitGatewayRouteTable": "Associates the specified attachment with the specified transit gateway route table. You can associate only one route table with an attachment.
", - "AssociateVpcCidrBlock": "Associates a CIDR block with your VPC. You can associate a secondary IPv4 CIDR block, or you can associate an Amazon-provided IPv6 CIDR block. The IPv6 CIDR block size is fixed at /56.
For more information about associating CIDR blocks with your VPC and applicable restrictions, see VPC and Subnet Sizing in the Amazon Virtual Private Cloud User Guide.
", + "AssociateVpcCidrBlock": "Associates a CIDR block with your VPC. You can associate a secondary IPv4 CIDR block, an Amazon-provided IPv6 CIDR block, or an IPv6 CIDR block from an IPv6 address pool that you provisioned through bring your own IP addresses (BYOIP). The IPv6 CIDR block size is fixed at /56.
For more information about associating CIDR blocks with your VPC and applicable restrictions, see VPC and Subnet Sizing in the Amazon Virtual Private Cloud User Guide.
", "AttachClassicLinkVpc": "Links an EC2-Classic instance to a ClassicLink-enabled VPC through one or more of the VPC's security groups. You cannot link an EC2-Classic instance to more than one VPC at a time. You can only link an instance that's in the running
state. An instance is automatically unlinked from a VPC when it's stopped - you can link it to the VPC again when you restart it.
After you've linked an instance, you cannot change the VPC security groups that are associated with it. To change the security groups, you must first unlink the instance, and then link it again.
Linking your instance to a VPC is sometimes referred to as attaching your instance.
", "AttachInternetGateway": "Attaches an internet gateway or a virtual private gateway to a VPC, enabling connectivity between the internet and the VPC. For more information about your VPC and internet gateway, see the Amazon Virtual Private Cloud User Guide.
", "AttachNetworkInterface": "Attaches a network interface to an instance.
", @@ -88,10 +88,10 @@ "CreateTransitGatewayRouteTable": "Creates a route table for the specified transit gateway.
", "CreateTransitGatewayVpcAttachment": "Attaches the specified VPC to the specified transit gateway.
If you attach a VPC with a CIDR range that overlaps the CIDR range of a VPC that is already attached, the new VPC CIDR range is not propagated to the default propagation route table.
To send VPC traffic to an attached transit gateway, add a route to the VPC route table using CreateRoute.
", "CreateVolume": "Creates an EBS volume that can be attached to an instance in the same Availability Zone. The volume is created in the regional endpoint that you send the HTTP request to. For more information see Regions and Endpoints.
You can create a new empty volume or restore a volume from an EBS snapshot. Any AWS Marketplace product codes from the snapshot are propagated to the volume.
You can create encrypted volumes. Encrypted volumes must be attached to instances that support Amazon EBS encryption. Volumes that are created from encrypted snapshots are also automatically encrypted. For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.
You can tag your volumes during creation. For more information, see Tagging Your Amazon EC2 Resources in the Amazon Elastic Compute Cloud User Guide.
For more information, see Creating an Amazon EBS Volume in the Amazon Elastic Compute Cloud User Guide.
", - "CreateVpc": "Creates a VPC with the specified IPv4 CIDR block. The smallest VPC you can create uses a /28 netmask (16 IPv4 addresses), and the largest uses a /16 netmask (65,536 IPv4 addresses). For more information about how large to make your VPC, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.
You can optionally request an Amazon-provided IPv6 CIDR block for the VPC. The IPv6 CIDR block uses a /56 prefix length, and is allocated from Amazon's pool of IPv6 addresses. You cannot choose the IPv6 range for your VPC.
By default, each instance you launch in the VPC has the default DHCP options, which include only a default DNS server that we provide (AmazonProvidedDNS). For more information, see DHCP Options Sets in the Amazon Virtual Private Cloud User Guide.
You can specify the instance tenancy value for the VPC when you create it. You can't change this value for the VPC after you create it. For more information, see Dedicated Instances in the Amazon Elastic Compute Cloud User Guide.
", - "CreateVpcEndpoint": "Creates a VPC endpoint for a specified service. An endpoint enables you to create a private connection between your VPC and the service. The service may be provided by AWS, an AWS Marketplace partner, or another AWS account. For more information, see VPC Endpoints in the Amazon Virtual Private Cloud User Guide.
A gateway
endpoint serves as a target for a route in your route table for traffic destined for the AWS service. You can specify an endpoint policy to attach to the endpoint that will control access to the service from your VPC. You can also specify the VPC route tables that use the endpoint.
An interface
endpoint is a network interface in your subnet that serves as an endpoint for communicating with the specified service. You can specify the subnets in which to create an endpoint, and the security groups to associate with the endpoint network interface.
Use DescribeVpcEndpointServices to get a list of supported services.
", + "CreateVpc": "Creates a VPC with the specified IPv4 CIDR block. The smallest VPC you can create uses a /28 netmask (16 IPv4 addresses), and the largest uses a /16 netmask (65,536 IPv4 addresses). For more information about how large to make your VPC, see Your VPC and Subnets in the Amazon Virtual Private Cloud User Guide.
You can optionally request an IPv6 CIDR block for the VPC. You can request an Amazon-provided IPv6 CIDR block from Amazon's pool of IPv6 addresses, or an IPv6 CIDR block from an IPv6 address pool that you provisioned through bring your own IP addresses (BYOIP).
By default, each instance you launch in the VPC has the default DHCP options, which include only a default DNS server that we provide (AmazonProvidedDNS). For more information, see DHCP Options Sets in the Amazon Virtual Private Cloud User Guide.
You can specify the instance tenancy value for the VPC when you create it. You can't change this value for the VPC after you create it. For more information, see Dedicated Instances in the Amazon Elastic Compute Cloud User Guide.
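The new Ipv6Pool and Ipv6CidrBlock request members make the BYOIP variant look roughly like this; a sketch in which both IDs are placeholders, with AmazonProvidedIpv6CidrBlock remaining the alternative for an Amazon-provided block.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Create a VPC and associate a /56 IPv6 block drawn from a
	// provisioned BYOIP IPv6 address pool.
	resp, err := svc.CreateVpcRequest(&ec2.CreateVpcInput{
		CidrBlock:     aws.String("10.0.0.0/16"),
		Ipv6Pool:      aws.String("ipv6pool-ec2-0123456789abcdef0"), // placeholder pool ID
		Ipv6CidrBlock: aws.String("2001:db8:1234::/56"),             // placeholder CIDR from that pool
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("CreateVpc failed: %v", err)
	}
	log.Printf("created VPC %s", aws.StringValue(resp.Vpc.VpcId))
}
```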
", + "CreateVpcEndpoint": "Creates a VPC endpoint for a specified service. An endpoint enables you to create a private connection between your VPC and the service. The service may be provided by AWS, an AWS Marketplace Partner, or another AWS account. For more information, see VPC Endpoints in the Amazon Virtual Private Cloud User Guide.
A gateway
endpoint serves as a target for a route in your route table for traffic destined for the AWS service. You can specify an endpoint policy to attach to the endpoint, which will control access to the service from your VPC. You can also specify the VPC route tables that use the endpoint.
An interface
endpoint is a network interface in your subnet that serves as an endpoint for communicating with the specified service. You can specify the subnets in which to create an endpoint, and the security groups to associate with the endpoint network interface.
Use DescribeVpcEndpointServices to get a list of supported services.
", "CreateVpcEndpointConnectionNotification": "Creates a connection notification for a specified VPC endpoint or VPC endpoint service. A connection notification notifies you of specific endpoint events. You must create an SNS topic to receive notifications. For more information, see Create a Topic in the Amazon Simple Notification Service Developer Guide.
You can create a connection notification for interface endpoints only.
", - "CreateVpcEndpointServiceConfiguration": "Creates a VPC endpoint service configuration to which service consumers (AWS accounts, IAM users, and IAM roles) can connect. Service consumers can create an interface VPC endpoint to connect to your service.
To create an endpoint service configuration, you must first create a Network Load Balancer for your service. For more information, see VPC Endpoint Services in the Amazon Virtual Private Cloud User Guide.
", + "CreateVpcEndpointServiceConfiguration": "Creates a VPC endpoint service configuration to which service consumers (AWS accounts, IAM users, and IAM roles) can connect. Service consumers can create an interface VPC endpoint to connect to your service.
To create an endpoint service configuration, you must first create a Network Load Balancer for your service. For more information, see VPC Endpoint Services in the Amazon Virtual Private Cloud User Guide.
If you set the private DNS name, you must prove that you own the private DNS domain name. For more information, see VPC Endpoint Service Private DNS Name Verification in the Amazon Virtual Private Cloud User Guide.
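Putting the two new pieces together, here is a sketch of creating a service configuration with a private DNS name and then starting the ownership check with the new StartVpcEndpointServicePrivateDnsVerification operation; the NLB ARN and domain are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Create the endpoint service with a private DNS name you control.
	createResp, err := svc.CreateVpcEndpointServiceConfigurationRequest(&ec2.CreateVpcEndpointServiceConfigurationInput{
		NetworkLoadBalancerArns: []string{
			"arn:aws:elasticloadbalancing:us-west-2:111122223333:loadbalancer/net/example/abc123", // placeholder
		},
		PrivateDnsName:     aws.String("service.example.com"), // placeholder domain
		AcceptanceRequired: aws.Bool(true),
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("CreateVpcEndpointServiceConfiguration failed: %v", err)
	}

	// Kick off verification; publish the returned TXT record on the domain,
	// then poll DescribeVpcEndpointServiceConfigurations for "verified".
	_, err = svc.StartVpcEndpointServicePrivateDnsVerificationRequest(&ec2.StartVpcEndpointServicePrivateDnsVerificationInput{
		ServiceId: createResp.ServiceConfiguration.ServiceId,
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("verification request failed: %v", err)
	}
}
```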
", "CreateVpcPeeringConnection": "Requests a VPC peering connection between two VPCs: a requester VPC that you own and an accepter VPC with which to create the connection. The accepter VPC can belong to another AWS account and can be in a different Region to the requester VPC. The requester VPC and accepter VPC cannot have overlapping CIDR blocks.
Limitations and rules apply to a VPC peering connection. For more information, see the limitations section in the VPC Peering Guide.
The owner of the accepter VPC must accept the peering request to activate the peering connection. The VPC peering connection request expires after 7 days, after which it cannot be accepted or rejected.
If you create a VPC peering connection request between VPCs with overlapping CIDR blocks, the VPC peering connection has a status of failed
.
Creates a VPN connection between an existing virtual private gateway and a VPN customer gateway. The supported connection type is ipsec.1
.
The response includes information that you need to give to your network administrator to configure your customer gateway.
We strongly recommend that you use HTTPS when calling this operation because the response contains sensitive cryptographic information for configuring your customer gateway.
If you decide to shut down your VPN connection for any reason and later create a new VPN connection, you must reconfigure your customer gateway with the new information returned from this call.
This is an idempotent operation. If you perform the operation more than once, Amazon EC2 doesn't return an error.
For more information, see AWS Site-to-Site VPN in the AWS Site-to-Site VPN User Guide.
", "CreateVpnConnectionRoute": "Creates a static route associated with a VPN connection between an existing virtual private gateway and a VPN customer gateway. The static route allows traffic to be routed from the virtual private gateway to the VPN customer gateway.
For more information, see AWS Site-to-Site VPN in the AWS Site-to-Site VPN User Guide.
", @@ -147,12 +147,12 @@ "DeregisterImage": "Deregisters the specified AMI. After you deregister an AMI, it can't be used to launch new instances; however, it doesn't affect any instances that you've already launched from the AMI. You'll continue to incur usage costs for those instances until you terminate them.
When you deregister an Amazon EBS-backed AMI, it doesn't affect the snapshot that was created for the root volume of the instance during the AMI creation process. When you deregister an instance store-backed AMI, it doesn't affect the files that you uploaded to Amazon S3 when you created the AMI.
", "DeregisterTransitGatewayMulticastGroupMembers": "Deregisters the specified members (network interfaces) from the transit gateway multicast group.
", "DeregisterTransitGatewayMulticastGroupSources": "Deregisters the specified sources (network interfaces) from the transit gateway multicast group.
", - "DescribeAccountAttributes": "Describes attributes of your AWS account. The following are the supported account attributes:
supported-platforms
: Indicates whether your account can launch instances into EC2-Classic and EC2-VPC, or only into EC2-VPC.
default-vpc
: The ID of the default VPC for your account, or none
.
max-instances
: The maximum number of On-Demand Instances that you can run.
vpc-max-security-groups-per-interface
: The maximum number of security groups that you can assign to a network interface.
max-elastic-ips
: The maximum number of Elastic IP addresses that you can allocate for use with EC2-Classic.
vpc-max-elastic-ips
: The maximum number of Elastic IP addresses that you can allocate for use with EC2-VPC.
Describes attributes of your AWS account. The following are the supported account attributes:
supported-platforms
: Indicates whether your account can launch instances into EC2-Classic and EC2-VPC, or only into EC2-VPC.
default-vpc
: The ID of the default VPC for your account, or none
.
max-instances
: This attribute is no longer supported. The returned value does not reflect your actual vCPU limit for running On-Demand Instances. For more information, see On-Demand Instance Limits in the Amazon Elastic Compute Cloud User Guide.
vpc-max-security-groups-per-interface
: The maximum number of security groups that you can assign to a network interface.
max-elastic-ips
: The maximum number of Elastic IP addresses that you can allocate for use with EC2-Classic.
vpc-max-elastic-ips
: The maximum number of Elastic IP addresses that you can allocate for use with EC2-VPC.
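Closing out the attribute list above, a short sketch of querying a subset of these attributes with the standard Request/Send pattern; omit AttributeNames to fetch them all.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// Ask only for the attributes of interest.
	resp, err := svc.DescribeAccountAttributesRequest(&ec2.DescribeAccountAttributesInput{
		AttributeNames: []ec2.AccountAttributeName{
			ec2.AccountAttributeNameSupportedPlatforms,
			ec2.AccountAttributeNameDefaultVpc,
		},
	}).Send(context.TODO())
	if err != nil {
		log.Fatalf("DescribeAccountAttributes failed: %v", err)
	}
	for _, attr := range resp.AccountAttributes {
		log.Printf("%s: %v", aws.StringValue(attr.AttributeName), attr.AttributeValues)
	}
}
```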
Describes the specified Elastic IP addresses or all of your Elastic IP addresses.
An Elastic IP address is for use in either the EC2-Classic platform or in a VPC. For more information, see Elastic IP Addresses in the Amazon Elastic Compute Cloud User Guide.
", "DescribeAggregateIdFormat": "Describes the longer ID format settings for all resource types in a specific Region. This request is useful for performing a quick audit to determine whether a specific Region is fully opted in for longer IDs (17-character IDs).
This request only returns information about resource types that support longer IDs.
The following resource types support longer IDs: bundle | conversion-task | customer-gateway | dhcp-options | elastic-ip-allocation | elastic-ip-association | export-task | flow-log | image | import-task | instance | internet-gateway | network-acl | network-acl-association | network-interface | network-interface-attachment | prefix-list | reservation | route-table | route-table-association | security-group | snapshot | subnet | subnet-cidr-block-association | volume | vpc | vpc-cidr-block-association | vpc-endpoint | vpc-peering-connection | vpn-connection | vpn-gateway.
", "DescribeAvailabilityZones": "Describes the Availability Zones and Local Zones that are available to you. If there is an event impacting an Availability Zone or Local Zone, you can use this request to view the state and any provided messages for that Availability Zone or Local Zone.
For more information about Availability Zones and Local Zones, see Regions and Availability Zones in the Amazon Elastic Compute Cloud User Guide.
", "DescribeBundleTasks": "Describes the specified bundle tasks or all of your bundle tasks.
Completed bundle tasks are listed for only a limited time. If your bundle task is no longer in the list, you can still register an AMI from it. Just use RegisterImage with the Amazon S3 bucket name and image manifest name you provided to the bundle task.
", - "DescribeByoipCidrs": "Describes the IP address ranges that were specified in calls to ProvisionByoipCidr. To describe the address pools that were created when you provisioned the address ranges, use DescribePublicIpv4Pools.
", + "DescribeByoipCidrs": "Describes the IP address ranges that were specified in calls to ProvisionByoipCidr.
To describe the address pools that were created when you provisioned the address ranges, use DescribePublicIpv4Pools or DescribeIpv6Pools.
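As an illustrative sketch (not part of the generated API model): with the v0.x config-loading and `Request`/`Send` calling pattern of this SDK, listing your provisioned BYOIP ranges might look like the following; the `MaxResults` value is a placeholder, and helper names such as `external.LoadDefaultAWSConfig` and `aws.StringValue` reflect the v0.x API line. The same setup preamble is assumed in the sketches that follow.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	// Load region and credentials from the environment or shared config.
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	// List the BYOIP address ranges provisioned in this account.
	req := svc.DescribeByoipCidrsRequest(&ec2.DescribeByoipCidrsInput{
		MaxResults: aws.Int64(10), // placeholder page size
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatalf("DescribeByoipCidrs: %v", err)
	}
	for _, c := range resp.ByoipCidrs {
		fmt.Println(aws.StringValue(c.Cidr), c.State)
	}
}
```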
", "DescribeCapacityReservations": "Describes one or more of your Capacity Reservations. The results describe only the Capacity Reservations in the AWS Region that you're currently using.
", "DescribeClassicLinkInstances": "Describes one or more of your linked EC2-Classic instances. This request only returns information about EC2-Classic instances linked to a VPC through ClassicLink. You cannot use this request to return information about other instances.
", "DescribeClientVpnAuthorizationRules": "Describes the authorization rules for a specified Client VPN endpoint.
", @@ -171,7 +171,7 @@ "DescribeFastSnapshotRestores": "Describes the state of fast snapshot restores for your snapshots.
", "DescribeFleetHistory": "Describes the events for the specified EC2 Fleet during the specified time.
EC2 Fleet events are delayed by up to 30 seconds before they can be described. This ensures that you can query by the last evaluated time and not miss a recorded event. EC2 Fleet events are available for 48 hours.
", "DescribeFleetInstances": "Describes the running instances for the specified EC2 Fleet.
", - "DescribeFleets": "Describes the specified EC2 Fleets or all your EC2 Fleets.
", + "DescribeFleets": "Describes the specified EC2 Fleets or all of your EC2 Fleets.
", "DescribeFlowLogs": "Describes one or more flow logs. To view the information in your flow logs (the log streams for the network interfaces), you must use the CloudWatch Logs console or the CloudWatch Logs API.
", "DescribeFpgaImageAttribute": "Describes the specified attribute of the specified Amazon FPGA Image (AFI).
", "DescribeFpgaImages": "Describes the Amazon FPGA Images (AFIs) available to you. These include public AFIs, private AFIs that you own, and AFIs owned by other AWS accounts for which you have load permissions.
", @@ -192,6 +192,7 @@ "DescribeInstanceTypes": "Returns a list of all instance types offered in your current AWS Region. The results can be filtered by the attributes of the instance types.
", "DescribeInstances": "Describes the specified instances or all of AWS account's instances.
If you specify one or more instance IDs, Amazon EC2 returns information for those instances. If you do not specify instance IDs, Amazon EC2 returns information for all relevant instances. If you specify an instance ID that is not valid, an error is returned. If you specify an instance that you do not own, it is not included in the returned results.
Recently terminated instances might appear in the returned results. This interval is usually less than one hour.
If you describe instances in the rare case where an Availability Zone is experiencing a service disruption and you specify instance IDs that are in the affected zone, or do not specify any instance IDs at all, the call fails. If you describe instances and specify only instance IDs that are in an unaffected zone, the call works normally.
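A hedged sketch of the call described above, assuming the v0.x `Request`/`Send` pattern; the instance ID is a placeholder:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	// Describe one specific instance; omit InstanceIds to describe them all.
	resp, err := svc.DescribeInstancesRequest(&ec2.DescribeInstancesInput{
		InstanceIds: []string{"i-0123456789abcdef0"}, // placeholder instance ID
	}).Send(context.Background())
	if err != nil {
		log.Fatalf("DescribeInstances: %v", err)
	}
	// Instances are grouped by reservation in the response.
	for _, r := range resp.Reservations {
		for _, inst := range r.Instances {
			fmt.Println(aws.StringValue(inst.InstanceId), inst.State.Name)
		}
	}
}
```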
", "DescribeInternetGateways": "Describes one or more of your internet gateways.
", + "DescribeIpv6Pools": "Describes your IPv6 address pools.
", "DescribeKeyPairs": "Describes the specified key pairs or all of your key pairs.
For more information about key pairs, see Key Pairs in the Amazon Elastic Compute Cloud User Guide.
", "DescribeLaunchTemplateVersions": "Describes one or more versions of a specified launch template. You can describe all versions, individual versions, or a range of versions.
", "DescribeLaunchTemplates": "Describes one or more launch templates.
", @@ -288,6 +289,7 @@ "ExportClientVpnClientConfiguration": "Downloads the contents of the Client VPN endpoint configuration file for the specified Client VPN endpoint. The Client VPN endpoint configuration file includes the Client VPN endpoint and certificate information clients need to establish a connection with the Client VPN endpoint.
", "ExportImage": "Exports an Amazon Machine Image (AMI) to a VM file. For more information, see Exporting a VM Directory from an Amazon Machine Image (AMI) in the VM Import/Export User Guide.
", "ExportTransitGatewayRoutes": "Exports routes from the specified transit gateway route table to the specified S3 bucket. By default, all routes are exported. Alternatively, you can filter by CIDR range.
", + "GetAssociatedIpv6PoolCidrs": "Gets information about the IPv6 CIDR block associations for a specified IPv6 address pool.
", "GetCapacityReservationUsage": "Gets usage information about a Capacity Reservation. If the Capacity Reservation is shared, it shows usage information for the Capacity Reservation owner and each AWS account that is currently using the shared capacity. If the Capacity Reservation is not shared, it shows only the Capacity Reservation owner's usage.
", "GetCoipPoolUsage": "Describes the allocations from the specified customer-owned address pool.
", "GetConsoleOutput": "Gets the console output for the specified instance. For Linux instances, the instance console output displays the exact console output that would normally be displayed on a physical monitor attached to a computer. For Windows instances, the instance console output includes the last three system event log errors.
By default, the console output returns buffered information that was posted shortly after an instance transition state (start, stop, reboot, or terminate). This information is available for at least one hour after the most recent post. Only the most recent 64 KB of console output is available.
You can optionally retrieve the latest serial console output at any time during the instance lifecycle. This option is supported on instance types that use the Nitro hypervisor.
For more information, see Instance Console Output in the Amazon Elastic Compute Cloud User Guide.
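Since the returned console output is base64-encoded, a sketch of retrieving and decoding it (same v0.x assumptions; the instance ID is a placeholder):

```go
package main

import (
	"context"
	"encoding/base64"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	resp, err := svc.GetConsoleOutputRequest(&ec2.GetConsoleOutputInput{
		InstanceId: aws.String("i-0123456789abcdef0"), // placeholder instance ID
		Latest:     aws.Bool(true),                    // latest serial output; Nitro-based instance types only
	}).Send(context.Background())
	if err != nil {
		log.Fatalf("GetConsoleOutput: %v", err)
	}
	// The Output field is base64-encoded text.
	decoded, err := base64.StdEncoding.DecodeString(aws.StringValue(resp.Output))
	if err != nil {
		log.Fatalf("decode: %v", err)
	}
	fmt.Println(string(decoded))
}
```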
", @@ -312,7 +314,7 @@ "ModifyCapacityReservation": "Modifies a Capacity Reservation's capacity and the conditions under which it is to be released. You cannot change a Capacity Reservation's instance type, EBS optimization, instance store settings, platform, Availability Zone, or instance eligibility. If you need to modify any of these attributes, we recommend that you cancel the Capacity Reservation, and then create a new one with the required attributes.
", "ModifyClientVpnEndpoint": "Modifies the specified Client VPN endpoint. You can only modify an endpoint's server certificate information, client connection logging information, DNS server, and description. Modifying the DNS server resets existing client connections.
", "ModifyDefaultCreditSpecification": "Modifies the default credit option for CPU usage of burstable performance instances. The default credit option is set at the account level per AWS Region, and is specified per instance family. All new burstable performance instances in the account launch using the default credit option.
ModifyDefaultCreditSpecification is an asynchronous operation, which works at an AWS Region level and modifies the credit option for each Availability Zone. All zones in a Region are updated within five minutes. But if instances are launched during this operation, they might not get the new credit option until the zone is updated. To verify whether the update has occurred, you can call GetDefaultCreditSpecification and check DefaultCreditSpecification for updates.
For more information, see Burstable Performance Instances in the Amazon Elastic Compute Cloud User Guide.
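A hedged sketch of the modify-then-verify flow described above, under the same v0.x assumptions; the `UnlimitedSupportedInstanceFamily` enum name and its `T3` constant are assumed from the API model:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)
	ctx := context.Background()

	// Switch the account-level default for the T3 family to unlimited credits.
	_, err = svc.ModifyDefaultCreditSpecificationRequest(&ec2.ModifyDefaultCreditSpecificationInput{
		InstanceFamily: ec2.UnlimitedSupportedInstanceFamilyT3, // enum name assumed from the model
		CpuCredits:     aws.String("unlimited"),
	}).Send(ctx)
	if err != nil {
		log.Fatalf("ModifyDefaultCreditSpecification: %v", err)
	}

	// The update rolls out per Availability Zone; poll to confirm it took effect.
	resp, err := svc.GetDefaultCreditSpecificationRequest(&ec2.GetDefaultCreditSpecificationInput{
		InstanceFamily: ec2.UnlimitedSupportedInstanceFamilyT3,
	}).Send(ctx)
	if err != nil {
		log.Fatalf("GetDefaultCreditSpecification: %v", err)
	}
	fmt.Println(aws.StringValue(resp.InstanceFamilyCreditSpecification.CpuCredits))
}
```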
", - "ModifyEbsDefaultKmsKeyId": "Changes the default customer master key (CMK) for EBS encryption by default for your account in this Region.
AWS creates a unique AWS managed CMK in each Region for use with encryption by default. If you change the default CMK to a customer managed CMK, it is used instead of the AWS managed CMK. To reset the default CMK to the AWS managed CMK for EBS, use ResetEbsDefaultKmsKeyId.
If you delete or disable the customer managed CMK that you specified for use with encryption by default, your instances will fail to launch.
For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.
", + "ModifyEbsDefaultKmsKeyId": "Changes the default customer master key (CMK) for EBS encryption by default for your account in this Region.
AWS creates a unique AWS managed CMK in each Region for use with encryption by default. If you change the default CMK to a symmetric customer managed CMK, it is used instead of the AWS managed CMK. To reset the default CMK to the AWS managed CMK for EBS, use ResetEbsDefaultKmsKeyId. Amazon EBS does not support asymmetric CMKs.
If you delete or disable the customer managed CMK that you specified for use with encryption by default, your instances will fail to launch.
For more information, see Amazon EBS Encryption in the Amazon Elastic Compute Cloud User Guide.
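A minimal sketch of setting the default, assuming the v0.x pattern; the key ARN is a placeholder and, per the updated doc string, must refer to a symmetric customer managed CMK:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	resp, err := svc.ModifyEbsDefaultKmsKeyIdRequest(&ec2.ModifyEbsDefaultKmsKeyIdInput{
		// Placeholder ARN; must identify a symmetric customer managed CMK.
		KmsKeyId: aws.String("arn:aws:kms:us-west-2:111122223333:key/EXAMPLE-KEY-ID"),
	}).Send(context.Background())
	if err != nil {
		log.Fatalf("ModifyEbsDefaultKmsKeyId: %v", err)
	}
	fmt.Println("default EBS CMK is now:", aws.StringValue(resp.KmsKeyId))
}
```

To revert to the AWS managed CMK, call ResetEbsDefaultKmsKeyId as noted above.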
", "ModifyFleet": "Modifies the specified EC2 Fleet.
You can only modify an EC2 Fleet request of type maintain.
While the EC2 Fleet is being modified, it is in the modifying state.
To scale up your EC2 Fleet, increase its target capacity. The EC2 Fleet launches the additional Spot Instances according to the allocation strategy for the EC2 Fleet request. If the allocation strategy is lowest-price, the EC2 Fleet launches instances using the Spot Instance pool with the lowest price. If the allocation strategy is diversified, the EC2 Fleet distributes the instances across the Spot Instance pools. If the allocation strategy is capacity-optimized, EC2 Fleet launches instances from Spot Instance pools with optimal capacity for the number of instances that are launching.
To scale down your EC2 Fleet, decrease its target capacity. First, the EC2 Fleet cancels any open requests that exceed the new target capacity. You can request that the EC2 Fleet terminate Spot Instances until the size of the fleet no longer exceeds the new target capacity. If the allocation strategy is lowest-price, the EC2 Fleet terminates the instances with the highest price per unit. If the allocation strategy is capacity-optimized, the EC2 Fleet terminates the instances in the Spot Instance pools that have the least available Spot Instance capacity. If the allocation strategy is diversified, the EC2 Fleet terminates instances across the Spot Instance pools. Alternatively, you can request that the EC2 Fleet keep the fleet at its current size, but not replace any Spot Instances that are interrupted or that you terminate manually.
If you are finished with your EC2 Fleet for now, but will use it again later, you can set the target capacity to 0.
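For instance, scaling a fleet to zero while keeping the request alive might look like this sketch (v0.x assumptions; the fleet ID is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	// Keep the maintain-type fleet around but scale its target capacity to zero.
	_, err = svc.ModifyFleetRequest(&ec2.ModifyFleetInput{
		FleetId: aws.String("fleet-12345678-90ab-cdef-1234-567890abcdef"), // placeholder fleet ID
		TargetCapacitySpecification: &ec2.TargetCapacitySpecificationRequest{
			TotalTargetCapacity: aws.Int64(0),
		},
	}).Send(context.Background())
	if err != nil {
		log.Fatalf("ModifyFleet: %v", err)
	}
	fmt.Println("fleet target capacity updated")
}
```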
", "ModifyFpgaImageAttribute": "Modifies the specified attribute of the specified Amazon FPGA Image (AFI).
", "ModifyHosts": "Modify the auto-placement setting of a Dedicated Host. When auto-placement is enabled, any instances that you launch with a tenancy of host
but without a specific host ID are placed onto any available Dedicated Host in your account that has auto-placement enabled. When auto-placement is disabled, you need to provide a host ID to have the instance launch onto a specific host. If no host ID is provided, the instance is launched onto a suitable host with auto-placement enabled.
You can also use this API action to modify a Dedicated Host to support either multiple instance types in an instance family, or to support a specific instance type only.
", @@ -340,7 +342,7 @@ "ModifyVpcAttribute": "Modifies the specified attribute of the specified VPC.
", "ModifyVpcEndpoint": "Modifies attributes of a specified VPC endpoint. The attributes that you can modify depend on the type of VPC endpoint (interface or gateway). For more information, see VPC Endpoints in the Amazon Virtual Private Cloud User Guide.
", "ModifyVpcEndpointConnectionNotification": "Modifies a connection notification for VPC endpoint or VPC endpoint service. You can change the SNS topic for the notification, or the events for which to be notified.
", - "ModifyVpcEndpointServiceConfiguration": "Modifies the attributes of your VPC endpoint service configuration. You can change the Network Load Balancers for your service, and you can specify whether acceptance is required for requests to connect to your endpoint service through an interface VPC endpoint.
", + "ModifyVpcEndpointServiceConfiguration": "Modifies the attributes of your VPC endpoint service configuration. You can change the Network Load Balancers for your service, and you can specify whether acceptance is required for requests to connect to your endpoint service through an interface VPC endpoint.
If you set or modify the private DNS name, you must prove that you own the private DNS domain name. For more information, see VPC Endpoint Service Private DNS Name Verification in the Amazon Virtual Private Cloud User Guide.
", "ModifyVpcEndpointServicePermissions": "Modifies the permissions for your VPC endpoint service. You can add or remove permissions for service consumers (IAM users, IAM roles, and AWS accounts) to connect to your endpoint service.
If you grant permissions to all principals, the service is public. Any users who know the name of a public service can send a request to attach an endpoint. If the service does not require manual approval, attachments are automatically approved.
", "ModifyVpcPeeringConnectionOptions": "Modifies the VPC peering connection options on one side of a VPC peering connection. You can do the following:
Enable/disable communication over the peering connection between an EC2-Classic instance that's linked to your VPC (using ClassicLink) and instances in the peer VPC.
Enable/disable communication over the peering connection between instances in your VPC and an EC2-Classic instance that's linked to the peer VPC.
Enable/disable the ability to resolve public DNS hostnames to private IP addresses when queried from instances in the peer VPC.
If the peered VPCs are in the same AWS account, you can enable DNS resolution for queries from the local VPC. This ensures that queries from the local VPC resolve to private IP addresses in the peer VPC. This option is not available if the peered VPCs are in different AWS accounts or different Regions. For peered VPCs in different AWS accounts, each AWS account owner must initiate a separate request to modify the peering connection options. For inter-region peering connections, you must use the Region for the requester VPC to modify the requester VPC peering options and the Region for the accepter VPC to modify the accepter VPC peering options. To verify which VPCs are the accepter and the requester for a VPC peering connection, use the DescribeVpcPeeringConnections command.
", "ModifyVpcTenancy": "Modifies the instance tenancy attribute of the specified VPC. You can change the instance tenancy attribute of a VPC to default
only. You cannot change the instance tenancy attribute to dedicated
.
After you modify the tenancy of the VPC, any new instances that you launch into the VPC have a tenancy of default
, unless you specify otherwise during launch. The tenancy of any existing instances in the VPC is not affected.
For more information, see Dedicated Instances in the Amazon Elastic Compute Cloud User Guide.
", @@ -349,7 +351,7 @@ "ModifyVpnTunnelOptions": "Modifies the options for a VPN tunnel in an AWS Site-to-Site VPN connection. You can modify multiple options for a tunnel in a single request, but you can only modify one tunnel at a time. For more information, see Site-to-Site VPN Tunnel Options for Your Site-to-Site VPN Connection in the AWS Site-to-Site VPN User Guide.
", "MonitorInstances": "Enables detailed monitoring for a running instance. Otherwise, basic monitoring is enabled. For more information, see Monitoring Your Instances and Volumes in the Amazon Elastic Compute Cloud User Guide.
To disable detailed monitoring, see UnmonitorInstances.
", "MoveAddressToVpc": "Moves an Elastic IP address from the EC2-Classic platform to the EC2-VPC platform. The Elastic IP address must be allocated to your account for more than 24 hours, and it must not be associated with an instance. After the Elastic IP address is moved, it is no longer available for use in the EC2-Classic platform, unless you move it back using the RestoreAddressToClassic request. You cannot move an Elastic IP address that was originally allocated for use in the EC2-VPC platform to the EC2-Classic platform.
", - "ProvisionByoipCidr": "Provisions an address range for use with your AWS resources through bring your own IP addresses (BYOIP) and creates a corresponding address pool. After the address range is provisioned, it is ready to be advertised using AdvertiseByoipCidr.
AWS verifies that you own the address range and are authorized to advertise it. You must ensure that the address range is registered to you and that you created an RPKI ROA to authorize Amazon ASNs 16509 and 14618 to advertise the address range. For more information, see Bring Your Own IP Addresses (BYOIP) in the Amazon Elastic Compute Cloud User Guide.
Provisioning an address range is an asynchronous operation, so the call returns immediately, but the address range is not ready to use until its status changes from pending-provision to provisioned. To monitor the status of an address range, use DescribeByoipCidrs. To allocate an Elastic IP address from your address pool, use AllocateAddress with either the specific address from the address pool or the ID of the address pool.
", + "ProvisionByoipCidr": "Provisions an IPv4 or IPv6 address range for use with your AWS resources through bring your own IP addresses (BYOIP) and creates a corresponding address pool. After the address range is provisioned, it is ready to be advertised using AdvertiseByoipCidr.
AWS verifies that you own the address range and are authorized to advertise it. You must ensure that the address range is registered to you and that you created an RPKI ROA to authorize Amazon ASNs 16509 and 14618 to advertise the address range. For more information, see Bring Your Own IP Addresses (BYOIP) in the Amazon Elastic Compute Cloud User Guide.
Provisioning an address range is an asynchronous operation, so the call returns immediately, but the address range is not ready to use until its status changes from pending-provision to provisioned. To monitor the status of an address range, use DescribeByoipCidrs. To allocate an Elastic IP address from your IPv4 address pool, use AllocateAddress with either the specific address from the address pool or the ID of the address pool.
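A hedged sketch of provisioning an IPv6 range with the new PubliclyAdvertisable flag (v0.x assumptions; the 2001:db8 prefix is a documentation placeholder, and a real call would normally also pass a CidrAuthorizationContext):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	resp, err := svc.ProvisionByoipCidrRequest(&ec2.ProvisionByoipCidrInput{
		Cidr:                 aws.String("2001:db8:1234::/48"), // placeholder documentation prefix
		PubliclyAdvertisable: aws.Bool(false),                  // IPv6-only flag added in this release
		Description:          aws.String("example BYOIP range"),
	}).Send(context.Background())
	if err != nil {
		log.Fatalf("ProvisionByoipCidr: %v", err)
	}
	// The range stays in pending-provision until verification completes;
	// poll DescribeByoipCidrs to watch for the provisioned state.
	fmt.Println(resp.ByoipCidr.State)
}
```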
", "PurchaseHostReservation": "Purchase a reservation with configurations that match those of your Dedicated Host. You must have active Dedicated Hosts in your account before you purchase a reservation. This action results in the specified reservation being purchased and charged to your account.
", "PurchaseReservedInstancesOffering": "Purchases a Reserved Instance for use with your account. With Reserved Instances, you pay a lower hourly rate compared to On-Demand instance pricing.
Use DescribeReservedInstancesOfferings to get a list of Reserved Instance offerings that match your specifications. After you've purchased a Reserved Instance, you can check for your new Reserved Instance with DescribeReservedInstances.
To queue a purchase for a future date and time, specify a purchase time. If you do not specify a purchase time, the default is the current time.
For more information, see Reserved Instances and Reserved Instance Marketplace in the Amazon Elastic Compute Cloud User Guide.
", "PurchaseScheduledInstances": "Purchases the Scheduled Instances with the specified schedule.
Scheduled Instances enable you to purchase Amazon EC2 compute capacity by the hour for a one-year term. Before you can purchase a Scheduled Instance, you must call DescribeScheduledInstanceAvailability to check for available schedules and obtain a purchase token. After you purchase a Scheduled Instance, you must call RunScheduledInstances during each scheduled time period.
After you purchase a Scheduled Instance, you can't cancel, modify, or resell your purchase.
", @@ -389,6 +391,7 @@ "SearchTransitGatewayRoutes": "Searches for routes in the specified transit gateway route table.
", "SendDiagnosticInterrupt": "Sends a diagnostic interrupt to the specified Amazon EC2 instance to trigger a kernel panic (on Linux instances), or a blue screen/stop error (on Windows instances). For instances based on Intel and AMD processors, the interrupt is received as a non-maskable interrupt (NMI).
In general, the operating system crashes and reboots when a kernel panic or stop error is triggered. The operating system can also be configured to perform diagnostic tasks, such as generating a memory dump file, loading a secondary kernel, or obtaining a call trace.
Before sending a diagnostic interrupt to your instance, ensure that its operating system is configured to perform the required diagnostic tasks.
For more information about configuring your operating system to generate a crash dump when a kernel panic or stop error occurs, see Send a Diagnostic Interrupt (Linux instances) or Send a Diagnostic Interrupt (Windows instances).
", "StartInstances": "Starts an Amazon EBS-backed instance that you've previously stopped.
Instances that use Amazon EBS volumes as their root devices can be quickly stopped and started. When an instance is stopped, the compute resources are released and you are not billed for instance usage. However, your root partition Amazon EBS volume remains and continues to persist your data, and you are charged for Amazon EBS volume usage. You can restart your instance at any time. Every time you start your Windows instance, Amazon EC2 charges you for a full instance hour. If you stop and restart your Windows instance, a new instance hour begins and Amazon EC2 charges you for another full instance hour even if you are still within the same 60-minute period when it was stopped. Every time you start your Linux instance, Amazon EC2 charges a one-minute minimum for instance usage, and thereafter charges per second for instance usage.
Before stopping an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM.
Performing this operation on an instance that uses an instance store as its root device returns an error.
For more information, see Stopping Instances in the Amazon Elastic Compute Cloud User Guide.
", + "StartVpcEndpointServicePrivateDnsVerification": "Initiates the verification process to prove that the service provider owns the private DNS name domain for the endpoint service.
The service provider must successfully perform the verification before the consumer can use the name to access the service.
Before the service provider runs this command, they must add a record to the DNS server. For more information, see Adding a TXT Record to Your Domain's DNS Server in the Amazon VPC User Guide.
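Once the TXT record is in place, kicking off verification is a single call; a minimal sketch under the same v0.x assumptions (the service ID is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	// Start ownership verification for the endpoint service's private DNS name.
	_, err = svc.StartVpcEndpointServicePrivateDnsVerificationRequest(
		&ec2.StartVpcEndpointServicePrivateDnsVerificationInput{
			ServiceId: aws.String("vpce-svc-0123456789abcdef0"), // placeholder service ID
		}).Send(context.Background())
	if err != nil {
		log.Fatalf("StartVpcEndpointServicePrivateDnsVerification: %v", err)
	}
	fmt.Println("private DNS name verification started")
}
```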
", "StopInstances": "Stops an Amazon EBS-backed instance.
You can use the Stop action to hibernate an instance if the instance is enabled for hibernation and it meets the hibernation prerequisites. For more information, see Hibernate Your Instance in the Amazon Elastic Compute Cloud User Guide.
We don't charge usage for a stopped instance, or data transfer fees; however, your root partition Amazon EBS volume remains and continues to persist your data, and you are charged for Amazon EBS volume usage. Every time you start your Windows instance, Amazon EC2 charges you for a full instance hour. If you stop and restart your Windows instance, a new instance hour begins and Amazon EC2 charges you for another full instance hour even if you are still within the same 60-minute period when it was stopped. Every time you start your Linux instance, Amazon EC2 charges a one-minute minimum for instance usage, and thereafter charges per second for instance usage.
You can't start, stop, or hibernate Spot Instances, and you can't stop or hibernate instance store-backed instances. For information about using hibernation for Spot Instances, see Hibernating Interrupted Spot Instances in the Amazon Elastic Compute Cloud User Guide.
When you stop or hibernate an instance, we shut it down. You can restart your instance at any time. Before stopping or hibernating an instance, make sure it is in a state from which it can be restarted. Stopping an instance does not preserve data stored in RAM, but hibernating an instance does preserve data stored in RAM. If an instance cannot hibernate successfully, a normal shutdown occurs.
Stopping and hibernating an instance is different from rebooting or terminating it. For example, when you stop or hibernate an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, the root device and any other devices attached during the instance launch are automatically deleted. For more information about the differences between rebooting, stopping, hibernating, and terminating instances, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.
When you stop an instance, we attempt to shut it down forcibly after a short while. If your instance appears stuck in the stopping state after a period of time, there may be an issue with the underlying host computer. For more information, see Troubleshooting Stopping Your Instance in the Amazon Elastic Compute Cloud User Guide.
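A hedged sketch of a hibernate-capable stop, per the Hibernate flag described above (v0.x assumptions; the instance ID is a placeholder):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("load config: %v", err)
	}
	svc := ec2.New(cfg)

	resp, err := svc.StopInstancesRequest(&ec2.StopInstancesInput{
		InstanceIds: []string{"i-0123456789abcdef0"}, // placeholder instance ID
		Hibernate:   aws.Bool(true),                  // falls back to a normal stop if hibernation fails
	}).Send(context.Background())
	if err != nil {
		log.Fatalf("StopInstances: %v", err)
	}
	for _, change := range resp.StoppingInstances {
		fmt.Println(aws.StringValue(change.InstanceId),
			change.PreviousState.Name, "->", change.CurrentState.Name)
	}
}
```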
", "TerminateClientVpnConnections": "Terminates active Client VPN endpoint connections. This action can be used to terminate a specific client connection, or up to five connections established by a specific user.
", "TerminateInstances": "Shuts down the specified instances. This operation is idempotent; if you terminate an instance more than once, each call succeeds.
If you specify multiple instances and the request fails (for example, because of a single incorrect instance ID), none of the instances are terminated.
Terminated instances remain visible after termination (for approximately one hour).
By default, Amazon EC2 deletes all EBS volumes that were attached when the instance launched. Volumes attached after instance launch continue running.
You can stop, start, and terminate EBS-backed instances. You can only terminate instance store-backed instances. What happens to an instance differs if you stop it or terminate it. For example, when you stop an instance, the root device and any other devices attached to the instance persist. When you terminate an instance, any attached EBS volumes with the DeleteOnTermination block device mapping parameter set to true are automatically deleted. For more information about the differences between stopping and terminating instances, see Instance Lifecycle in the Amazon Elastic Compute Cloud User Guide.
For more information about troubleshooting, see Troubleshooting Terminating Your Instance in the Amazon Elastic Compute Cloud User Guide.
", @@ -397,7 +400,7 @@ "UnmonitorInstances": "Disables detailed monitoring for a running instance. For more information, see Monitoring Your Instances and Volumes in the Amazon Elastic Compute Cloud User Guide.
", "UpdateSecurityGroupRuleDescriptionsEgress": "[VPC only] Updates the description of an egress (outbound) security group rule. You can replace an existing description, or add a description to a rule that did not have one previously.
You specify the description as part of the IP permissions structure. You can remove a description for a security group rule by omitting the description parameter in the request.
", "UpdateSecurityGroupRuleDescriptionsIngress": "Updates the description of an ingress (inbound) security group rule. You can replace an existing description, or add a description to a rule that did not have one previously.
You specify the description as part of the IP permissions structure. You can remove a description for a security group rule by omitting the description parameter in the request.
", - "WithdrawByoipCidr": "Stops advertising an IPv4 address range that is provisioned as an address pool.
You can perform this operation at most once every 10 seconds, even if you specify different address ranges each time.
It can take a few minutes before traffic to the specified addresses stops routing to AWS because of BGP propagation delays.
" + "WithdrawByoipCidr": "Stops advertising an address range that is provisioned as an address pool.
You can perform this operation at most once every 10 seconds, even if you specify different address ranges each time.
It can take a few minutes before traffic to the specified addresses stops routing to AWS because of BGP propagation delays.
" }, "shapes": { "AcceptReservedInstancesExchangeQuoteRequest": { @@ -557,7 +560,8 @@ "base": null, "refs": { "AssociateAddressRequest$AllocationId": "[EC2-VPC] The allocation ID. This is required for EC2-VPC.
", - "CreateNatGatewayRequest$AllocationId": "The allocation ID of an Elastic IP address to associate with the NAT gateway. If the Elastic IP address is associated with another resource, you must first disassociate it.
" + "CreateNatGatewayRequest$AllocationId": "The allocation ID of an Elastic IP address to associate with the NAT gateway. If the Elastic IP address is associated with another resource, you must first disassociate it.
", + "ReleaseAddressRequest$AllocationId": "[EC2-VPC] The allocation ID. Required for EC2-VPC.
" } }, "AllocationIdList": { @@ -822,12 +826,6 @@ "refs": { } }, - "AttachmentId": { - "base": null, - "refs": { - "DetachNetworkInterfaceRequest$AttachmentId": "The ID of the attachment.
" - } - }, "AttachmentStatus": { "base": null, "refs": { @@ -1161,9 +1159,9 @@ "CreateVolumeRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
(Interface endpoint) Indicate whether to associate a private hosted zone with the specified VPC. The private hosted zone contains a record set for the default public DNS name for the service for the Region (for example, kinesis.us-east-1.amazonaws.com
) which resolves to the private IP addresses of the endpoint network interfaces in the VPC. This enables you to make requests to the default public DNS name for the service instead of the public DNS names that are automatically generated by the VPC endpoint service.
To use a private hosted zone, you must set the following VPC attributes to true
: enableDnsHostnames
and enableDnsSupport
. Use ModifyVpcAttribute to set the VPC attributes.
Default: true
(Interface endpoint) Indicates whether to associate a private hosted zone with the specified VPC. The private hosted zone contains a record set for the default public DNS name for the service for the Region (for example, kinesis.us-east-1.amazonaws.com
), which resolves to the private IP addresses of the endpoint network interfaces in the VPC. This enables you to make requests to the default public DNS name for the service instead of the public DNS names that are automatically generated by the VPC endpoint service.
To use a private hosted zone, you must set the following VPC attributes to true
: enableDnsHostnames
and enableDnsSupport
. Use ModifyVpcAttribute to set the VPC attributes.
Default: true
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Indicate whether requests from service consumers to create an endpoint to your service must be accepted. To accept a request, use AcceptVpcEndpointConnections.
", + "CreateVpcEndpointServiceConfigurationRequest$AcceptanceRequired": "Indicates whether requests from service consumers to create an endpoint to your service must be accepted. To accept a request, use AcceptVpcEndpointConnections.
", "CreateVpcPeeringConnectionRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Requests an Amazon-provided IPv6 CIDR block with a /56 prefix length for the VPC. You cannot specify the range of IP addresses, or the size of the CIDR block.
", "CreateVpcRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Indicates whether EC2 Fleet should replace unhealthy instances.
", "FpgaImage$Public": "Indicates whether the AFI is public.
", "FpgaImage$DataRetentionSupport": "Indicates whether data retention support is enabled for the AFI.
", + "GetAssociatedIpv6PoolCidrsRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Returns true
if the request succeeds; otherwise, it returns an error.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
(Gateway endpoint) Specify true
to reset the policy document to the default policy. The default policy allows full access to the service.
(Interface endpoint) Indicate whether a private hosted zone is associated with the VPC.
", + "ModifyVpcEndpointRequest$PrivateDnsEnabled": "(Interface endpoint) Indicates whether a private hosted zone is associated with the VPC.
", "ModifyVpcEndpointResult$Return": "Returns true
if the request succeeds; otherwise, it returns an error.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Indicate whether requests to create an endpoint to your service must be accepted.
", + "ModifyVpcEndpointServiceConfigurationRequest$RemovePrivateDnsName": "Removes the private DNS name of the endpoint service.
", + "ModifyVpcEndpointServiceConfigurationRequest$AcceptanceRequired": "Indicates whether requests to create an endpoint to your service must be accepted.
", "ModifyVpcEndpointServiceConfigurationResult$Return": "Returns true
if the request succeeds; otherwise, it returns an error.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Returns true
if the request succeeds; otherwise, it returns an error.
Indicates whether the network interface is deleted when the instance is terminated.
", "NetworkInterfaceAttachmentChanges$DeleteOnTermination": "Indicates whether the network interface is deleted when the instance is terminated.
", "NetworkInterfacePrivateIpAddress$Primary": "Indicates whether this IPv4 address is the primary private IPv4 address of the network interface.
", - "OnDemandOptions$SingleInstanceType": "Indicates that the fleet uses a single instance type to launch all On-Demand Instances in the fleet.
", - "OnDemandOptions$SingleAvailabilityZone": "Indicates that the fleet launches all On-Demand Instances into a single Availability Zone.
", - "OnDemandOptionsRequest$SingleInstanceType": "Indicates that the fleet uses a single instance type to launch all On-Demand Instances in the fleet.
", - "OnDemandOptionsRequest$SingleAvailabilityZone": "Indicates that the fleet launches all On-Demand Instances into a single Availability Zone.
", + "OnDemandOptions$SingleInstanceType": "Indicates that the fleet uses a single instance type to launch all On-Demand Instances in the fleet. Supported only for fleets of type instant
.
Indicates that the fleet launches all On-Demand Instances into a single Availability Zone. Supported only for fleets of type instant
.
Indicates that the fleet uses a single instance type to launch all On-Demand Instances in the fleet. Supported only for fleets of type instant
.
Indicates that the fleet launches all On-Demand Instances into a single Availability Zone. Supported only for fleets of type instant
.
If true, the public DNS hostnames of instances in the specified VPC resolve to private IP addresses when queried from instances in the peer VPC.
", "PeeringConnectionOptions$AllowEgressFromLocalClassicLinkToRemoteVpc": "If true, enables outbound communication from an EC2-Classic instance that's linked to a local VPC using ClassicLink to instances in a peer VPC.
", "PeeringConnectionOptions$AllowEgressFromLocalVpcToRemoteClassicLink": "If true, enables outbound communication from instances in a local VPC to an EC2-Classic instance that's linked to a peer VPC using ClassicLink.
", @@ -1496,6 +1497,7 @@ "PeeringConnectionOptionsRequest$AllowEgressFromLocalVpcToRemoteClassicLink": "If true, enables outbound communication from instances in a local VPC to an EC2-Classic instance that's linked to a peer VPC using ClassicLink.
", "PriceSchedule$Active": "The current price schedule, as determined by the term remaining for the Reserved Instance in the listing.
A specific price schedule is always in effect, but only one price schedule can be active at any time. Take, for example, a Reserved Instance listing that has five months remaining in its term. When you specify price schedules for five months and two months, this means that schedule 1, covering the first three months of the remaining term, will be active during months 5, 4, and 3. Then schedule 2, covering the last two months of the term, will be active for months 2 and 1.
", "PrivateIpAddressSpecification$Primary": "Indicates whether the private IPv4 address is the primary private IPv4 address. Only one IPv4 address can be designated as primary.
", + "ProvisionByoipCidrRequest$PubliclyAdvertisable": "(IPv6 only) Indicate whether the address range will be publicly advertised to the internet.
Default: true
", "ProvisionByoipCidrRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Indicates whether there are additional routes available.
", "SendDiagnosticInterruptRequest$DryRun": "Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Indicates whether requests from other AWS accounts to create an endpoint to the service must first be accepted.
", - "ServiceConfiguration$ManagesVpcEndpoints": "Indicates whether the service manages it's VPC endpoints. Management of the service VPC endpoints using the VPC endpoint API is restricted.
", + "ServiceConfiguration$ManagesVpcEndpoints": "Indicates whether the service manages its VPC endpoints. Management of the service VPC endpoints using the VPC endpoint API is restricted.
", "ServiceDetail$VpcEndpointPolicySupported": "Indicates whether the service supports endpoint policies.
", "ServiceDetail$AcceptanceRequired": "Indicates whether VPC endpoint connection requests to the service must be accepted by the service owner.
", - "ServiceDetail$ManagesVpcEndpoints": "Indicates whether the service manages it's VPC endpoints. Management of the service VPC endpoints using the VPC endpoint API is restricted.
", + "ServiceDetail$ManagesVpcEndpoints": "Indicates whether the service manages its VPC endpoints. Management of the service VPC endpoints using the VPC endpoint API is restricted.
", "Snapshot$Encrypted": "Indicates whether the snapshot is encrypted.
", "SnapshotInfo$Encrypted": "Indicates whether the snapshot is encrypted.
", "SnapshotTaskDetail$Encrypted": "Indicates whether the snapshot is encrypted.
", @@ -1571,11 +1573,13 @@ "SpotFleetMonitoring$Enabled": "Enables monitoring for the instance.
Default: false
Indicates whether running Spot Instances are terminated when the Spot Fleet request expires.
", "SpotFleetRequestConfigData$ReplaceUnhealthyInstances": "Indicates whether Spot Fleet should replace unhealthy instances.
", - "SpotOptions$SingleInstanceType": "Indicates that the fleet uses a single instance type to launch all Spot Instances in the fleet.
", - "SpotOptions$SingleAvailabilityZone": "Indicates that the fleet launches all Spot Instances into a single Availability Zone.
", - "SpotOptionsRequest$SingleInstanceType": "Indicates that the fleet uses a single instance type to launch all Spot Instances in the fleet.
", - "SpotOptionsRequest$SingleAvailabilityZone": "Indicates that the fleet launches all Spot Instances into a single Availability Zone.
", + "SpotOptions$SingleInstanceType": "Indicates that the fleet uses a single instance type to launch all Spot Instances in the fleet. Supported only for fleets of type instant
.
Indicates that the fleet launches all Spot Instances into a single Availability Zone. Supported only for fleets of type instant
.
Indicates that the fleet uses a single instance type to launch all Spot Instances in the fleet. Supported only for fleets of type instant
.
Indicates that the fleet launches all Spot Instances into a single Availability Zone. Supported only for fleets of type instant
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Returns true
if the request succeeds; otherwise, it returns an error.
Hibernates the instance if the instance was enabled for hibernation at launch. If the instance cannot hibernate successfully, a normal shutdown occurs. For more information, see Hibernate Your Instance in the Amazon Elastic Compute Cloud User Guide.
Default: false
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
Forces the instances to stop. The instances do not have an opportunity to flush file system caches or file system metadata. If you use this option, you must perform file system check and repair procedures. This option is not recommended for Windows instances.
Default: false
Checks whether you have the required permissions for the action, without actually making the request, and provides an error response. If you have the required permissions, the error response is DryRunOperation
. Otherwise, it is UnauthorizedOperation
.
The ID of the bundle task.
" + } + }, "BundleIdStringList": { "base": null, "refs": { @@ -1664,7 +1674,7 @@ "AdvertiseByoipCidrResult$ByoipCidr": "Information about the address range.
", "ByoipCidrSet$member": null, "DeprovisionByoipCidrResult$ByoipCidr": "Information about the address range.
", - "ProvisionByoipCidrResult$ByoipCidr": "Information about the address pool.
", + "ProvisionByoipCidrResult$ByoipCidr": "Information about the address range.
", "WithdrawByoipCidrResult$ByoipCidr": "Information about the address pool.
" } }, @@ -1832,6 +1842,18 @@ "CreateCapacityReservationRequest$InstancePlatform": "The type of operating system for which to reserve capacity.
" } }, + "CapacityReservationOptions": { + "base": "Describes the strategy for using unused Capacity Reservations for fulfilling On-Demand capacity.
This strategy can only be used if the EC2 Fleet is of type instant
.
For more information about Capacity Reservations, see On-Demand Capacity Reservations in the Amazon Elastic Compute Cloud User Guide. For examples of using Capacity Reservations in an EC2 Fleet, see EC2 Fleet Example Configurations in the Amazon Elastic Compute Cloud User Guide.
", + "refs": { + "OnDemandOptions$CapacityReservationOptions": "The strategy for using unused Capacity Reservations for fulfilling On-Demand capacity. Supported only for fleets of type instant
.
Describes the strategy for using unused Capacity Reservations for fulfilling On-Demand capacity.
This strategy can only be used if the EC2 Fleet is of type instant
.
For more information about Capacity Reservations, see On-Demand Capacity Reservations in the Amazon Elastic Compute Cloud User Guide. For examples of using Capacity Reservations in an EC2 Fleet, see EC2 Fleet Example Configurations in the Amazon Elastic Compute Cloud User Guide.
", + "refs": { + "OnDemandOptionsRequest$CapacityReservationOptions": "The strategy for using unused Capacity Reservations for fulfilling On-Demand capacity. Supported only for fleets of type instant
.
Indicates whether Dedicated Hosts are supported on the instance type.
" } }, + "DedicatedHostId": { + "base": null, + "refs": { + "ModifyInstancePlacementRequest$HostId": "The ID of the Dedicated Host with which to associate the instance.
", + "RequestHostIdSet$member": null + } + }, "DefaultRouteTableAssociationValue": { "base": null, "refs": { @@ -3060,6 +3089,12 @@ "TargetCapacitySpecificationRequest$DefaultTargetCapacityType": "The default TotalTargetCapacity
, which is either Spot
or On-Demand
.
The ID of the DHCP options set, or default
to associate no DHCP options with the VPC.
The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.
" + "DescribeNetworkInterfacesRequest$MaxResults": "The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results. You cannot specify this parameter and the network interface IDs parameter in the same request.
" } }, "DescribeNetworkInterfacesRequest": { @@ -4973,7 +5018,6 @@ "DhcpOptionsId": { "base": null, "refs": { - "AssociateDhcpOptionsRequest$DhcpOptionsId": "The ID of the DHCP options set, or default
to associate no DHCP options with the VPC.
The ID of the DHCP options set.
" } }, @@ -5233,8 +5277,8 @@ "DiskSize": { "base": null, "refs": { - "DiskInfo$SizeInGB": "The size of the disk in GiB.
", - "InstanceStorageInfo$TotalSizeInGB": "The total size of the disks, in GiB.
" + "DiskInfo$SizeInGB": "The size of the disk in GB.
", + "InstanceStorageInfo$TotalSizeInGB": "The total size of the disks, in GB.
" } }, "DiskType": { @@ -5256,6 +5300,13 @@ "VpcEndpointConnection$DnsEntries": "The DNS entries for the VPC endpoint.
" } }, + "DnsNameState": { + "base": null, + "refs": { + "PrivateDnsNameConfiguration$State": "The verification state of the VPC endpoint service.
>Consumers of the endpoint service can use the private name only when the state is verified
.
The verification state of the VPC endpoint service.
Consumers of the endpoint service cannot use the private name when the state is not verified
.
Information about the DNS server to be used.
", "refs": { @@ -5469,7 +5520,7 @@ "ElasticInferenceAcceleratorCount": { "base": null, "refs": { - "ElasticInferenceAccelerator$Count": "The number of elastic inference accelerators of given type to be attached to the instance. Only positive values allowed. If not specified defaults to 1.
" + "ElasticInferenceAccelerator$Count": "The number of elastic inference accelerators to attach to the instance.
Default: 1
" } }, "ElasticInferenceAccelerators": { @@ -5478,6 +5529,12 @@ "RunInstancesRequest$ElasticInferenceAccelerators": "An elastic inference accelerator to associate with the instance. Elastic inference accelerators are a resource you can attach to your Amazon EC2 instances to accelerate your Deep Learning (DL) inference workloads.
" } }, + "ElasticIpAssociationId": { + "base": null, + "refs": { + "DisassociateAddressRequest$AssociationId": "[EC2-VPC] The association ID. Required for EC2-VPC.
" + } + }, "EnaSupport": { "base": null, "refs": { @@ -5783,21 +5840,23 @@ "FilterList": { "base": null, "refs": { - "DescribeAddressesRequest$Filters": "One or more filters. Filter names and values are case-sensitive.
allocation-id
- [EC2-VPC] The allocation ID for the address.
association-id
- [EC2-VPC] The association ID for the address.
domain
- Indicates whether the address is for use in EC2-Classic (standard
) or in a VPC (vpc
).
instance-id
- The ID of the instance the address is associated with, if any.
network-interface-id
- [EC2-VPC] The ID of the network interface that the address is associated with, if any.
network-interface-owner-id
- The AWS account ID of the owner.
private-ip-address
- [EC2-VPC] The private IP address associated with the Elastic IP address.
public-ip
- The Elastic IP address.
tag
:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner
and the value TeamA
, specify tag:Owner
for the filter name and TeamA
for the filter value.
tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters. Filter names and values are case-sensitive.
allocation-id
- [EC2-VPC] The allocation ID for the address.
association-id
- [EC2-VPC] The association ID for the address.
domain
- Indicates whether the address is for use in EC2-Classic (standard
) or in a VPC (vpc
).
instance-id
- The ID of the instance the address is associated with, if any.
network-border-group
- The location from where the IP address is advertised.
network-interface-id
- [EC2-VPC] The ID of the network interface that the address is associated with, if any.
network-interface-owner-id
- The AWS account ID of the owner.
private-ip-address
- [EC2-VPC] The private IP address associated with the Elastic IP address.
public-ip
- The Elastic IP address.
tag
:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner
and the value TeamA
, specify tag:Owner
for the filter name and TeamA
for the filter value.
tag-key
- The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters.
group-name
- For Availability Zones, use the Region name. For Local Zones, use the name of the group associated with the Local Zone (for example, us-west-2-lax-1
).
message
- The Availability Zone or Local Zone message.
opt-in-status
- The opt in status (opted-in
, and not-opted-in
| opt-in-not-required
).
region-name
- The name of the Region for the Availability Zone or Local Zone (for example, us-east-1
).
state
- The state of the Availability Zone or Local Zone (available
| information
| impaired
| unavailable
).
zone-id
- The ID of the Availability Zone (for example, use1-az1
) or the Local Zone (for example, use usw2-lax1-az1
).
zone-name
- The name of the Availability Zone (for example, us-east-1a
) or the Local Zone (for example, use us-west-2-lax-1a
).
The filters.
bundle-id
- The ID of the bundle task.
error-code
- If the task failed, the error code returned.
error-message
- If the task failed, the error message returned.
instance-id
- The ID of the instance.
progress
- The level of task completion, as a percentage (for example, 20%).
s3-bucket
- The Amazon S3 bucket to store the AMI.
s3-prefix
- The beginning of the AMI name.
start-time
- The time the task started (for example, 2013-09-15T17:15:20.000Z).
state
- The state of the task (pending
| waiting-for-shutdown
| bundling
| storing
| cancelling
| complete
| failed
).
update-time
- The time of the most recent update for the task.
One or more filters.
", "DescribeClassicLinkInstancesRequest$Filters": "One or more filters.
group-id - The ID of a VPC security group that's associated with the instance.
instance-id - The ID of the instance.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
vpc-id - The ID of the VPC to which the instance is linked.
vpc-id - The ID of the VPC that the instance is linked to.
One or more filters. Filter names and values are case-sensitive.
", - "DescribeClientVpnConnectionsRequest$Filters": "One or more filters. Filter names and values are case-sensitive.
", - "DescribeClientVpnEndpointsRequest$Filters": "One or more filters. Filter names and values are case-sensitive.
", - "DescribeClientVpnRoutesRequest$Filters": "One or more filters. Filter names and values are case-sensitive.
", - "DescribeClientVpnTargetNetworksRequest$Filters": "One or more filters. Filter names and values are case-sensitive.
", + "DescribeClientVpnAuthorizationRulesRequest$Filters": "One or more filters. Filter names and values are case-sensitive.
description - The description of the authorization rule.
destination-cidr - The CIDR of the network to which the authorization rule applies.
group-id - The ID of the Active Directory group to which the authorization rule grants access.
One or more filters. Filter names and values are case-sensitive.
connection-id - The ID of the connection.
username - For Active Directory client authentication, the user name of the client who established the client connection.
One or more filters. Filter names and values are case-sensitive.
endpoint-id - The ID of the Client VPN endpoint.
transport-protocol - The transport protocol (tcp | udp).
One or more filters. Filter names and values are case-sensitive.
destination-cidr - The CIDR of the route destination.
origin - How the route was associated with the Client VPN endpoint (associate | add-route).
target-subnet - The ID of the subnet through which traffic is routed.
One or more filters. Filter names and values are case-sensitive.
association-id - The ID of the association.
target-network-id - The ID of the subnet specified as the target network.
vpc-id - The ID of the VPC in which the target network is located.
The filters. The following are the possible values:
coip-pool.pool-id
coip-pool.local-gateway-route-table-id
One or more filters.
bgp-asn - The customer gateway's Border Gateway Protocol (BGP) Autonomous System Number (ASN).
customer-gateway-id - The ID of the customer gateway.
ip-address - The IP address of the customer gateway's Internet-routable external interface.
state - The state of the customer gateway (pending | available | deleting | deleted).
type - The type of customer gateway. Currently, the only supported type is ipsec.1.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
dhcp-options-id - The ID of a DHCP options set.
key - The key for one of the options (for example, domain-name).
value - The value for one of the options.
owner-id - The ID of the AWS account that owns the DHCP options set.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters.
availability-zone - The Availability Zone in which the Elastic Graphics accelerator resides.
elastic-gpu-health - The status of the Elastic Graphics accelerator (OK | IMPAIRED).
elastic-gpu-state - The state of the Elastic Graphics accelerator (ATTACHED).
elastic-gpu-type - The type of Elastic Graphics accelerator; for example, eg1.medium.
instance-id - The ID of the instance to which the Elastic Graphics accelerator is associated.
Filter tasks using the task-state filter and one of the following values: active, completed, deleting, or deleted.
the filters for the export tasks.
", "DescribeFastSnapshotRestoresRequest$Filters": "The filters. The possible values are:
availability-zone: The Availability Zone of the snapshot.
owner-id: The ID of the AWS account that owns the snapshot.
snapshot-id: The ID of the snapshot.
state: The state of fast snapshot restores for the snapshot (enabling | optimizing | enabled | disabling | disabled).
The filters.
instance-type - The instance type.
The filters.
activity-status - The progress of the EC2 Fleet (error | pending-fulfillment | pending-termination | fulfilled).
excess-capacity-termination-policy - Indicates whether to terminate running instances if the target capacity is decreased below the current EC2 Fleet size (true | false).
fleet-state - The state of the EC2 Fleet (submitted | active | deleted | failed | deleted-running | deleted-terminating | modifying).
replace-unhealthy-instances - Indicates whether EC2 Fleet should replace unhealthy instances (true | false).
type - The type of request (instant | request | maintain).
The filters.
instance-id - The ID of the instance.
The filters.
availability-zone - The Availability Zone of the instance.
event.code - The code for the scheduled event (instance-reboot | system-reboot | system-maintenance | instance-retirement | instance-stop).
event.description - A description of the event.
event.instance-event-id - The ID of the event whose date and time you are modifying.
event.not-after - The latest end time for the scheduled event (for example, 2014-09-15T17:15:20.000Z).
event.not-before - The earliest start time for the scheduled event (for example, 2014-09-15T17:15:20.000Z).
event.not-before-deadline - The deadline for starting the event (for example, 2014-09-15T17:15:20.000Z).
instance-state-code - The code for the instance state, as a 16-bit unsigned integer. The high byte is used for internal purposes and should be ignored. The low byte is set based on the state represented. The valid values are 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
instance-state-name - The state of the instance (pending | running | shutting-down | terminated | stopping | stopped).
instance-status.reachability - Filters on instance status where the name is reachability (passed | failed | initializing | insufficient-data).
instance-status.status - The status of the instance (ok | impaired | initializing | insufficient-data | not-applicable).
system-status.reachability - Filters on system status where the name is reachability (passed | failed | initializing | insufficient-data).
system-status.status - The system status of the instance (ok | impaired | initializing | insufficient-data | not-applicable).
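A sketch of querying these status filters, again assuming the v0.x ec2.Client from the first example; IncludeAllInstances is the input flag for including non-running instances:

```go
// rebootScheduled lists status entries for instances with a scheduled reboot,
// including stopped instances (IncludeAllInstances defaults to false).
func rebootScheduled(ctx context.Context, svc *ec2.Client) ([]ec2.InstanceStatus, error) {
	req := svc.DescribeInstanceStatusRequest(&ec2.DescribeInstanceStatusInput{
		Filters: []ec2.Filter{
			{Name: aws.String("event.code"), Values: []string{"instance-reboot", "system-reboot"}},
		},
		IncludeAllInstances: aws.Bool(true),
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.InstanceStatuses, nil
}
```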
One or more filters. Filter names and values are case-sensitive.
location - This depends on the location type. For example, if the location type is region (default), the location is the Region code (for example, us-east-2).
instance-type - The instance type.
One or more filters. Filter names and values are case-sensitive.
auto-recovery-supported - Indicates whether auto recovery is supported. (true | false)
bare-metal - Indicates whether it is a bare metal instance type. (true | false)
burstable-performance-supported - Indicates whether it is a burstable performance instance type. (true | false)
current-generation - Indicates whether this instance type is the latest generation instance type of an instance family. (true | false)
ebs-info.ebs-optimized-support - Indicates whether the instance type is EBS-optimized. (true | false)
ebs-info.encryption-support - Indicates whether EBS encryption is supported. (true | false)
free-tier-eligible - Indicates whether the instance type is eligible to use in the free tier. (true | false)
hibernation-supported - Indicates whether On-Demand hibernation is supported. (true | false)
hypervisor - The hypervisor used. (nitro | xen)
instance-storage-info.disk.count - The number of local disks.
instance-storage-info.disk.size-in-gb - The storage size of each instance storage disk, in GB.
instance-storage-info.disk.type - The storage technology for the local instance storage disks. (hdd | ssd)
instance-storage-info.total-size-in-gb - The total amount of storage available from all local instance storage, in GB.
instance-storage-supported - Indicates whether the instance type has local instance storage. (true | false)
memory-info.size-in-mib - The memory size.
network-info.ena-support - Indicates whether Elastic Network Adapter (ENA) is supported or required. (required | supported | unsupported)
network-info.ipv4-addresses-per-interface - The maximum number of private IPv4 addresses per network interface.
network-info.ipv6-addresses-per-interface - The maximum number of private IPv6 addresses per network interface.
network-info.ipv6-supported - Indicates whether the instance type supports IPv6. (true | false)
network-info.maximum-network-interfaces - The maximum number of network interfaces per instance.
network-info.network-performance - Describes the network performance.
processor-info.sustained-clock-speed-in-ghz - The CPU clock speed, in GHz.
vcpu-info.default-cores - The default number of cores for the instance type.
vcpu-info.default-threads-per-core - The default number of threads per core for the instance type.
vcpu-info.default-vcpus - The default number of vCPUs for the instance type.
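A sketch combining several of the boolean- and enum-valued filters above; boolean-valued filters take the strings "true"/"false". The v0.x client and imports from the first example are assumed, and the helper name is illustrative:

```go
// nitroSSDTypes lists current-generation Nitro instance types that have
// local SSD instance storage.
func nitroSSDTypes(ctx context.Context, svc *ec2.Client) ([]ec2.InstanceTypeInfo, error) {
	req := svc.DescribeInstanceTypesRequest(&ec2.DescribeInstanceTypesInput{
		Filters: []ec2.Filter{
			{Name: aws.String("hypervisor"), Values: []string{"nitro"}},
			{Name: aws.String("current-generation"), Values: []string{"true"}},
			{Name: aws.String("instance-storage-info.disk.type"), Values: []string{"ssd"}},
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.InstanceTypes, nil
}
```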
The filters.
affinity - The affinity setting for an instance running on a Dedicated Host (default | host).
architecture - The instance architecture (i386 | x86_64 | arm64).
availability-zone - The Availability Zone of the instance.
block-device-mapping.attach-time - The attach time for an EBS volume mapped to the instance, for example, 2010-09-15T17:15:20.000Z.
block-device-mapping.delete-on-termination - A Boolean that indicates whether the EBS volume is deleted on instance termination.
block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
block-device-mapping.status - The status for the EBS volume (attaching | attached | detaching | detached).
block-device-mapping.volume-id - The volume ID of the EBS volume.
client-token - The idempotency token you provided when you launched the instance.
dns-name - The public DNS name of the instance.
group-id - The ID of the security group for the instance. EC2-Classic only.
group-name - The name of the security group for the instance. EC2-Classic only.
hibernation-options.configured - A Boolean that indicates whether the instance is enabled for hibernation. A value of true means that the instance is enabled for hibernation.
host-id - The ID of the Dedicated Host on which the instance is running, if applicable.
hypervisor - The hypervisor type of the instance (ovm | xen).
iam-instance-profile.arn - The instance profile associated with the instance. Specified as an ARN.
image-id - The ID of the image used to launch the instance.
instance-id - The ID of the instance.
instance-lifecycle - Indicates whether this is a Spot Instance or a Scheduled Instance (spot | scheduled).
instance-state-code - The state of the instance, as a 16-bit unsigned integer. The high byte is used for internal purposes and should be ignored. The low byte is set based on the state represented. The valid values are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
instance-state-name - The state of the instance (pending | running | shutting-down | terminated | stopping | stopped).
instance-type - The type of instance (for example, t2.micro).
instance.group-id - The ID of the security group for the instance.
instance.group-name - The name of the security group for the instance.
ip-address - The public IPv4 address of the instance.
kernel-id - The kernel ID.
key-name - The name of the key pair used when the instance was launched.
launch-index - When launching multiple instances, this is the index for the instance in the launch group (for example, 0, 1, 2, and so on).
launch-time - The time when the instance was launched.
metadata-options.http-tokens - The metadata request authorization state (optional | required)
metadata-options.http-put-response-hop-limit - The http metadata request put response hop limit (integer, possible values 1 to 64)
metadata-options.http-endpoint - Enable or disable metadata access on http endpoint (enabled | disabled)
monitoring-state - Indicates whether detailed monitoring is enabled (disabled | enabled).
network-interface.addresses.private-ip-address - The private IPv4 address associated with the network interface.
network-interface.addresses.primary - Specifies whether the IPv4 address of the network interface is the primary private IPv4 address.
network-interface.addresses.association.public-ip - The ID of the association of an Elastic IP address (IPv4) with a network interface.
network-interface.addresses.association.ip-owner-id - The owner ID of the private IPv4 address associated with the network interface.
network-interface.association.public-ip - The address of the Elastic IP address (IPv4) bound to the network interface.
network-interface.association.ip-owner-id - The owner of the Elastic IP address (IPv4) associated with the network interface.
network-interface.association.allocation-id - The allocation ID returned when you allocated the Elastic IP address (IPv4) for your network interface.
network-interface.association.association-id - The association ID returned when the network interface was associated with an IPv4 address.
network-interface.attachment.attachment-id - The ID of the interface attachment.
network-interface.attachment.instance-id - The ID of the instance to which the network interface is attached.
network-interface.attachment.instance-owner-id - The owner ID of the instance to which the network interface is attached.
network-interface.attachment.device-index - The device index to which the network interface is attached.
network-interface.attachment.status - The status of the attachment (attaching | attached | detaching | detached).
network-interface.attachment.attach-time - The time that the network interface was attached to an instance.
network-interface.attachment.delete-on-termination - Specifies whether the attachment is deleted when an instance is terminated.
network-interface.availability-zone - The Availability Zone for the network interface.
network-interface.description - The description of the network interface.
network-interface.group-id - The ID of a security group associated with the network interface.
network-interface.group-name - The name of a security group associated with the network interface.
network-interface.ipv6-addresses.ipv6-address - The IPv6 address associated with the network interface.
network-interface.mac-address - The MAC address of the network interface.
network-interface.network-interface-id - The ID of the network interface.
network-interface.owner-id - The ID of the owner of the network interface.
network-interface.private-dns-name - The private DNS name of the network interface.
network-interface.requester-id - The requester ID for the network interface.
network-interface.requester-managed - Indicates whether the network interface is being managed by AWS.
network-interface.status - The status of the network interface (available | in-use).
network-interface.source-dest-check - Whether the network interface performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the network interface to perform network address translation (NAT) in your VPC.
network-interface.subnet-id - The ID of the subnet for the network interface.
network-interface.vpc-id - The ID of the VPC for the network interface.
owner-id - The AWS account ID of the instance owner.
placement-group-name - The name of the placement group for the instance.
placement-partition-number - The partition in which the instance is located.
platform - The platform. To list only Windows instances, use windows.
private-dns-name - The private IPv4 DNS name of the instance.
private-ip-address - The private IPv4 address of the instance.
product-code - The product code associated with the AMI used to launch the instance.
product-code.type - The type of product code (devpay | marketplace).
ramdisk-id - The RAM disk ID.
reason - The reason for the current state of the instance (for example, shows \"User Initiated [date]\" when you stop or terminate the instance). Similar to the state-reason-code filter.
requester-id - The ID of the entity that launched the instance on your behalf (for example, AWS Management Console, Auto Scaling, and so on).
reservation-id - The ID of the instance's reservation. A reservation ID is created any time you launch an instance. A reservation ID has a one-to-one relationship with an instance launch request, but can be associated with more than one instance if you launch multiple instances using the same launch request. For example, if you launch one instance, you get one reservation ID. If you launch ten instances using the same launch request, you also get one reservation ID.
root-device-name - The device name of the root device volume (for example, /dev/sda1).
root-device-type - The type of the root device volume (ebs | instance-store).
source-dest-check - Indicates whether the instance performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the instance to perform network address translation (NAT) in your VPC.
spot-instance-request-id - The ID of the Spot Instance request.
state-reason-code - The reason code for the state change.
state-reason-message - A message that describes the state change.
subnet-id - The ID of the subnet for the instance.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.
tenancy - The tenancy of an instance (dedicated | default | host).
virtualization-type - The virtualization type of the instance (paravirtual | hvm).
vpc-id - The ID of the VPC that the instance is running in.
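A sketch that combines state, type, and tag filters and walks the reservation grouping described under reservation-id above (v0.x client and imports from the first example assumed; the filter values are illustrative):

```go
// runningTeamInstances returns the IDs of running t2.micro instances
// tagged Owner=TeamA. DescribeInstances groups results by reservation.
func runningTeamInstances(ctx context.Context, svc *ec2.Client) ([]string, error) {
	req := svc.DescribeInstancesRequest(&ec2.DescribeInstancesInput{
		Filters: []ec2.Filter{
			{Name: aws.String("instance-state-name"), Values: []string{"running"}},
			{Name: aws.String("instance-type"), Values: []string{"t2.micro"}},
			{Name: aws.String("tag:Owner"), Values: []string{"TeamA"}},
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, r := range resp.Reservations {
		for _, inst := range r.Instances {
			ids = append(ids, aws.StringValue(inst.InstanceId))
		}
	}
	return ids, nil
}
```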
One or more filters. Filter names and values are case-sensitive.
auto-recovery-supported - Indicates whether auto recovery is supported. (true | false)
bare-metal - Indicates whether it is a bare metal instance type. (true | false)
burstable-performance-supported - Indicates whether it is a burstable performance instance type. (true | false)
current-generation - Indicates whether this instance type is the latest generation instance type of an instance family. (true | false)
ebs-info.ebs-optimized-support - Indicates whether the instance type is EBS-optimized. (true | false)
ebs-info.encryption-support - Indicates whether EBS encryption is supported. (true | false)
free-tier-eligible - Indicates whether the instance type is eligible to use in the free tier. (true | false)
hibernation-supported - Indicates whether On-Demand hibernation is supported. (true | false)
hypervisor - The hypervisor used. (nitro | xen)
instance-storage-info.disk.count - The number of local disks.
instance-storage-info.disk.size-in-gb - The storage size of each instance storage disk, in GB.
instance-storage-info.disk.type - The storage technology for the local instance storage disks. (hdd | ssd)
instance-storage-info.total-size-in-gb - The total amount of storage available from all local instance storage, in GB.
instance-storage-supported - Indicates whether the instance type has local instance storage. (true | false)
memory-info.size-in-mib - The memory size.
network-info.ena-support - Indicates whether Elastic Network Adapter (ENA) is supported or required. (required | supported | unsupported)
network-info.ipv4-addresses-per-interface - The maximum number of private IPv4 addresses per network interface.
network-info.ipv6-addresses-per-interface - The maximum number of private IPv6 addresses per network interface.
network-info.ipv6-supported - Indicates whether the instance type supports IPv6. (true | false)
network-info.maximum-network-interfaces - The maximum number of network interfaces per instance.
network-info.network-performance - Describes the network performance.
processor-info.sustained-clock-speed-in-ghz - The CPU clock speed, in GHz.
vcpu-info.default-cores - The default number of cores for the instance type.
vcpu-info.default-threads-per-core - The default number of threads per core for the instance type.
vcpu-info.default-vcpus - The default number of vCPUs for the instance type.
The filters.
affinity - The affinity setting for an instance running on a Dedicated Host (default | host).
architecture - The instance architecture (i386 | x86_64 | arm64).
availability-zone - The Availability Zone of the instance.
block-device-mapping.attach-time - The attach time for an EBS volume mapped to the instance, for example, 2010-09-15T17:15:20.000Z.
block-device-mapping.delete-on-termination - A Boolean that indicates whether the EBS volume is deleted on instance termination.
block-device-mapping.device-name - The device name specified in the block device mapping (for example, /dev/sdh or xvdh).
block-device-mapping.status - The status for the EBS volume (attaching | attached | detaching | detached).
block-device-mapping.volume-id - The volume ID of the EBS volume.
client-token - The idempotency token you provided when you launched the instance.
dns-name - The public DNS name of the instance.
group-id - The ID of the security group for the instance. EC2-Classic only.
group-name - The name of the security group for the instance. EC2-Classic only.
hibernation-options.configured - A Boolean that indicates whether the instance is enabled for hibernation. A value of true means that the instance is enabled for hibernation.
host-id - The ID of the Dedicated Host on which the instance is running, if applicable.
hypervisor - The hypervisor type of the instance (ovm | xen).
iam-instance-profile.arn - The instance profile associated with the instance. Specified as an ARN.
image-id - The ID of the image used to launch the instance.
instance-id - The ID of the instance.
instance-lifecycle - Indicates whether this is a Spot Instance or a Scheduled Instance (spot | scheduled).
instance-state-code - The state of the instance, as a 16-bit unsigned integer. The high byte is used for internal purposes and should be ignored. The low byte is set based on the state represented. The valid values are: 0 (pending), 16 (running), 32 (shutting-down), 48 (terminated), 64 (stopping), and 80 (stopped).
instance-state-name - The state of the instance (pending | running | shutting-down | terminated | stopping | stopped).
instance-type - The type of instance (for example, t2.micro).
instance.group-id - The ID of the security group for the instance.
instance.group-name - The name of the security group for the instance.
ip-address - The public IPv4 address of the instance.
kernel-id - The kernel ID.
key-name - The name of the key pair used when the instance was launched.
launch-index - When launching multiple instances, this is the index for the instance in the launch group (for example, 0, 1, 2, and so on).
launch-time - The time when the instance was launched.
metadata-options.http-tokens - The metadata request authorization state (optional | required)
metadata-options.http-put-response-hop-limit - The http metadata request put response hop limit (integer, possible values 1 to 64)
metadata-options.http-endpoint - Enable or disable metadata access on http endpoint (enabled | disabled)
monitoring-state - Indicates whether detailed monitoring is enabled (disabled | enabled).
network-interface.addresses.private-ip-address - The private IPv4 address associated with the network interface.
network-interface.addresses.primary - Specifies whether the IPv4 address of the network interface is the primary private IPv4 address.
network-interface.addresses.association.public-ip - The ID of the association of an Elastic IP address (IPv4) with a network interface.
network-interface.addresses.association.ip-owner-id - The owner ID of the private IPv4 address associated with the network interface.
network-interface.association.public-ip - The address of the Elastic IP address (IPv4) bound to the network interface.
network-interface.association.ip-owner-id - The owner of the Elastic IP address (IPv4) associated with the network interface.
network-interface.association.allocation-id - The allocation ID returned when you allocated the Elastic IP address (IPv4) for your network interface.
network-interface.association.association-id - The association ID returned when the network interface was associated with an IPv4 address.
network-interface.attachment.attachment-id - The ID of the interface attachment.
network-interface.attachment.instance-id - The ID of the instance to which the network interface is attached.
network-interface.attachment.instance-owner-id - The owner ID of the instance to which the network interface is attached.
network-interface.attachment.device-index - The device index to which the network interface is attached.
network-interface.attachment.status - The status of the attachment (attaching | attached | detaching | detached).
network-interface.attachment.attach-time - The time that the network interface was attached to an instance.
network-interface.attachment.delete-on-termination - Specifies whether the attachment is deleted when an instance is terminated.
network-interface.availability-zone - The Availability Zone for the network interface.
network-interface.description - The description of the network interface.
network-interface.group-id - The ID of a security group associated with the network interface.
network-interface.group-name - The name of a security group associated with the network interface.
network-interface.ipv6-addresses.ipv6-address - The IPv6 address associated with the network interface.
network-interface.mac-address - The MAC address of the network interface.
network-interface.network-interface-id - The ID of the network interface.
network-interface.owner-id - The ID of the owner of the network interface.
network-interface.private-dns-name - The private DNS name of the network interface.
network-interface.requester-id - The requester ID for the network interface.
network-interface.requester-managed - Indicates whether the network interface is being managed by AWS.
network-interface.status - The status of the network interface (available | in-use).
network-interface.source-dest-check - Whether the network interface performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the network interface to perform network address translation (NAT) in your VPC.
network-interface.subnet-id - The ID of the subnet for the network interface.
network-interface.vpc-id - The ID of the VPC for the network interface.
owner-id - The AWS account ID of the instance owner.
placement-group-name - The name of the placement group for the instance.
placement-partition-number - The partition in which the instance is located.
platform - The platform. To list only Windows instances, use windows.
private-dns-name - The private IPv4 DNS name of the instance.
private-ip-address - The private IPv4 address of the instance.
product-code - The product code associated with the AMI used to launch the instance.
product-code.type - The type of product code (devpay | marketplace).
ramdisk-id - The RAM disk ID.
reason - The reason for the current state of the instance (for example, shows \"User Initiated [date]\" when you stop or terminate the instance). Similar to the state-reason-code filter.
requester-id - The ID of the entity that launched the instance on your behalf (for example, AWS Management Console, Auto Scaling, and so on).
reservation-id - The ID of the instance's reservation. A reservation ID is created any time you launch an instance. A reservation ID has a one-to-one relationship with an instance launch request, but can be associated with more than one instance if you launch multiple instances using the same launch request. For example, if you launch one instance, you get one reservation ID. If you launch ten instances using the same launch request, you also get one reservation ID.
root-device-name - The device name of the root device volume (for example, /dev/sda1).
root-device-type - The type of the root device volume (ebs | instance-store).
source-dest-check - Indicates whether the instance performs source/destination checking. A value of true means that checking is enabled, and false means that checking is disabled. The value must be false for the instance to perform network address translation (NAT) in your VPC.
spot-instance-request-id - The ID of the Spot Instance request.
state-reason-code - The reason code for the state change.
state-reason-message - A message that describes the state change.
subnet-id - The ID of the subnet for the instance.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources that have a tag with a specific key, regardless of the tag value.
tenancy - The tenancy of an instance (dedicated | default | host).
virtualization-type - The virtualization type of the instance (paravirtual | hvm).
vpc-id - The ID of the VPC that the instance is running in.
One or more filters.
attachment.state - The current state of the attachment between the gateway and the VPC (available). Present only if a VPC is attached.
attachment.vpc-id - The ID of an attached VPC.
internet-gateway-id - The ID of the Internet gateway.
owner-id - The ID of the AWS account that owns the internet gateway.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters.
fingerprint - The fingerprint of the key pair.
key-name - The name of the key pair.
One or more filters.
create-time - The time the launch template version was created.
ebs-optimized - A boolean that indicates whether the instance is optimized for Amazon EBS I/O.
iam-instance-profile - The ARN of the IAM instance profile.
image-id - The ID of the AMI.
instance-type - The instance type.
is-default-version - A boolean that indicates whether the launch template version is the default version.
kernel-id - The kernel ID.
ram-disk-id - The RAM disk ID.
One or more filters.
create-time - The time the launch template was created.
launch-template-name - The name of the launch template.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
The filters. Supported filters: volume-id, modification-state, target-size, target-iops, target-volume-type, original-size, original-iops, original-volume-type, start-time.
The filters.
attachment.attach-time - The time stamp when the attachment initiated.
attachment.delete-on-termination - Whether the volume is deleted on instance termination.
attachment.device - The device name specified in the block device mapping (for example, /dev/sda1).
attachment.instance-id - The ID of the instance the volume is attached to.
attachment.status - The attachment state (attaching | attached | detaching).
availability-zone - The Availability Zone in which the volume was created.
create-time - The time stamp when the volume was created.
encrypted - Indicates whether the volume is encrypted (true | false)
size - The size of the volume, in GiB.
snapshot-id - The snapshot from which the volume was created.
status - The status of the volume (creating | available | in-use | deleting | deleted | error).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
volume-id - The volume ID.
volume-type - The Amazon EBS volume type. This can be gp2 for General Purpose SSD, io1 for Provisioned IOPS SSD, st1 for Throughput Optimized HDD, sc1 for Cold HDD, or standard for Magnetic volumes.
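A sketch of driving these filters through the MaxResults/NextToken handshake (the contract for DescribeVolumesRequest$MaxResults is documented further down in this file); v0.x client and imports from the first example assumed:

```go
// availableGP2Volumes pages through all available gp2 volumes, 100 per call,
// following NextToken until the service stops returning one.
func availableGP2Volumes(ctx context.Context, svc *ec2.Client) ([]string, error) {
	input := &ec2.DescribeVolumesInput{
		Filters: []ec2.Filter{
			{Name: aws.String("status"), Values: []string{"available"}},
			{Name: aws.String("volume-type"), Values: []string{"gp2"}},
		},
		MaxResults: aws.Int64(100),
	}
	var ids []string
	for {
		resp, err := svc.DescribeVolumesRequest(input).Send(ctx)
		if err != nil {
			return nil, err
		}
		for _, v := range resp.Volumes {
			ids = append(ids, aws.StringValue(v.VolumeId))
		}
		if resp.NextToken == nil {
			return ids, nil
		}
		input.NextToken = resp.NextToken
	}
}
```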
One or more filters.
is-classic-link-enabled - Whether the VPC is enabled for ClassicLink (true | false).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
connection-notification-arn - The ARN of SNS topic for the notification.
connection-notification-id - The ID of the notification.
connection-notification-state - The state of the notification (Enabled | Disabled).
connection-notification-type - The type of notification (Topic).
service-id - The ID of the endpoint service.
vpc-endpoint-id - The ID of the VPC endpoint.
One or more filters.
connection-notification-arn - The ARN of the SNS topic for the notification.
connection-notification-id - The ID of the notification.
connection-notification-state - The state of the notification (Enabled | Disabled).
connection-notification-type - The type of notification (Topic).
service-id - The ID of the endpoint service.
vpc-endpoint-id - The ID of the VPC endpoint.
One or more filters.
service-id - The ID of the service.
vpc-endpoint-owner - The AWS account number of the owner of the endpoint.
vpc-endpoint-state - The state of the endpoint (pendingAcceptance | pending | available | deleting | deleted | rejected | failed).
vpc-endpoint-id - The ID of the endpoint.
One or more filters.
service-name - The name of the service.
service-id - The ID of the service.
service-state - The state of the service (Pending | Available | Deleting | Deleted | Failed).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
principal - The ARN of the principal.
principal-type - The principal type (All | Service | OrganizationUnit | Account | User | Role).
One or more filters.
service-name: The name of the service.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
service-name: The name of the service.
vpc-id: The ID of the VPC in which the endpoint resides.
vpc-endpoint-id: The ID of the endpoint.
vpc-endpoint-state - The state of the endpoint (pendingAcceptance | pending | available | deleting | deleted | rejected | failed).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
service-name - The name of the service.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
One or more filters.
service-name - The name of the service.
vpc-id - The ID of the VPC in which the endpoint resides.
vpc-endpoint-id - The ID of the endpoint.
vpc-endpoint-state - The state of the endpoint (pendingAcceptance | pending | available | deleting | deleted | rejected | failed).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
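A sketch scoping endpoints to one VPC with these filters (the VPC ID is a placeholder, not taken from this file; v0.x client assumed):

```go
// endpointsInVpc lists the endpoints of one VPC that are in the
// available state.
func endpointsInVpc(ctx context.Context, svc *ec2.Client) ([]ec2.VpcEndpoint, error) {
	req := svc.DescribeVpcEndpointsRequest(&ec2.DescribeVpcEndpointsInput{
		Filters: []ec2.Filter{
			{Name: aws.String("vpc-id"), Values: []string{"vpc-1234567890abcdef0"}}, // placeholder ID
			{Name: aws.String("vpc-endpoint-state"), Values: []string{"available"}},
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.VpcEndpoints, nil
}
```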
One or more filters.
accepter-vpc-info.cidr-block - The IPv4 CIDR block of the accepter VPC.
accepter-vpc-info.owner-id - The AWS account ID of the owner of the accepter VPC.
accepter-vpc-info.vpc-id - The ID of the accepter VPC.
expiration-time - The expiration date and time for the VPC peering connection.
requester-vpc-info.cidr-block - The IPv4 CIDR block of the requester's VPC.
requester-vpc-info.owner-id - The AWS account ID of the owner of the requester VPC.
requester-vpc-info.vpc-id - The ID of the requester VPC.
status-code - The status of the VPC peering connection (pending-acceptance | failed | expired | provisioning | active | deleting | deleted | rejected).
status-message - A message that provides more information about the status of the VPC peering connection, if applicable.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
vpc-peering-connection-id - The ID of the VPC peering connection.
One or more filters.
cidr - The primary IPv4 CIDR block of the VPC. The CIDR block you specify must exactly match the VPC's CIDR block for information to be returned for the VPC. Must contain the slash followed by one or two digits (for example, /28).
cidr-block-association.cidr-block - An IPv4 CIDR block associated with the VPC.
cidr-block-association.association-id - The association ID for an IPv4 CIDR block associated with the VPC.
cidr-block-association.state - The state of an IPv4 CIDR block associated with the VPC.
dhcp-options-id - The ID of a set of DHCP options.
ipv6-cidr-block-association.ipv6-cidr-block - An IPv6 CIDR block associated with the VPC.
ipv6-cidr-block-association.association-id - The association ID for an IPv6 CIDR block associated with the VPC.
ipv6-cidr-block-association.state - The state of an IPv6 CIDR block associated with the VPC.
isDefault - Indicates whether the VPC is the default VPC.
owner-id - The ID of the AWS account that owns the VPC.
state - The state of the VPC (pending | available).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
vpc-id - The ID of the VPC.
One or more filters.
cidr - The primary IPv4 CIDR block of the VPC. The CIDR block you specify must exactly match the VPC's CIDR block for information to be returned for the VPC. Must contain the slash followed by one or two digits (for example, /28).
cidr-block-association.cidr-block - An IPv4 CIDR block associated with the VPC.
cidr-block-association.association-id - The association ID for an IPv4 CIDR block associated with the VPC.
cidr-block-association.state - The state of an IPv4 CIDR block associated with the VPC.
dhcp-options-id - The ID of a set of DHCP options.
ipv6-cidr-block-association.ipv6-cidr-block - An IPv6 CIDR block associated with the VPC.
ipv6-cidr-block-association.ipv6-pool - The ID of the IPv6 address pool from which the IPv6 CIDR block is allocated.
ipv6-cidr-block-association.association-id - The association ID for an IPv6 CIDR block associated with the VPC.
ipv6-cidr-block-association.state - The state of an IPv6 CIDR block associated with the VPC.
isDefault - Indicates whether the VPC is the default VPC.
owner-id - The ID of the AWS account that owns the VPC.
state - The state of the VPC (pending | available).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
vpc-id - The ID of the VPC.
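A sketch using the isDefault filter above (note its camelCase name, unlike the hyphenated filters); v0.x client and imports from the first example assumed:

```go
// defaultVpcId returns the ID of the configured Region's default VPC,
// or an empty string if the Region has none.
func defaultVpcId(ctx context.Context, svc *ec2.Client) (string, error) {
	req := svc.DescribeVpcsRequest(&ec2.DescribeVpcsInput{
		Filters: []ec2.Filter{
			{Name: aws.String("isDefault"), Values: []string{"true"}},
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return "", err
	}
	if len(resp.Vpcs) == 0 {
		return "", nil // no default VPC in this Region
	}
	return aws.StringValue(resp.Vpcs[0].VpcId), nil
}
```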
One or more filters.
customer-gateway-configuration - The configuration information for the customer gateway.
customer-gateway-id - The ID of a customer gateway associated with the VPN connection.
state - The state of the VPN connection (pending | available | deleting | deleted).
option.static-routes-only - Indicates whether the connection has static routes only. Used for devices that do not support Border Gateway Protocol (BGP).
route.destination-cidr-block - The destination CIDR block. This corresponds to the subnet used in a customer data center.
bgp-asn - The BGP Autonomous System Number (ASN) associated with a BGP device.
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
type - The type of VPN connection. Currently the only supported type is ipsec.1.
vpn-connection-id - The ID of the VPN connection.
vpn-gateway-id - The ID of a virtual private gateway associated with the VPN connection.
transit-gateway-id - The ID of a transit gateway associated with the VPN connection.
One or more filters.
amazon-side-asn - The Autonomous System Number (ASN) for the Amazon side of the gateway.
attachment.state - The current state of the attachment between the gateway and the VPC (attaching | attached | detaching | detached).
attachment.vpc-id - The ID of an attached VPC.
availability-zone - The Availability Zone for the virtual private gateway (if applicable).
state - The state of the virtual private gateway (pending | available | deleting | deleted).
tag:<key> - The key/value combination of a tag assigned to the resource. Use the tag key in the filter name and the tag value as the filter value. For example, to find all resources that have a tag with the key Owner and the value TeamA, specify tag:Owner for the filter name and TeamA for the filter value.
tag-key - The key of a tag assigned to the resource. Use this filter to find all resources assigned a tag with a specific key, regardless of the tag value.
type - The type of virtual private gateway. Currently the only supported type is ipsec.1.
vpn-gateway-id - The ID of the virtual private gateway.
One or more filters. The possible values are:
attachment.transit-gateway-attachment-id - The id of the transit gateway attachment.
attachment.resource-id - The resource id of the transit gateway attachment.
route-search.exact-match - The exact match of the specified filter.
route-search.longest-prefix-match - The longest prefix that matches the route.
route-search.subnet-of-match - The routes with a subnet that match the specified CIDR filter.
route-search.supernet-of-match - The routes with a CIDR that encompass the CIDR filter. For example, if you have 10.0.1.0/29 and 10.0.1.0/31 routes in your route table and you specify supernet-of-match as 10.0.1.0/30, then the result returns 10.0.1.0/29.
state - The state of the attachment (available | deleted | deleting | failed | modifying | pendingAcceptance | pending | rollingBack | rejected | rejecting).
transit-gateway-route-destination-cidr-block - The CIDR range.
type - The type of route (active | blackhole).
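A sketch of a route lookup with these route-search filters; the route table ID and CIDR are placeholders, the helper name is illustrative, and the v0.x client from the first example is assumed:

```go
// lookupRoute runs a longest-prefix-match search for one destination in a
// transit gateway route table, restricted to active routes.
func lookupRoute(ctx context.Context, svc *ec2.Client) ([]ec2.TransitGatewayRoute, error) {
	req := svc.SearchTransitGatewayRoutesRequest(&ec2.SearchTransitGatewayRoutesInput{
		TransitGatewayRouteTableId: aws.String("tgw-rtb-1234567890abcdef0"), // placeholder ID
		Filters: []ec2.Filter{
			{Name: aws.String("route-search.longest-prefix-match"), Values: []string{"10.0.1.9/32"}},
			{Name: aws.String("type"), Values: []string{"active"}},
		},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return nil, err
	}
	return resp.Routes, nil
}
```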
The progress of the EC2 Fleet. If there is an error, the status is error. After all requests are placed, the status is pending_fulfillment. If the size of the EC2 Fleet is equal to or greater than its target capacity, the status is fulfilled. If the size of the EC2 Fleet is decreased, the status is pending_termination while instances are terminating.
Indicates whether to use unused Capacity Reservations for fulfilling On-Demand capacity.
If you specify use-capacity-reservations-first, the fleet uses unused Capacity Reservations to fulfill On-Demand capacity up to the target On-Demand capacity. If multiple instance pools have unused Capacity Reservations, the On-Demand allocation strategy (lowest-price or prioritized) is applied. If the number of unused Capacity Reservations is less than the On-Demand target capacity, the remaining On-Demand target capacity is launched according to the On-Demand allocation strategy (lowest-price or prioritized).
If you do not specify a value, the fleet fulfils the On-Demand capacity according to the chosen On-Demand allocation strategy.
", + "CapacityReservationOptionsRequest$UsageStrategy": "Indicates whether to use unused Capacity Reservations for fulfilling On-Demand capacity.
If you specify use-capacity-reservations-first
, the fleet uses unused Capacity Reservations to fulfill On-Demand capacity up to the target On-Demand capacity. If multiple instance pools have unused Capacity Reservations, the On-Demand allocation strategy (lowest-price
or prioritized
) is applied. If the number of unused Capacity Reservations is less than the On-Demand target capacity, the remaining On-Demand target capacity is launched according to the On-Demand allocation strategy (lowest-price
or prioritized
).
If you do not specify a value, the fleet fulfils the On-Demand capacity according to the chosen On-Demand allocation strategy.
" + } + }, "FleetData": { "base": "Describes an EC2 Fleet.
", "refs": { @@ -6017,7 +6084,7 @@ "base": null, "refs": { "CreateFleetRequest$Type": "The type of the request. By default, the EC2 Fleet places an asynchronous request for your desired capacity, and maintains it by replenishing interrupted Spot Instances (maintain
). A value of instant
places a synchronous one-time request, and returns errors for any instances that could not be launched. A value of request
places an asynchronous one-time request without maintaining capacity or submitting requests in alternative capacity pools if capacity is unavailable. For more information, see EC2 Fleet Request Types in the Amazon Elastic Compute Cloud User Guide.
The type of request. Indicates whether the EC2 Fleet only requests
the target capacity, or also attempts to maintain
it. If you request a certain target capacity, EC2 Fleet only places the required requests; it does not attempt to replenish instances if capacity is diminished, and does not submit requests in alternative capacity pools if capacity is unavailable. To maintain a certain target capacity, EC2 Fleet places the required requests to meet this target capacity. It also automatically replenishes any interrupted Spot Instances. Default: maintain
.
The type of request. Indicates whether the EC2 Fleet only requests
the target capacity, or also attempts to maintain
it. If you request a certain target capacity, EC2 Fleet only places the required requests; it does not attempt to replenish instances if capacity is diminished, and it does not submit requests in alternative capacity pools if capacity is unavailable. To maintain a certain target capacity, EC2 Fleet places the required requests to meet this target capacity. It also automatically replenishes any interrupted Spot Instances. Default: maintain
.
The type of request. Indicates whether the Spot Fleet only requests the target capacity or also attempts to maintain it. When this value is request
, the Spot Fleet only places the required requests. It does not attempt to replenish Spot Instances if capacity is diminished, nor does it submit requests in alternative Spot pools if capacity is not available. When this value is maintain
, the Spot Fleet maintains the target capacity. The Spot Fleet places the required requests to meet capacity and automatically replenishes any interrupted instances. Default: maintain
. instant
is listed but is not used by Spot Fleet.
The type of VPN connection the virtual private gateway supports.
" } }, + "GetAssociatedIpv6PoolCidrsRequest": { + "base": null, + "refs": { + } + }, + "GetAssociatedIpv6PoolCidrsResult": { + "base": null, + "refs": { + } + }, "GetCapacityReservationUsageRequest": { "base": null, "refs": { @@ -6399,7 +6476,7 @@ "GroupIdentifierSet": { "base": null, "refs": { - "VpcEndpoint$Groups": "(Interface endpoint) Information about the security groups associated with the network interface.
" + "VpcEndpoint$Groups": "(Interface endpoint) Information about the security groups that are associated with the network interface.
" } }, "GroupIds": { @@ -6463,13 +6540,6 @@ "HostList$member": null } }, - "HostId": { - "base": null, - "refs": { - "ModifyInstancePlacementRequest$HostId": "The ID of the Dedicated Host with which to associate the instance.
", - "RequestHostIdSet$member": null - } - }, "HostInstance": { "base": "Describes an instance running on a Dedicated Host.
", "refs": { @@ -6672,7 +6742,10 @@ "ImageId": { "base": null, "refs": { + "DeregisterImageRequest$ImageId": "The ID of the AMI.
", "ExportImageRequest$ImageId": "The ID of the image.
", + "ModifyImageAttributeRequest$ImageId": "The ID of the AMI.
", + "ResetImageAttributeRequest$ImageId": "The ID of the AMI.
", "RunInstancesRequest$ImageId": "The ID of the AMI. An AMI ID is required to launch an instance and must be specified here or in a launch template.
" } }, @@ -6717,7 +6790,7 @@ } }, "ImportImageLicenseConfigurationResponse": { - "base": "The response information of license configurations.
", + "base": "The response information for license configurations.
", "refs": { "ImportImageLicenseSpecificationListResponse$member": null } @@ -6732,7 +6805,7 @@ "base": null, "refs": { "ImportImageResult$LicenseSpecifications": "The ARNs of the license configurations.
", - "ImportImageTask$LicenseSpecifications": "The ARNs of the license configurations associated to the import image task.
" + "ImportImageTask$LicenseSpecifications": "The ARNs of the license configurations that are associated with the import image task.
" } }, "ImportImageRequest": { @@ -7009,6 +7082,7 @@ "AttachVolumeRequest$InstanceId": "The ID of the instance.
", "BundleInstanceRequest$InstanceId": "The ID of the instance to bundle.
Type: String
Default: None
Required: Yes
", "ConfirmProductInstanceRequest$InstanceId": "The ID of the instance.
", + "CreateImageRequest$InstanceId": "The ID of the instance.
", "CreateInstanceExportTaskRequest$InstanceId": "The ID of the instance.
", "CreateRouteRequest$InstanceId": "The ID of a NAT instance in your VPC. The operation fails if you specify an instance ID unless exactly one network interface is attached.
", "DetachClassicLinkVpcRequest$InstanceId": "The ID of the instance to unlink from the VPC.
", @@ -7401,11 +7475,13 @@ "AvailableCapacity$AvailableVCpus": "The number of vCPUs available for launching instances onto the Dedicated Host.
", "CapacityReservation$TotalInstanceCount": "The total number of instances for which the Capacity Reservation reserves capacity.
", "CapacityReservation$AvailableInstanceCount": "The remaining capacity. Indicates the number of instances that can be launched in the Capacity Reservation.
", + "ClientVpnEndpoint$VpnPort": "The port number for the Client VPN endpoint.
", "CpuOptions$CoreCount": "The number of CPU cores for the instance.
", "CpuOptions$ThreadsPerCore": "The number of threads per CPU core.
", "CpuOptionsRequest$CoreCount": "The number of CPU cores for the instance.
", "CpuOptionsRequest$ThreadsPerCore": "The number of threads per CPU core. To disable multithreading for the instance, specify a value of 1
. Otherwise, specify the default value of 2
.
The number of instances for which to reserve capacity.
", + "CreateClientVpnEndpointRequest$VpnPort": "The port number to assign to the Client VPN endpoint for TCP and UDP traffic.
Valid Values: 443
| 1194
Default Value: 443
For devices that support BGP, the customer gateway's BGP ASN.
Default: 65000
", "CreateNetworkAclEntryRequest$RuleNumber": "The rule number for the entry (for example, 100). ACL entries are processed in ascending order by rule number.
Constraints: Positive integer from 1 to 32766. The range 32767 to 65535 is reserved for internal use.
", "CreateNetworkInterfaceRequest$Ipv6AddressCount": "The number of IPv6 addresses to assign to a network interface. Amazon EC2 automatically selects the IPv6 addresses from the subnet range. You can't use this option if specifying specific IPv6 addresses. If your subnet has the AssignIpv6AddressOnCreation
attribute set to true
, you can specify 0
to override this setting.
The maximum number of results (up to a limit of 500) to be returned in a paginated request.
", "DescribeVolumesRequest$MaxResults": "The maximum number of volume results returned by DescribeVolumes
in paginated output. When this parameter is used, DescribeVolumes
only returns MaxResults
results in a single page along with a NextToken
response element. The remaining results of the initial request can be seen by sending another DescribeVolumes
request with the returned NextToken
value. This value can be between 5 and 500; if MaxResults
is given a value larger than 500, only 500 results are returned. If this parameter is not used, then DescribeVolumes
returns all results. You cannot specify this parameter and the volume IDs parameter in the same request.
The maximum number of results to return in a single call. To retrieve the remaining results, make another request with the returned NextToken
value.
The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken
value. This value can be between 5 and 1000; if MaxResults
is given a value larger than 1000, only 1000 results are returned.
The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken
value. This value can be between 5 and 1000; if MaxResults
is given a value larger than 1000, only 1000 results are returned.
The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken
value. This value can be between 5 and 1000; if MaxResults
is given a value larger than 1000, only 1000 results are returned.
The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.
Constraint: If the value is greater than 1000, we return only 1000 items.
", - "DescribeVpcEndpointsRequest$MaxResults": "The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.
Constraint: If the value is greater than 1000, we return only 1000 items.
", + "DescribeVpcEndpointConnectionsRequest$MaxResults": "The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken
value. This value can be between 5 and 1,000; if MaxResults
is given a value larger than 1,000, only 1,000 results are returned.
The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken
value. This value can be between 5 and 1,000; if MaxResults
is given a value larger than 1,000, only 1,000 results are returned.
The maximum number of results to return for the request in a single page. The remaining results of the initial request can be seen by sending another request with the returned NextToken
value. This value can be between 5 and 1,000; if MaxResults
is given a value larger than 1,000, only 1,000 results are returned.
The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.
Constraint: If the value is greater than 1,000, we return only 1,000 items.
", + "DescribeVpcEndpointsRequest$MaxResults": "The maximum number of items to return for this request. The request returns a token that you can specify in a subsequent call to get the next set of results.
Constraint: If the value is greater than 1,000, we return only 1,000 items.
", "EbsBlockDevice$Iops": "The number of I/O operations per second (IOPS) that the volume supports. For io1
volumes, this represents the number of IOPS that are provisioned for the volume. For gp2
volumes, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. For more information, see Amazon EBS Volume Types in the Amazon Elastic Compute Cloud User Guide.
Constraints: Range is 100-16,000 IOPS for gp2
volumes and 100 to 64,000 IOPS for io1
volumes in most Regions. Maximum io1
IOPS of 64,000 is guaranteed only on Nitro-based instances. Other instance families guarantee performance up to 32,000 IOPS. For more information, see Amazon EBS Volume Types in the Amazon Elastic Compute Cloud User Guide.
Condition: This parameter is required for requests to create io1
volumes; it is not used in requests to create gp2
, st1
, sc1
, or standard
volumes.
The size of the volume, in GiB.
Default: If you're creating the volume from a snapshot and don't specify a volume size, the default is the snapshot size.
Constraints: 1-16384 for General Purpose SSD (gp2
), 4-16384 for Provisioned IOPS SSD (io1
), 500-16384 for Throughput Optimized HDD (st1
), 500-16384 for Cold HDD (sc1
), and 1-1024 for Magnetic (standard
) volumes. If you specify a snapshot, the volume size must be equal to or larger than the snapshot size.
The number of instances for which the Capacity Reservation reserves capacity.
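> Editor's note: the `MaxResults`/`NextToken` doc strings in this hunk all describe the same pagination contract: cap the page size, then resend the request with the returned `NextToken` until it comes back nil. A minimal Go sketch of that loop against DescribeVolumes, assuming the pre-GA request/Send calling pattern of this SDK at v0.19; client construction and field names follow the generated `ec2` package, so treat exact signatures as illustrative.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// MaxResults caps a single page (5-500 for DescribeVolumes); a nil
	// NextToken in the response means the final page has been returned.
	input := &ec2.DescribeVolumesInput{MaxResults: aws.Int64(500)}
	for {
		resp, err := svc.DescribeVolumesRequest(input).Send(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		for _, v := range resp.Volumes {
			fmt.Println(aws.StringValue(v.VolumeId))
		}
		if resp.NextToken == nil {
			break
		}
		input.NextToken = resp.NextToken
	}
}
```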
", @@ -7488,19 +7564,24 @@ "LaunchTemplateEbsBlockDevice$VolumeSize": "The size of the volume, in GiB.
", "LaunchTemplateEbsBlockDeviceRequest$Iops": "The number of I/O operations per second (IOPS) that the volume supports. For io1, this represents the number of IOPS that are provisioned for the volume. For gp2, this represents the baseline performance of the volume and the rate at which the volume accumulates I/O credits for bursting. For more information about General Purpose SSD baseline performance, I/O credits, and bursting, see Amazon EBS Volume Types in the Amazon Elastic Compute Cloud User Guide.
Condition: This parameter is required for requests to create io1 volumes; it is not used in requests to create gp2, st1, sc1, or standard volumes.
", "LaunchTemplateEbsBlockDeviceRequest$VolumeSize": "The size of the volume, in GiB.
Default: If you're creating the volume from a snapshot and don't specify a volume size, the default is the snapshot size.
", - "LaunchTemplateElasticInferenceAcceleratorResponse$Count": "The number of elastic inference accelerators of given type to be attached to the instance. Only positive values allowed. If not specified defaults to 1.
", + "LaunchTemplateElasticInferenceAcceleratorResponse$Count": "The number of elastic inference accelerators to attach to the instance.
Default: 1
", + "LaunchTemplateInstanceMetadataOptions$HttpPutResponseHopLimit": "The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel.
Default: 1
Possible values: Integers from 1 to 64
", + "LaunchTemplateInstanceMetadataOptionsRequest$HttpPutResponseHopLimit": "The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel.
Default: 1
Possible values: Integers from 1 to 64
", "LaunchTemplateInstanceNetworkInterfaceSpecification$DeviceIndex": "The device index for the network interface attachment.
", "LaunchTemplateInstanceNetworkInterfaceSpecification$Ipv6AddressCount": "The number of IPv6 addresses for the network interface.
", "LaunchTemplateInstanceNetworkInterfaceSpecification$SecondaryPrivateIpAddressCount": "The number of secondary private IPv4 addresses for the network interface.
", "LaunchTemplateInstanceNetworkInterfaceSpecificationRequest$DeviceIndex": "The device index for the network interface attachment.
", "LaunchTemplateInstanceNetworkInterfaceSpecificationRequest$Ipv6AddressCount": "The number of IPv6 addresses to assign to a network interface. Amazon EC2 automatically selects the IPv6 addresses from the subnet range. You can't use this option if specifying specific IPv6 addresses.
", "LaunchTemplateInstanceNetworkInterfaceSpecificationRequest$SecondaryPrivateIpAddressCount": "The number of secondary private IPv4 addresses to assign to a network interface.
", + "LaunchTemplatePlacement$PartitionNumber": "The number of the partition the instance should launch in. Valid only if the placement group strategy is set to partition
.
The number of the partition the instance should launch in. Valid only if the placement group strategy is set to partition
.
The required duration for the Spot Instances (also known as Spot blocks), in minutes. This value must be a multiple of 60 (60, 120, 180, 240, 300, or 360).
", "LaunchTemplateSpotMarketOptionsRequest$BlockDurationMinutes": "The required duration for the Spot Instances (also known as Spot blocks), in minutes. This value must be a multiple of 60 (60, 120, 180, 240, 300, or 360).
", "LocalGatewayVirtualInterface$Vlan": "The ID of the VLAN.
", "LocalGatewayVirtualInterface$LocalBgpAsn": "The Border Gateway Protocol (BGP) Autonomous System Number (ASN) of the local gateway.
", "LocalGatewayVirtualInterface$PeerBgpAsn": "The peer BGP ASN.
", "ModifyCapacityReservationRequest$InstanceCount": "The number of instances for which to reserve capacity.
", + "ModifyClientVpnEndpointRequest$VpnPort": "The port number to assign to the Client VPN endpoint for TCP and UDP traffic.
Valid Values: 443
| 1194
Default Value: 443
The desired HTTP PUT response hop limit for instance metadata requests. The larger the number, the further instance metadata requests can travel. If no parameter is specified, the existing state is maintained.
Possible values: Integers from 1 to 64
", "ModifyInstancePlacementRequest$PartitionNumber": "Reserved for future use.
", "ModifySpotFleetRequestRequest$TargetCapacity": "The size of the fleet.
", @@ -7713,6 +7794,18 @@ "UnassignIpv6AddressesResult$UnassignedIpv6Addresses": "The IPv6 addresses that have been unassigned from the network interface.
" } }, + "Ipv6CidrAssociation": { + "base": "Describes an IPv6 CIDR block association.
", + "refs": { + "Ipv6CidrAssociationSet$member": null + } + }, + "Ipv6CidrAssociationSet": { + "base": null, + "refs": { + "GetAssociatedIpv6PoolCidrsResult$Ipv6CidrAssociations": "Information about the IPv6 CIDR block associations.
" + } + }, "Ipv6CidrBlock": { "base": "Describes an IPv6 CIDR block.
", "refs": { @@ -7731,6 +7824,25 @@ "NetworkInfo$Ipv6Supported": "Indicates whether IPv6 is supported.
" } }, + "Ipv6Pool": { + "base": "Describes an IPv6 address pool.
", + "refs": { + "Ipv6PoolSet$member": null + } + }, + "Ipv6PoolMaxResults": { + "base": null, + "refs": { + "DescribeIpv6PoolsRequest$MaxResults": "The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned nextToken
value.
The maximum number of results to return with a single call. To retrieve the remaining results, make another call with the returned nextToken
value.
Information about the IPv6 address pools.
" + } + }, "Ipv6Range": { "base": "[EC2-VPC only] Describes an IPv6 range.
", "refs": { @@ -7751,6 +7863,12 @@ "TransitGatewayVpcAttachmentOptions$Ipv6Support": "Indicates whether IPv6 support is enabled.
" } }, + "KernelId": { + "base": null, + "refs": { + "RegisterImageRequest$KernelId": "The ID of the kernel.
" + } + }, "KeyNameStringList": { "base": null, "refs": { @@ -7762,6 +7880,12 @@ "refs": { } }, + "KeyPairIdStringList": { + "base": null, + "refs": { + "DescribeKeyPairsRequest$KeyPairIds": "The IDs of the key pairs.
" + } + }, "KeyPairInfo": { "base": "Describes a key pair.
", "refs": { @@ -7777,15 +7901,25 @@ "KeyPairName": { "base": null, "refs": { - "CreateKeyPairRequest$KeyName": "A unique name for the key pair.
Constraints: Up to 255 ASCII characters
" + "CreateKeyPairRequest$KeyName": "A unique name for the key pair.
Constraints: Up to 255 ASCII characters
", + "DeleteKeyPairRequest$KeyName": "The name of the key pair.
", + "ImportKeyPairRequest$KeyName": "A unique name for the key pair.
" } }, "KmsKeyId": { "base": null, "refs": { + "CopyImageRequest$KmsKeyId": "An identifier for the symmetric AWS Key Management Service (AWS KMS) customer master key (CMK) to use when creating the encrypted volume. This parameter is only required if you want to use a non-default CMK; if this parameter is not specified, the default CMK for EBS is used. If a KmsKeyId
is specified, the Encrypted
flag must also be set.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\". For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
AWS parses KmsKeyId
asynchronously, meaning that the action you call may appear to complete even though you provided an invalid identifier. This action will eventually report failure.
The specified CMK must exist in the Region that the snapshot is being copied to.
Amazon EBS does not support asymmetric CMKs.
", "CreateVolumeRequest$KmsKeyId": "The identifier of the AWS Key Management Service (AWS KMS) customer master key (CMK) to use for Amazon EBS encryption. If this parameter is not specified, your AWS managed CMK for EBS is used. If KmsKeyId
is specified, the encrypted state must be true
.
You can specify the CMK using any of the following:
Key ID. For example, key/1234abcd-12ab-34cd-56ef-1234567890ab.
Key alias. For example, alias/ExampleAlias.
Key ARN. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.
Alias ARN. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
AWS authenticates the CMK asynchronously. Therefore, if you specify an ID, alias, or ARN that is not valid, the action can appear to complete, but eventually fails.
", - "ImportSnapshotRequest$KmsKeyId": "An identifier for the AWS Key Management Service (AWS KMS) customer master key (CMK) to use when creating the encrypted snapshot. This parameter is only required if you want to use a non-default CMK; if this parameter is not specified, the default CMK for EBS is used. If a KmsKeyId
is specified, the Encrypted
flag must also be set.
The CMK identifier may be provided in any of the following formats:
Key ID
Key alias. The alias ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the alias
namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
ARN using key ID. The ID ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the key
namespace, and then the CMK ID. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.
ARN using key alias. The alias ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the alias
namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
AWS parses KmsKeyId
asynchronously, meaning that the action you call may appear to complete even though you provided an invalid identifier. This action will eventually report failure.
The specified CMK must exist in the Region that the snapshot is being copied to.
", - "ModifyEbsDefaultKmsKeyIdRequest$KmsKeyId": "The identifier of the AWS Key Management Service (AWS KMS) customer master key (CMK) to use for Amazon EBS encryption. If this parameter is not specified, your AWS managed CMK for EBS is used. If KmsKeyId
is specified, the encrypted state must be true
.
You can specify the CMK using any of the following:
Key ID. For example, key/1234abcd-12ab-34cd-56ef-1234567890ab.
Key alias. For example, alias/ExampleAlias.
Key ARN. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.
Alias ARN. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
AWS authenticates the CMK asynchronously. Therefore, if you specify an ID, alias, or ARN that is not valid, the action can appear to complete, but eventually fails.
" + "ImportImageRequest$KmsKeyId": "An identifier for the symmetric AWS Key Management Service (AWS KMS) customer master key (CMK) to use when creating the encrypted AMI. This parameter is only required if you want to use a non-default CMK; if this parameter is not specified, the default CMK for EBS is used. If a KmsKeyId
is specified, the Encrypted
flag must also be set.
The CMK identifier may be provided in any of the following formats:
Key ID
Key alias. The alias ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the alias
namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
ARN using key ID. The ID ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the key
namespace, and then the CMK ID. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.
ARN using key alias. The alias ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the alias
namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
AWS parses KmsKeyId
asynchronously, meaning that the action you call may appear to complete even though you provided an invalid identifier. This action will eventually report failure.
The specified CMK must exist in the Region that the AMI is being copied to.
Amazon EBS does not support asymmetric CMKs.
", + "ImportSnapshotRequest$KmsKeyId": "An identifier for the symmetric AWS Key Management Service (AWS KMS) customer master key (CMK) to use when creating the encrypted snapshot. This parameter is only required if you want to use a non-default CMK; if this parameter is not specified, the default CMK for EBS is used. If a KmsKeyId
is specified, the Encrypted
flag must also be set.
The CMK identifier may be provided in any of the following formats:
Key ID
Key alias. The alias ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the alias
namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
ARN using key ID. The ID ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the key
namespace, and then the CMK ID. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.
ARN using key alias. The alias ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the alias
namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
AWS parses KmsKeyId
asynchronously, meaning that the action you call may appear to complete even though you provided an invalid identifier. This action will eventually report failure.
The specified CMK must exist in the Region that the snapshot is being copied to.
Amazon EBS does not support asymmetric CMKs.
", + "ModifyEbsDefaultKmsKeyIdRequest$KmsKeyId": "The identifier of the AWS Key Management Service (AWS KMS) customer master key (CMK) to use for Amazon EBS encryption. If this parameter is not specified, your AWS managed CMK for EBS is used. If KmsKeyId
is specified, the encrypted state must be true
.
You can specify the CMK using any of the following:
Key ID. For example, key/1234abcd-12ab-34cd-56ef-1234567890ab.
Key alias. For example, alias/ExampleAlias.
Key ARN. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.
Alias ARN. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
AWS authenticates the CMK asynchronously. Therefore, if you specify an ID, alias, or ARN that is not valid, the action can appear to complete, but eventually fails.
Amazon EBS does not support asymmetric CMKs.
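> Editor's note: the `KmsKeyId` doc strings above all make the same two points: the `Encrypted` flag must be set whenever a CMK is supplied, and the key may be given as an ID, alias, or ARN. A sketch of an encrypted io1 CreateVolume call using the alias form from the examples above, under the same assumed pre-GA request/Send pattern; the Availability Zone and sizes are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	_, err = svc.CreateVolumeRequest(&ec2.CreateVolumeInput{
		AvailabilityZone: aws.String("us-west-2a"), // placeholder AZ
		VolumeType:       ec2.VolumeTypeIo1,
		Size:             aws.Int64(100),
		Iops:             aws.Int64(4000),                  // required for io1 volumes
		Encrypted:        aws.Bool(true),                   // must be true when KmsKeyId is set
		KmsKeyId:         aws.String("alias/ExampleAlias"), // key alias form from the docs above
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```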
" + } + }, + "LastError": { + "base": "The last error that occurred for a VPC endpoint.
", + "refs": { + "VpcEndpoint$LastError": "The last error that occurred for VPC endpoint.
" } }, "LaunchPermission": { @@ -7832,10 +7966,10 @@ "LaunchTemplateAndOverridesResponse": { "base": "Describes a launch template and overrides.
", "refs": { - "CreateFleetError$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. Any parameters that you specify in the Overrides override the same parameters in the launch template.
", - "CreateFleetInstance$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. Any parameters that you specify in the Overrides override the same parameters in the launch template.
", - "DescribeFleetError$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. Any parameters that you specify in the Overrides override the same parameters in the launch template.
", - "DescribeFleetsInstances$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. Any parameters that you specify in the Overrides override the same parameters in the launch template.
" + "CreateFleetError$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. The values that you specify in the Overrides replace the values in the launch template.
", + "CreateFleetInstance$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. The values that you specify in the Overrides replace the values in the launch template.
", + "DescribeFleetError$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. The values that you specify in the Overrides replace the values in the launch template.
", + "DescribeFleetsInstances$LaunchTemplateAndOverrides": "The launch templates and overrides that were used for launching the instances. The values that you specify in the Overrides replace the values in the launch template.
" } }, "LaunchTemplateBlockDeviceMapping": { @@ -7859,7 +7993,7 @@ "LaunchTemplateBlockDeviceMappingRequestList": { "base": null, "refs": { - "RequestLaunchTemplateData$BlockDeviceMappings": "The block device mapping.
Supplying both a snapshot ID and an encryption value as arguments for block-device mapping results in an error. This is because only blank volumes can be encrypted on start, and these are not created from a snapshot. If a snapshot is the basis for the volume, it contains data by definition and its encryption status cannot be changed using this action.
The block device mapping.
" } }, "LaunchTemplateCapacityReservationSpecificationRequest": { @@ -7919,7 +8053,7 @@ "LaunchTemplateElasticInferenceAcceleratorCount": { "base": null, "refs": { - "LaunchTemplateElasticInferenceAccelerator$Count": "The number of elastic inference accelerators of given type to be attached to the instance. Only positive values allowed. If not specified defaults to 1.
" + "LaunchTemplateElasticInferenceAccelerator$Count": "The number of elastic inference accelerators to attach to the instance.
Default: 1
" } }, "LaunchTemplateElasticInferenceAcceleratorList": { @@ -7958,6 +8092,13 @@ "RequestLaunchTemplateData$HibernationOptions": "Indicates whether an instance is enabled for hibernation. This parameter is valid only if the instance meets the hibernation prerequisites. For more information, see Hibernate Your Instance in the Amazon Elastic Compute Cloud User Guide.
" } }, + "LaunchTemplateHttpTokensState": { + "base": null, + "refs": { + "LaunchTemplateInstanceMetadataOptions$HttpTokens": "The state of token usage for your instance metadata requests. If the parameter is not specified in the request, the default state is optional
.
If the state is optional
, you can choose to retrieve instance metadata with or without a signed token header on your request. If you retrieve the IAM role credentials without a token, the version 1.0 role credentials are returned. If you retrieve the IAM role credentials using a valid signed token, the version 2.0 role credentials are returned.
If the state is required
, you must send a signed token header with any instance metadata retrieval requests. In this state, retrieving the IAM role credentials always returns the version 2.0 credentials; the version 1.0 credentials are not available.
The state of token usage for your instance metadata requests. If the parameter is not specified in the request, the default state is optional
.
If the state is optional
, you can choose to retrieve instance metadata with or without a signed token header on your request. If you retrieve the IAM role credentials without a token, the version 1.0 role credentials are returned. If you retrieve the IAM role credentials using a valid signed token, the version 2.0 role credentials are returned.
If the state is required
, you must send a signed token header with any instance metadata retrieval requests. In this state, retrieving the IAM role credentials always returns the version 2.0 credentials; the version 1.0 credentials are not available.
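> Editor's note: these `HttpTokens` states map onto IMDSv1 (`optional`) versus IMDSv2-only (`required`). A hedged sketch of a launch template that requires the signed token header, using the `LaunchTemplateInstanceMetadataOptionsRequest` shape introduced in this hunk; the pre-GA request/Send pattern is assumed, enum constant names are inferred from the shape names in this file, and the template name and AMI ID are made up.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	_, err = svc.CreateLaunchTemplateRequest(&ec2.CreateLaunchTemplateInput{
		LaunchTemplateName: aws.String("imdsv2-required"), // placeholder name
		LaunchTemplateData: &ec2.RequestLaunchTemplateData{
			ImageId: aws.String("ami-0123456789abcdef0"), // placeholder AMI
			MetadataOptions: &ec2.LaunchTemplateInstanceMetadataOptionsRequest{
				// "required": instances reject unsigned (IMDSv1) metadata requests.
				HttpTokens:              ec2.LaunchTemplateHttpTokensStateRequired,
				HttpPutResponseHopLimit: aws.Int64(1),
			},
		},
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```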
Describes an IAM instance profile.
", "refs": { @@ -7991,6 +8132,31 @@ "RequestLaunchTemplateData$InstanceMarketOptions": "The market (purchasing) option for the instances.
" } }, + "LaunchTemplateInstanceMetadataEndpointState": { + "base": null, + "refs": { + "LaunchTemplateInstanceMetadataOptions$HttpEndpoint": "This parameter enables or disables the HTTP metadata endpoint on your instances. If the parameter is not specified, the default state is enabled
.
If you specify a value of disabled
, you will not be able to access your instance metadata.
This parameter enables or disables the HTTP metadata endpoint on your instances. If the parameter is not specified, the default state is enabled
.
If you specify a value of disabled
, you will not be able to access your instance metadata.
The metadata options for the instance. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
", + "refs": { + "ResponseLaunchTemplateData$MetadataOptions": "The metadata options for the instance. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
" + } + }, + "LaunchTemplateInstanceMetadataOptionsRequest": { + "base": "The metadata options for the instance. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
", + "refs": { + "RequestLaunchTemplateData$MetadataOptions": "The metadata options for the instance. For more information, see Instance Metadata and User Data in the Amazon Elastic Compute Cloud User Guide.
" + } + }, + "LaunchTemplateInstanceMetadataOptionsState": { + "base": null, + "refs": { + "LaunchTemplateInstanceMetadataOptions$State": "The state of the metadata option changes.
pending
- The metadata options are being updated and the instance is not ready to process metadata traffic with the new selection.
applied
- The metadata options have been successfully applied on the instance.
Describes a network interface.
", "refs": { @@ -8498,8 +8664,8 @@ "SnapshotInfo$StartTime": "Time this snapshot was started. This is the same for all snapshots initiated by the same request.
", "VolumeStatusEvent$NotAfter": "The latest end time of the event.
", "VolumeStatusEvent$NotBefore": "The earliest start time of the event.
", - "VpcEndpoint$CreationTimestamp": "The date and time the VPC endpoint was created.
", - "VpcEndpointConnection$CreationTimestamp": "The date and time the VPC endpoint was created.
" + "VpcEndpoint$CreationTimestamp": "The date and time that the VPC endpoint was created.
", + "VpcEndpointConnection$CreationTimestamp": "The date and time that the VPC endpoint was created.
" } }, "ModifyCapacityReservationRequest": { @@ -8725,7 +8891,7 @@ "ModifyTransitGatewayVpcAttachmentRequestOptions": { "base": "Describes the options for a VPC attachment.
", "refs": { - "ModifyTransitGatewayVpcAttachmentRequest$Options": "The new VPC attachment options.
" + "ModifyTransitGatewayVpcAttachmentRequest$Options": "The new VPC attachment options.
You cannot modify the IPv6 options.
Information about the interface attachment. If modifying the 'delete on termination' attribute, you must specify the ID of the interface attachment.
" } }, + "NetworkInterfaceAttachmentId": { + "base": null, + "refs": { + "DetachNetworkInterfaceRequest$AttachmentId": "The ID of the attachment.
" + } + }, "NetworkInterfaceAttribute": { "base": null, "refs": { @@ -9187,10 +9359,13 @@ "DescribeInstanceTypeOfferingsResult$NextToken": "The token to use to retrieve the next page of results. This value is null
when there are no more results to return.
The token to retrieve the next page of results.
", "DescribeInstanceTypesResult$NextToken": "The token to use to retrieve the next page of results. This value is null
when there are no more results to return.
The token for the next page of results.
", + "DescribeIpv6PoolsResult$NextToken": "The token to use to retrieve the next page of results. This value is null
when there are no more results to return.
The token for the next page of results.
", "DescribeTrafficMirrorFiltersRequest$NextToken": "The token for the next page of results.
", "DescribeTrafficMirrorSessionsRequest$NextToken": "The token for the next page of results.
", - "DescribeTrafficMirrorTargetsRequest$NextToken": "The token for the next page of results.
" + "DescribeTrafficMirrorTargetsRequest$NextToken": "The token for the next page of results.
", + "GetAssociatedIpv6PoolCidrsRequest$NextToken": "The token for the next page of results.
" } }, "OccurrenceDayRequestSet": { @@ -9479,6 +9654,12 @@ "PlacementGroupList$member": null } }, + "PlacementGroupIdStringList": { + "base": null, + "refs": { + "DescribePlacementGroupsRequest$GroupIds": "The IDs of the placement groups.
" + } + }, "PlacementGroupInfo": { "base": "Describes the placement group support of the instance type.
", "refs": { @@ -9537,14 +9718,26 @@ "PlatformValues": { "base": null, "refs": { - "CreateFleetInstance$Platform": "The value is Windows
for Windows instances; otherwise blank.
The value is Windows
for Windows instances; otherwise blank.
The value is Windows
for Windows instances. Otherwise, the value is blank.
The value is Windows
for Windows instances. Otherwise, the value is blank.
This value is set to windows
for Windows AMIs; otherwise, it is blank.
The instance operating system.
", "ImportInstanceTaskDetails$Platform": "The instance operating system.
", "Instance$Platform": "The value is Windows
for Windows instances; otherwise blank.
Describes a CIDR block for an address pool.
", + "refs": { + "PoolCidrBlocksSet$member": null + } + }, + "PoolCidrBlocksSet": { + "base": null, + "refs": { + "Ipv6Pool$PoolCidrBlocks": "The CIDR blocks for the address pool.
" + } + }, "PoolMaxResults": { "base": null, "refs": { @@ -9643,6 +9836,12 @@ "AllowedPrincipal$PrincipalType": "The type of principal.
" } }, + "PrivateDnsNameConfiguration": { + "base": "Information about the private DNS name for the service endpoint. For more information about these parameters, see VPC Endpoint Service Private DNS Name Verification in the Amazon Virtual Private Cloud User Guide.
", + "refs": { + "ServiceConfiguration$PrivateDnsNameConfiguration": "Information about the endpoint service private DNS name configuration.
" + } + }, "PrivateIpAddressConfigSet": { "base": null, "refs": { @@ -9756,7 +9955,7 @@ } }, "PublicIpv4Pool": { - "base": "Describes an address pool.
", + "base": "Describes an IPv4 address pool.
", "refs": { "PublicIpv4PoolSet$member": null } @@ -9850,6 +10049,12 @@ "SpotPrice$ProductDescription": "A general description of the AMI.
" } }, + "RamdiskId": { + "base": null, + "refs": { + "RegisterImageRequest$RamdiskId": "The ID of the RAM disk.
" + } + }, "ReasonCodesList": { "base": null, "refs": { @@ -10346,7 +10551,7 @@ "LaunchTemplateTagSpecificationRequest$ResourceType": "The type of resource to tag. Currently, the resource types that support tagging on creation are instance
and volume
. To tag a resource after it has been created, see CreateTags.
The type of resource. Currently, the only resource type that is supported is instance
.
The resource type.
", - "TagSpecification$ResourceType": "The type of resource to tag. Currently, the resource types that support tagging on creation are: capacity-reservation
| client-vpn-endpoint
| dedicated-host
| fleet
| fpga-image
| instance
| launch-template
| snapshot
| traffic-mirror-filter
| traffic-mirror-session
| traffic-mirror-target
| transit-gateway
| transit-gateway-attachment
| transit-gateway-route-table
| volume
.
To tag a resource after it has been created, see CreateTags.
" + "TagSpecification$ResourceType": "The type of resource to tag. Currently, the resource types that support tagging on creation are: capacity-reservation
| client-vpn-endpoint
| dedicated-host
| fleet
| fpga-image
| instance
| key-pair
| launch-template
| placement-group
| snapshot
| traffic-mirror-filter
| traffic-mirror-session
| traffic-mirror-target
| transit-gateway
| transit-gateway-attachment
| transit-gateway-route-table
| volume
.
To tag a resource after it has been created, see CreateTags.
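> Editor's note: the updated list adds key-pair and placement-group to the resource types that can be tagged at creation. A sketch of the common case, tagging both the instance and its volumes in one RunInstances call, under the same assumed pre-GA request/Send pattern; the AMI ID and tag values are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	_, err = svc.RunInstancesRequest(&ec2.RunInstancesInput{
		ImageId:      aws.String("ami-0123456789abcdef0"), // placeholder AMI
		InstanceType: ec2.InstanceTypeT3Micro,
		MinCount:     aws.Int64(1),
		MaxCount:     aws.Int64(1),
		// Tag the instance and its volumes at creation instead of making a
		// follow-up CreateTags call.
		TagSpecifications: []ec2.TagSpecification{
			{
				ResourceType: ec2.ResourceTypeInstance,
				Tags:         []ec2.Tag{{Key: aws.String("Name"), Value: aws.String("web-1")}},
			},
			{
				ResourceType: ec2.ResourceTypeVolume,
				Tags:         []ec2.Tag{{Key: aws.String("Name"), Value: aws.String("web-1")}},
			},
		},
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```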
" } }, "ResponseError": { @@ -10422,7 +10627,7 @@ "RootDeviceTypeList": { "base": null, "refs": { - "InstanceTypeInfo$SupportedRootDevices": "Indicates the supported root devices.
" + "InstanceTypeInfo$SupportedRootDeviceTypes": "Indicates the supported root device types.
" } }, "Route": { @@ -10798,10 +11003,11 @@ "ServiceId": { "base": null, "refs": { - "AcceptVpcEndpointConnectionsRequest$ServiceId": "The ID of the endpoint service.
", + "AcceptVpcEndpointConnectionsRequest$ServiceId": "The ID of the VPC endpoint service.
", "CreateVpcEndpointConnectionNotificationRequest$ServiceId": "The ID of the endpoint service.
", "ModifyVpcEndpointServiceConfigurationRequest$ServiceId": "The ID of the service.
", - "RejectVpcEndpointConnectionsRequest$ServiceId": "The ID of the service.
" + "RejectVpcEndpointConnectionsRequest$ServiceId": "The ID of the service.
", + "StartVpcEndpointServicePrivateDnsVerificationRequest$ServiceId": "The ID of the endpoint service.
" } }, "ServiceState": { @@ -10935,8 +11141,8 @@ "SpotAllocationStrategy": { "base": null, "refs": { - "SpotOptions$AllocationStrategy": "Indicates how to allocate the target Spot Instance capacity across the Spot Instance pools specified by the EC2 Fleet.
If the allocation strategy is lowest-price
, EC2 Fleet launches instances from the Spot Instance pools with the lowest price. This is the default allocation strategy.
If the allocation strategy is diversified
, EC2 Fleet launches instances from all the Spot Instance pools that you specify.
If the allocation strategy is capacity-optimized
, EC2 Fleet launches instances from Spot Instance pools with optimal capacity for the number of instances that are launching.
Indicates how to allocate the target Spot Instance capacity across the Spot Instance pools specified by the EC2 Fleet.
If the allocation strategy is lowest-price
, EC2 Fleet launches instances from the Spot Instance pools with the lowest price. This is the default allocation strategy.
If the allocation strategy is diversified
, EC2 Fleet launches instances from all the Spot Instance pools that you specify.
If the allocation strategy is capacity-optimized
, EC2 Fleet launches instances from Spot Instance pools with optimal capacity for the number of instances that are launching.
Indicates how to allocate the target Spot Instance capacity across the Spot Instance pools specified by the EC2 Fleet.
If the allocation strategy is lowest-price
, EC2 Fleet launches instances from the Spot Instance pools with the lowest price. This is the default allocation strategy.
If the allocation strategy is diversified
, EC2 Fleet launches instances from all of the Spot Instance pools that you specify.
If the allocation strategy is capacity-optimized
, EC2 Fleet launches instances from Spot Instance pools with optimal capacity for the number of instances that are launching.
Indicates how to allocate the target Spot Instance capacity across the Spot Instance pools specified by the EC2 Fleet.
If the allocation strategy is lowest-price
, EC2 Fleet launches instances from the Spot Instance pools with the lowest price. This is the default allocation strategy.
If the allocation strategy is diversified
, EC2 Fleet launches instances from all of the Spot Instance pools that you specify.
If the allocation strategy is capacity-optimized
, EC2 Fleet launches instances from Spot Instance pools with optimal capacity for the number of instances that are launching.
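> Editor's note: a hedged CreateFleet sketch selecting the capacity-optimized strategy described above, under the same assumed pre-GA request/Send pattern; the launch template ID is a placeholder.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	_, err = svc.CreateFleetRequest(&ec2.CreateFleetInput{
		// Prefer pools with spare capacity over the cheapest pools, which
		// tends to lower the interruption rate.
		SpotOptions: &ec2.SpotOptionsRequest{
			AllocationStrategy: ec2.SpotAllocationStrategyCapacityOptimized,
		},
		LaunchTemplateConfigs: []ec2.FleetLaunchTemplateConfigRequest{{
			LaunchTemplateSpecification: &ec2.FleetLaunchTemplateSpecificationRequest{
				LaunchTemplateId: aws.String("lt-0123456789abcdef0"), // placeholder
				Version:          aws.String("1"),
			},
		}},
		TargetCapacitySpecification: &ec2.TargetCapacitySpecificationRequest{
			TotalTargetCapacity:       aws.Int64(4),
			DefaultTargetCapacityType: ec2.DefaultTargetCapacityTypeSpot,
		},
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```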
The name of the location from which the IP address is advertised.
", "Address$CustomerOwnedIp": "The customer-owned IP address.
", "Address$CustomerOwnedIpv4Pool": "The ID of the customer-owned address pool.
", - "AdvertiseByoipCidrRequest$Cidr": "The IPv4 address range, in CIDR notation. This must be the exact range that you provisioned. You can't advertise only a portion of the provisioned range.
", + "AdvertiseByoipCidrRequest$Cidr": "The address range, in CIDR notation. This must be the exact range that you provisioned. You can't advertise only a portion of the provisioned range.
", "AllocateAddressRequest$Address": "[EC2-VPC] The Elastic IP address to recover or an IPv4 address from an address pool.
", "AllocateAddressRequest$PublicIpv4Pool": "The ID of an address pool that you own. Use this parameter to let Amazon EC2 select an address from the address pool. To specify a specific address from the address pool, use the Address
parameter instead.
The location from which the IP address is advertised. Use this parameter to limit the address to this location.
Use DescribeVpcs to view the network border groups.
You cannot use a network border group with EC2-Classic. If you attempt this operation on EC2-Classic, you will receive an InvalidParameterCombination
error. For more information, see Error Codes.
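> Editor's note: a sketch of allocating an Elastic IP in a specific network border group, per the constraint above that the parameter cannot be used with EC2-Classic, under the same assumed pre-GA request/Send pattern; the border group name is a placeholder.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	resp, err := svc.AllocateAddressRequest(&ec2.AllocateAddressInput{
		Domain:             ec2.DomainTypeVpc,             // border groups require VPC, not EC2-Classic
		NetworkBorderGroup: aws.String("us-west-2-lax-1"), // placeholder border group name
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(resp.PublicIp))
}
```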
The ID of the internet gateway or virtual private gateway.
", "AssociateRouteTableResult$AssociationId": "The route table association ID. This ID is required for disassociating the route table.
", "AssociateSubnetCidrBlockRequest$Ipv6CidrBlock": "The IPv6 CIDR block for your subnet. The subnet must have a /64 prefix length.
", - "AssociateSubnetCidrBlockRequest$SubnetId": "The ID of your subnet.
", "AssociateSubnetCidrBlockResult$SubnetId": "The ID of the subnet.
", "AssociateTransitGatewayMulticastDomainRequest$TransitGatewayMulticastDomainId": "The ID of the transit gateway multicast domain.
", "AssociateTransitGatewayMulticastDomainRequest$TransitGatewayAttachmentId": "The ID of the transit gateway attachment to associate with the transit gateway multicast domain.
", "AssociateTransitGatewayRouteTableRequest$TransitGatewayRouteTableId": "The ID of the transit gateway route table.
", "AssociateTransitGatewayRouteTableRequest$TransitGatewayAttachmentId": "The ID of the attachment.
", "AssociateVpcCidrBlockRequest$CidrBlock": "An IPv4 CIDR block to associate with the VPC.
", + "AssociateVpcCidrBlockRequest$Ipv6Pool": "The ID of an IPv6 address pool from which to allocate the IPv6 CIDR block.
", + "AssociateVpcCidrBlockRequest$Ipv6CidrBlock": "An IPv6 CIDR block from the IPv6 address pool. You must also specify Ipv6Pool
in the request.
To let Amazon choose the IPv6 CIDR block for you, omit this parameter.
", "AssociateVpcCidrBlockRequest$Ipv6CidrBlockNetworkBorderGroup": "The name of the location from which we advertise the IPV6 CIDR block. Use this parameter to limit the CiDR block to this location.
You must set AmazonProvidedIpv6CidrBlock
to true
to use this parameter.
You can have one IPv6 CIDR block association per network border group.
", "AssociateVpcCidrBlockResult$VpcId": "The ID of the VPC.
", "AssociatedTargetNetwork$NetworkId": "The ID of the subnet.
", @@ -11284,10 +11501,9 @@ "BundleTask$Progress": "The level of task completion, as a percent (for example, 20%).
", "BundleTaskError$Code": "The error code.
", "BundleTaskError$Message": "The error message.
", - "ByoipCidr$Cidr": "The public IPv4 address range, in CIDR notation.
", + "ByoipCidr$Cidr": "The address range, in CIDR notation.
", "ByoipCidr$Description": "The description of the address range.
", "ByoipCidr$StatusMessage": "Upon success, contains the ID of the address pool. Otherwise, contains an error message.
", - "CancelBundleTaskRequest$BundleId": "The ID of the bundle task.
", "CancelConversionRequest$ReasonMessage": "The reason for canceling the conversion task.
", "CancelImportTaskRequest$CancelReason": "The reason for canceling the task.
", "CancelImportTaskResult$ImportTaskId": "The ID of the task being canceled.
", @@ -11376,7 +11592,6 @@ "CopyFpgaImageResult$FpgaImageId": "The ID of the new AFI.
", "CopyImageRequest$ClientToken": "Unique, case-sensitive identifier you provide to ensure idempotency of the request. For more information, see How to Ensure Idempotency in the Amazon Elastic Compute Cloud User Guide.
", "CopyImageRequest$Description": "A description for the new AMI in the destination Region.
", - "CopyImageRequest$KmsKeyId": "An identifier for the AWS Key Management Service (AWS KMS) customer master key (CMK) to use when creating the encrypted volume. This parameter is only required if you want to use a non-default CMK; if this parameter is not specified, the default CMK for EBS is used. If a KmsKeyId
is specified, the Encrypted
flag must also be set.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\". For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
AWS parses KmsKeyId
asynchronously, meaning that the action you call may appear to complete even though you provided an invalid identifier. This action will eventually report failure.
The specified CMK must exist in the Region that the snapshot is being copied to.
", "CopyImageRequest$Name": "The name of the new AMI in the destination Region.
", "CopyImageRequest$SourceImageId": "The ID of the AMI to copy.
", "CopyImageRequest$SourceRegion": "The name of the Region that contains the AMI to copy.
", @@ -11399,7 +11614,6 @@ "CreateClientVpnEndpointResult$ClientVpnEndpointId": "The ID of the Client VPN endpoint.
", "CreateClientVpnEndpointResult$DnsName": "The DNS name to be used by clients when establishing their VPN session.
", "CreateClientVpnRouteRequest$DestinationCidrBlock": "The IPv4 address range, in CIDR notation, of the route destination. For example:
To add a route for Internet access, enter 0.0.0.0/0
To add a route for a peered VPC, enter the peered VPC's IPv4 CIDR range
To add a route for an on-premises network, enter the AWS Site-to-Site VPN connection's IPv4 CIDR range
Route address ranges cannot overlap with the CIDR range specified for client allocation.
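> Editor's note: of the three destination patterns listed above, the internet-access case is the most common. A hedged sketch using the TargetVpcSubnetId parameter described just below, under the same assumed pre-GA request/Send pattern; the endpoint and subnet IDs are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	_, err = svc.CreateClientVpnRouteRequest(&ec2.CreateClientVpnRouteInput{
		ClientVpnEndpointId:  aws.String("cvpn-endpoint-0123456789abcdef0"), // placeholder
		DestinationCidrBlock: aws.String("0.0.0.0/0"),                       // route for internet access
		TargetVpcSubnetId:    aws.String("subnet-0123456789abcdef0"),        // an existing target network
		Description:          aws.String("default route"),
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```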
", - "CreateClientVpnRouteRequest$TargetVpcSubnetId": "The ID of the subnet through which you want to route traffic. The specified subnet must be an existing target network of the Client VPN endpoint.
", "CreateClientVpnRouteRequest$Description": "A brief description of the route.
", "CreateClientVpnRouteRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", "CreateCustomerGatewayRequest$PublicIp": "The Internet-routable IP address for the customer gateway's outside interface. The address must be static.
", @@ -11409,8 +11623,8 @@ "CreateEgressOnlyInternetGatewayRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", "CreateEgressOnlyInternetGatewayResult$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", "CreateFleetError$ErrorCode": "The error code that indicates why the instance could not be launched. For more information about error codes, see Error Codes.
", - "CreateFleetError$ErrorMessage": "The error message that describes why the instance could not be launched. For more information about error messages, see ee Error Codes.
", - "CreateFleetRequest$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see Ensuring Idempotency.
", + "CreateFleetError$ErrorMessage": "The error message that describes why the instance could not be launched. For more information about error messages, see Error Codes.
", + "CreateFleetRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see Ensuring Idempotency.
", "CreateFlowLogsRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", "CreateFlowLogsRequest$DeliverLogsPermissionArn": "The ARN for the IAM role that permits Amazon EC2 to publish flow logs to a CloudWatch Logs log group in your account.
If you specify LogDestinationType
as s3
, do not specify DeliverLogsPermissionArn
or LogGroupName
.
The name of a new or existing CloudWatch Logs log group where Amazon EC2 publishes your flow logs.
If you specify LogDestinationType
as s3
, do not specify DeliverLogsPermissionArn
or LogGroupName
.
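> Editor's note: the constraint repeated above cuts both ways: DeliverLogsPermissionArn and LogGroupName apply only to the CloudWatch Logs destination and must be omitted for S3. A sketch of the CloudWatch Logs case under the same assumed pre-GA request/Send pattern; enum constant names are inferred from the API model, and the VPC ID, role ARN, and log group name are placeholders.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// CloudWatch Logs destination: LogGroupName plus the IAM role that
	// lets EC2 publish into it. For an S3 destination, drop both fields.
	_, err = svc.CreateFlowLogsRequest(&ec2.CreateFlowLogsInput{
		ResourceIds:              []string{"vpc-0123456789abcdef0"}, // placeholder VPC
		ResourceType:             ec2.FlowLogsResourceTypeVpc,
		TrafficType:              ec2.TrafficTypeAll,
		LogGroupName:             aws.String("my-vpc-flow-logs"),
		DeliverLogsPermissionArn: aws.String("arn:aws:iam::123456789012:role/flow-logs-role"),
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```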
The FPGA image identifier (AFI ID).
", "CreateFpgaImageResult$FpgaImageGlobalId": "The global FPGA image identifier (AGFI ID).
", "CreateImageRequest$Description": "A description for the new image.
", - "CreateImageRequest$InstanceId": "The ID of the instance.
", "CreateImageRequest$Name": "A name for the new image.
Constraints: 3-128 alphanumeric characters, parentheses (()), square brackets ([]), spaces ( ), periods (.), slashes (/), dashes (-), single quotes ('), at-signs (@), or underscores(_)
", "CreateImageResult$ImageId": "The ID of the new AMI.
", "CreateInstanceExportTaskRequest$Description": "A description for the conversion task or the resource being exported. The maximum length is 255 bytes.
", @@ -11462,7 +11675,6 @@ "CreateSubnetRequest$CidrBlock": "The IPv4 network range for the subnet, in CIDR notation. For example, 10.0.0.0/24
.
The IPv6 network range for the subnet, in CIDR notation. The subnet size must use a /64 prefix length.
", "CreateSubnetRequest$OutpostArn": "The Amazon Resource Name (ARN) of the Outpost.
", - "CreateSubnetRequest$VpcId": "The ID of the VPC.
", "CreateTrafficMirrorFilterRequest$Description": "The description of the Traffic Mirror filter.
", "CreateTrafficMirrorFilterRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", "CreateTrafficMirrorFilterResult$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", @@ -11494,18 +11706,21 @@ "CreateVolumeRequest$AvailabilityZone": "The Availability Zone in which to create the volume.
", "CreateVolumeRequest$OutpostArn": "The Amazon Resource Name (ARN) of the Outpost.
", "CreateVpcEndpointConnectionNotificationRequest$ConnectionNotificationArn": "The ARN of the SNS topic for the notifications.
", - "CreateVpcEndpointConnectionNotificationRequest$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", - "CreateVpcEndpointConnectionNotificationResult$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request.
", + "CreateVpcEndpointConnectionNotificationRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", + "CreateVpcEndpointConnectionNotificationResult$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", "CreateVpcEndpointRequest$ServiceName": "The service name. To get a list of available services, use the DescribeVpcEndpointServices request, or get the name from the service provider.
", "CreateVpcEndpointRequest$PolicyDocument": "A policy to attach to the endpoint that controls access to the service. The policy must be in valid JSON format. If this parameter is not specified, we attach a default policy that allows full access to the service.
", - "CreateVpcEndpointRequest$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", - "CreateVpcEndpointResult$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request.
", - "CreateVpcEndpointServiceConfigurationRequest$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", - "CreateVpcEndpointServiceConfigurationResult$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request.
", + "CreateVpcEndpointRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", + "CreateVpcEndpointResult$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", + "CreateVpcEndpointServiceConfigurationRequest$PrivateDnsName": "The private DNS name to assign to the VPC endpoint service.
", + "CreateVpcEndpointServiceConfigurationRequest$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see How to Ensure Idempotency.
", + "CreateVpcEndpointServiceConfigurationResult$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", "CreateVpcPeeringConnectionRequest$PeerOwnerId": "The AWS account ID of the owner of the accepter VPC.
Default: Your AWS account ID
", "CreateVpcPeeringConnectionRequest$PeerVpcId": "The ID of the VPC with which you are creating the VPC peering connection. You must specify this parameter in the request.
", "CreateVpcPeeringConnectionRequest$PeerRegion": "The Region code for the accepter VPC, if the accepter VPC is located in a Region other than the Region in which you make the request.
Default: The Region in which you make the request.
", "CreateVpcRequest$CidrBlock": "The IPv4 network range for the VPC, in CIDR notation. For example, 10.0.0.0/16
.
The ID of an IPv6 address pool from which to allocate the IPv6 CIDR block.
", + "CreateVpcRequest$Ipv6CidrBlock": "The IPv6 CIDR block from the IPv6 address pool. You must also specify Ipv6Pool
in the request.
To let Amazon choose the IPv6 CIDR block for you, omit this parameter.
", "CreateVpcRequest$Ipv6CidrBlockNetworkBorderGroup": "The name of the location from which we advertise the IPV6 CIDR block. Use this parameter to limit the address to this location.
You must set AmazonProvidedIpv6CidrBlock
to true
to use this parameter.
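> Editor's note: the new Ipv6Pool/Ipv6CidrBlock pair lets a VPC draw its IPv6 CIDR from a BYOIP pool instead of an Amazon-provided block; per the docs above, the two parameters go together. A hedged sketch under the same assumed pre-GA request/Send pattern; the pool ID is a placeholder and the CIDR uses the 2001:db8 documentation range.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	_, err = svc.CreateVpcRequest(&ec2.CreateVpcInput{
		CidrBlock:     aws.String("10.0.0.0/16"),
		Ipv6Pool:      aws.String("ipv6pool-ec2-0123456789abcdef0"), // placeholder BYOIP pool ID
		Ipv6CidrBlock: aws.String("2001:db8:1234:1a00::/56"),        // a CIDR from that pool
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```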
The type of VPN connection (ipsec.1
).
The CIDR block associated with the local subnet of the customer network.
", @@ -11520,10 +11735,8 @@ "CustomerGateway$Type": "The type of VPN connection the customer gateway supports (ipsec.1
).
The name of the customer gateway device.
", "CustomerGatewayIdStringList$member": null, - "DeleteClientVpnRouteRequest$TargetVpcSubnetId": "The ID of the target subnet used by the route.
", "DeleteClientVpnRouteRequest$DestinationCidrBlock": "The IPv4 address range, in CIDR notation, of the route to be deleted.
", "DeleteFleetError$Message": "The description for the error code.
", - "DeleteKeyPairRequest$KeyName": "The name of the key pair.
", "DeleteLaunchTemplateVersionsResponseErrorItem$LaunchTemplateId": "The ID of the launch template.
", "DeleteLaunchTemplateVersionsResponseErrorItem$LaunchTemplateName": "The name of the launch template.
", "DeleteLaunchTemplateVersionsResponseSuccessItem$LaunchTemplateId": "The ID of the launch template.
", @@ -11541,7 +11754,6 @@ "DeleteRouteTableRequest$RouteTableId": "The ID of the route table.
", "DeleteSecurityGroupRequest$GroupId": "The ID of the security group. Required for a nondefault VPC.
", "DeleteSecurityGroupRequest$GroupName": "[EC2-Classic, default VPC] The name of the security group. You can specify either the security group name or the security group ID.
", - "DeleteSubnetRequest$SubnetId": "The ID of the subnet.
", "DeleteTrafficMirrorFilterResult$TrafficMirrorFilterId": "The ID of the Traffic Mirror filter.
", "DeleteTrafficMirrorFilterRuleResult$TrafficMirrorFilterRuleId": "The ID of the deleted Traffic Mirror rule.
", "DeleteTrafficMirrorSessionResult$TrafficMirrorSessionId": "The ID of the deleted Traffic Mirror session.
", @@ -11554,8 +11766,7 @@ "DeleteTransitGatewayRouteTableRequest$TransitGatewayRouteTableId": "The ID of the transit gateway route table.
", "DeleteTransitGatewayVpcAttachmentRequest$TransitGatewayAttachmentId": "The ID of the attachment.
", "DeleteVpnConnectionRouteRequest$DestinationCidrBlock": "The CIDR block associated with the local subnet of the customer network.
", - "DeprovisionByoipCidrRequest$Cidr": "The public IPv4 address range, in CIDR notation. The prefix must be the same prefix that you specified when you provisioned the address range.
", - "DeregisterImageRequest$ImageId": "The ID of the AMI.
", + "DeprovisionByoipCidrRequest$Cidr": "The address range, in CIDR notation. The prefix must be the same prefix that you specified when you provisioned the address range.
", "DeregisterTransitGatewayMulticastGroupMembersRequest$TransitGatewayMulticastDomainId": "The ID of the transit gateway multicast domain.
", "DeregisterTransitGatewayMulticastGroupMembersRequest$GroupIpAddress": "The IP address assigned to the transit gateway multicast group.
", "DeregisterTransitGatewayMulticastGroupSourcesRequest$TransitGatewayMulticastDomainId": "The ID of the transit gateway multicast domain.
", @@ -11583,7 +11794,7 @@ "DescribeFastSnapshotRestoreSuccessItem$OwnerId": "The ID of the AWS account that owns the snapshot.
", "DescribeFastSnapshotRestoreSuccessItem$OwnerAlias": "The alias of the snapshot owner.
", "DescribeFleetError$ErrorCode": "The error code that indicates why the instance could not be launched. For more information about error codes, see Error Codes.
", - "DescribeFleetError$ErrorMessage": "The error message that describes why the instance could not be launched. For more information about error messages, see ee Error Codes.
", + "DescribeFleetError$ErrorMessage": "The error message that describes why the instance could not be launched. For more information about error messages, see Error Codes.
", "DescribeFleetHistoryRequest$NextToken": "The token for the next set of results.
", "DescribeFleetHistoryResult$NextToken": "The token for the next set of results.
", "DescribeFleetInstancesRequest$NextToken": "The token for the next set of results.
", @@ -11753,12 +11964,10 @@ "DisableFastSnapshotRestoreSuccessItem$OwnerAlias": "The alias of the snapshot owner.
", "DisableTransitGatewayRouteTablePropagationRequest$TransitGatewayRouteTableId": "The ID of the propagation route table.
", "DisableTransitGatewayRouteTablePropagationRequest$TransitGatewayAttachmentId": "The ID of the attachment.
", - "DisassociateAddressRequest$AssociationId": "[EC2-VPC] The association ID. Required for EC2-VPC.
", "DisassociateAddressRequest$PublicIp": "[EC2-Classic] The Elastic IP address. Required for EC2-Classic.
", "DisassociateClientVpnTargetNetworkRequest$AssociationId": "The ID of the target network association.
", "DisassociateClientVpnTargetNetworkResult$AssociationId": "The ID of the target network association.
", "DisassociateIamInstanceProfileRequest$AssociationId": "The ID of the IAM instance profile association.
", - "DisassociateSubnetCidrBlockRequest$AssociationId": "The association ID for the CIDR block.
", "DisassociateSubnetCidrBlockResult$SubnetId": "The ID of the subnet.
", "DisassociateTransitGatewayMulticastDomainRequest$TransitGatewayMulticastDomainId": "The ID of the transit gateway multicast domain.
", "DisassociateTransitGatewayMulticastDomainRequest$TransitGatewayAttachmentId": "The ID of the attachment.
", @@ -11775,13 +11984,12 @@ "EbsBlockDevice$SnapshotId": "The ID of the snapshot.
", "EbsBlockDevice$KmsKeyId": "Identifier (key ID, key alias, ID ARN, or alias ARN) for a customer managed CMK under which the EBS volume is encrypted.
This parameter is only supported on BlockDeviceMapping
objects called by RunInstances, RequestSpotFleet, and RequestSpotInstances.
The ID of the EBS volume.
", - "EbsInstanceBlockDeviceSpecification$VolumeId": "The ID of the EBS volume.
", "ElasticGpuAssociation$ElasticGpuId": "The ID of the Elastic Graphics accelerator.
", "ElasticGpuAssociation$ElasticGpuAssociationId": "The ID of the association.
", "ElasticGpuAssociation$ElasticGpuAssociationState": "The state of the association between the instance and the Elastic Graphics accelerator.
", "ElasticGpuAssociation$ElasticGpuAssociationTime": "The time the Elastic Graphics accelerator was associated with the instance.
", "ElasticGpuIdSet$member": null, - "ElasticGpuSpecification$Type": "The type of Elastic Graphics accelerator.
", + "ElasticGpuSpecification$Type": "The type of Elastic Graphics accelerator. For more information about the values to specify for Type
, see Elastic Graphics Basics, specifically the Elastic Graphics accelerator column, in the Amazon Elastic Compute Cloud User Guide for Windows Instances.
The elastic GPU type.
", "ElasticGpus$ElasticGpuId": "The ID of the Elastic Graphics accelerator.
", "ElasticGpus$AvailabilityZone": "The Availability Zone in the which the Elastic Graphics accelerator resides.
", @@ -11842,7 +12050,7 @@ "ExportTransitGatewayRoutesResult$S3Location": "The URL of the exported file in Amazon S3. For example, s3://bucket_name/VPCTransitGateway/TransitGatewayRouteTables/file_name.
", "FailedQueuedPurchaseDeletion$ReservedInstancesId": "The ID of the Reserved Instance.
", "Filter$Name": "The name of the filter. Filter names are case-sensitive.
", - "FleetData$ClientToken": "Unique, case-sensitive identifier you provide to ensure the idempotency of the request. For more information, see Ensuring Idempotency.
Constraints: Maximum 64 ASCII characters
", + "FleetData$ClientToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request. For more information, see Ensuring Idempotency.
Constraints: Maximum 64 ASCII characters
", "FleetLaunchTemplateOverrides$MaxPrice": "The maximum price per unit hour that you are willing to pay for a Spot Instance.
", "FleetLaunchTemplateOverrides$SubnetId": "The ID of the subnet in which to launch the instances.
", "FleetLaunchTemplateOverrides$AvailabilityZone": "The Availability Zone in which to launch the instances.
", @@ -11874,6 +12082,8 @@ "FpgaImageAttribute$Description": "The description of the AFI.
", "FpgaImageIdList$member": null, "FpgaImageState$Message": "If the state is failed
, this is the error message.
The ID of the IPv6 address pool.
", + "GetAssociatedIpv6PoolCidrsResult$NextToken": "The token to use to retrieve the next page of results. This value is null
when there are no more results to return.
The token to retrieve the next page of results.
", "GetCapacityReservationUsageResult$NextToken": "The token to use to retrieve the next page of results. This value is null
when there are no more results to return.
The ID of the Capacity Reservation.
", @@ -11966,7 +12176,6 @@ "ImportImageRequest$ClientToken": "The token to enable idempotency for VM import requests.
", "ImportImageRequest$Description": "A description string for the import image task.
", "ImportImageRequest$Hypervisor": "The target hypervisor platform.
Valid values: xen
An identifier for the AWS Key Management Service (AWS KMS) customer master key (CMK) to use when creating the encrypted AMI. This parameter is only required if you want to use a non-default CMK; if this parameter is not specified, the default CMK for EBS is used. If a KmsKeyId
is specified, the Encrypted
flag must also be set.
The CMK identifier may be provided in any of the following formats:
Key ID
Key alias
ARN using key ID. The ID ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the key
namespace, and then the CMK ID. For example, arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef.
ARN using key alias. The alias ARN contains the arn:aws:kms
namespace, followed by the Region of the CMK, the AWS account ID of the CMK owner, the alias
namespace, and then the CMK alias. For example, arn:aws:kms:us-east-1:012345678910:alias/ExampleAlias.
AWS parses KmsKeyId
asynchronously, meaning that the action you call may appear to complete even though you provided an invalid identifier. This action will eventually report failure.
The specified CMK must exist in the Region that the AMI is being copied to.
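The accepted KmsKeyId formats above map directly onto an import call. A minimal sketch using the v2 preview SDK's request/send pattern; the field values, including the key ARN, are the placeholder examples from the documentation, and this is an illustration rather than a definitive recipe:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// KmsKeyId accepts any of the formats listed above (key ID, key alias,
	// or either ARN form); Encrypted must be set whenever KmsKeyId is.
	// The ARN below is the placeholder value from the documentation.
	req := svc.ImportImageRequest(&ec2.ImportImageInput{
		Encrypted: aws.Bool(true),
		KmsKeyId:  aws.String("arn:aws:kms:us-east-1:012345678910:key/abcd1234-a123-456a-a12b-a123b4cd56ef"),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```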
", "ImportImageRequest$LicenseType": "The license type to be used for the Amazon Machine Image (AMI) after importing.
By default, we detect the source-system operating system (OS) and apply the appropriate license. Specify AWS
to replace the source-system license with an AWS license, if appropriate. Specify BYOL
to retain the source-system license, if appropriate.
To use BYOL
, you must have existing licenses with rights to use these licenses in a third party cloud, such as AWS. For more information, see Prerequisites in the VM Import/Export User Guide.
The operating system of the virtual machine.
Valid values: Windows
| Linux
The name of the role to use when not using the default role, 'vmimport'.
", @@ -11975,7 +12184,7 @@ "ImportImageResult$Hypervisor": "The target hypervisor of the import task.
", "ImportImageResult$ImageId": "The ID of the Amazon Machine Image (AMI) created by the import task.
", "ImportImageResult$ImportTaskId": "The task ID of the import image task.
", - "ImportImageResult$KmsKeyId": "The identifier for the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used to create the encrypted AMI.
", + "ImportImageResult$KmsKeyId": "The identifier for the symmetric AWS Key Management Service (AWS KMS) customer master key (CMK) that was used to create the encrypted AMI.
", "ImportImageResult$LicenseType": "The license type of the virtual machine.
", "ImportImageResult$Platform": "The operating system of the virtual machine.
", "ImportImageResult$Progress": "The progress of the task.
", @@ -12002,7 +12211,6 @@ "ImportInstanceVolumeDetailItem$Description": "A description of the task.
", "ImportInstanceVolumeDetailItem$Status": "The status of the import of this particular disk image.
", "ImportInstanceVolumeDetailItem$StatusMessage": "The status information or errors related to the disk image.
", - "ImportKeyPairRequest$KeyName": "A unique name for the key pair.
", "ImportKeyPairResult$KeyFingerprint": "The MD5 public key fingerprint as specified in section 4 of RFC 4716.
", "ImportKeyPairResult$KeyName": "The key pair name you provided.
", "ImportSnapshotRequest$ClientToken": "Token to enable idempotency for VM import requests.
", @@ -12084,17 +12292,26 @@ "InternetGatewayAttachment$VpcId": "The ID of the VPC.
", "IpPermission$IpProtocol": "The IP protocol name (tcp
, udp
, icmp
, icmpv6
) or number (see Protocol Numbers).
[VPC only] Use -1
to specify all protocols. When authorizing security group rules, specifying -1
or a protocol number other than tcp
, udp
, icmp
, or icmpv6
allows traffic on all ports, regardless of any port range you specify. For tcp
, udp
, and icmp
, you must specify a port range. For icmpv6
, the port range is optional; if you omit the port range, traffic for all types and codes is allowed.
The IPv4 CIDR range. You can either specify a CIDR range or a source security group, not both. To specify a single IPv4 address, use the /32 prefix length.
", - "IpRange$Description": "A description for the security group rule that references this IPv4 address range.
Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*
", + "IpRange$Description": "A description for the security group rule that references this IPv4 address range.
Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*
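To make the protocol and description rules above concrete, here is a hedged sketch of a single ingress rule; the value-slice field shapes are assumed to follow the v2 preview conventions, and the group ID and CIDR are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// tcp requires an explicit port range, and the description uses only
	// the characters allowed by the constraint above.
	req := svc.AuthorizeSecurityGroupIngressRequest(&ec2.AuthorizeSecurityGroupIngressInput{
		GroupId: aws.String("sg-0123456789abcdef0"),
		IpPermissions: []ec2.IpPermission{{
			IpProtocol: aws.String("tcp"),
			FromPort:   aws.Int64(443),
			ToPort:     aws.Int64(443),
			IpRanges: []ec2.IpRange{{
				CidrIp:      aws.String("203.0.113.0/24"),
				Description: aws.String("HTTPS from the office range"),
			}},
		}},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```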
", "IpRanges$member": null, "Ipv6AddressList$member": null, + "Ipv6CidrAssociation$Ipv6Cidr": "The IPv6 CIDR block.
", + "Ipv6CidrAssociation$AssociatedResource": "The resource that's associated with the IPv6 CIDR block.
", "Ipv6CidrBlock$Ipv6CidrBlock": "The IPv6 CIDR block.
", + "Ipv6Pool$PoolId": "The ID of the address pool.
", + "Ipv6Pool$Description": "The description for the address pool.
", "Ipv6Range$CidrIpv6": "The IPv6 CIDR range. You can either specify a CIDR range or a source security group, not both. To specify a single IPv6 address, use the /128 prefix length.
", - "Ipv6Range$Description": "A description for the security group rule that references this IPv6 address range.
Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*
", + "Ipv6Range$Description": "A description for the security group rule that references this IPv6 address range.
Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=&;{}!$*
", "KeyNameStringList$member": null, "KeyPair$KeyFingerprint": "The SHA-1 digest of the DER encoded private key.
", "KeyPair$KeyName": "The name of the key pair.
", + "KeyPair$KeyPairId": "The ID of the key pair.
", + "KeyPairIdStringList$member": null, + "KeyPairInfo$KeyPairId": "The ID of the key pair.
", "KeyPairInfo$KeyFingerprint": "If you used CreateKeyPair to create the key pair, this is the SHA-1 digest of the DER encoded private key. If you used ImportKeyPair to provide AWS the public key, this is the MD5 public key fingerprint as specified in section 4 of RFC4716.
", "KeyPairInfo$KeyName": "The name of the key pair.
", + "LastError$Message": "The error message for the VPC endpoint error.
", + "LastError$Code": "The error code for the VPC endpoint error.
", "LaunchPermission$UserId": "The AWS account ID.
", "LaunchSpecification$UserData": "The Base64-encoded user data for the instance.
", "LaunchSpecification$AddressingType": "Deprecated.
", @@ -12113,7 +12330,7 @@ "LaunchTemplateBlockDeviceMappingRequest$NoDevice": "Suppresses the specified device included in the block device mapping of the AMI.
", "LaunchTemplateEbsBlockDevice$KmsKeyId": "The ARN of the AWS Key Management Service (AWS KMS) CMK used for encryption.
", "LaunchTemplateEbsBlockDevice$SnapshotId": "The ID of the snapshot.
", - "LaunchTemplateEbsBlockDeviceRequest$KmsKeyId": "The ARN of the AWS Key Management Service (AWS KMS) CMK used for encryption.
", + "LaunchTemplateEbsBlockDeviceRequest$KmsKeyId": "The ARN of the symmetric AWS Key Management Service (AWS KMS) CMK used for encryption.
", "LaunchTemplateEbsBlockDeviceRequest$SnapshotId": "The ID of the snapshot.
", "LaunchTemplateElasticInferenceAccelerator$Type": "The type of elastic inference accelerator. The possible values are eia1.medium, eia1.large, and eia1.xlarge.
", "LaunchTemplateElasticInferenceAcceleratorResponse$Type": "The type of elastic inference accelerator. The possible values are eia1.medium, eia1.large, and eia1.xlarge.
", @@ -12141,7 +12358,7 @@ "LaunchTemplatePlacement$GroupName": "The name of the placement group for the instance.
", "LaunchTemplatePlacement$HostId": "The ID of the Dedicated Host for the instance.
", "LaunchTemplatePlacement$SpreadDomain": "Reserved for future use.
", - "LaunchTemplatePlacement$HostResourceGroupArn": "The ARN of the host resource group in which to launch the instances.
", + "LaunchTemplatePlacement$HostResourceGroupArn": "The ARN of the host resource group in which to launch the instances.
", "LaunchTemplatePlacementRequest$AvailabilityZone": "The Availability Zone for the instance.
", "LaunchTemplatePlacementRequest$Affinity": "The affinity setting for an instance on a Dedicated Host.
", "LaunchTemplatePlacementRequest$GroupName": "The name of the placement group for the instance.
", @@ -12204,7 +12421,6 @@ "ModifyIdentityIdFormatRequest$PrincipalArn": "The ARN of the principal, which can be an IAM user, IAM role, or the root user. Specify all
to modify the ID format for all IAM users, IAM roles, and the root user of the account.
The type of resource: bundle
| conversion-task
| customer-gateway
| dhcp-options
| elastic-ip-allocation
| elastic-ip-association
| export-task
| flow-log
| image
| import-task
| internet-gateway
| network-acl
| network-acl-association
| network-interface
| network-interface-attachment
| prefix-list
| route-table
| route-table-association
| security-group
| subnet
| subnet-cidr-block-association
| vpc
| vpc-cidr-block-association
| vpc-endpoint
| vpc-peering-connection
| vpn-connection
| vpn-gateway
.
Alternatively, use the all-current
option to include all resource types that are currently within their opt-in period for longer IDs.
The name of the attribute to modify. The valid values are description
, launchPermission
, and productCodes
.
The ID of the AMI.
", "ModifyImageAttributeRequest$Value": "The value of the attribute being modified. This parameter can be used only when the Attribute
parameter is description
or productCodes
.
A new value for the attribute. Use only with the kernel
, ramdisk
, userData
, disableApiTermination
, or instanceInitiatedShutdownBehavior
attribute.
A unique, case-sensitive token that you provide to ensure idempotency of your modification request. For more information, see Ensuring Idempotency.
", @@ -12216,7 +12432,6 @@ "ModifyLaunchTemplateRequest$DefaultVersion": "The version number of the launch template to set as the default version.
", "ModifyReservedInstancesRequest$ClientToken": "A unique, case-sensitive token you provide to ensure idempotency of your modification request. For more information, see Ensuring Idempotency.
", "ModifyReservedInstancesResult$ReservedInstancesModificationId": "The ID for the modification.
", - "ModifySubnetAttributeRequest$SubnetId": "The ID of the subnet.
", "ModifyTrafficMirrorFilterRuleRequest$DestinationCidrBlock": "The destination CIDR block to assign to the Traffic Mirror rule.
", "ModifyTrafficMirrorFilterRuleRequest$SourceCidrBlock": "The source CIDR block to assign to the Traffic Mirror rule.
", "ModifyTrafficMirrorFilterRuleRequest$Description": "The description to assign to the Traffic Mirror rule.
", @@ -12224,6 +12439,7 @@ "ModifyTransitGatewayVpcAttachmentRequest$TransitGatewayAttachmentId": "The ID of the attachment.
", "ModifyVpcEndpointConnectionNotificationRequest$ConnectionNotificationArn": "The ARN for the SNS topic for the notification.
", "ModifyVpcEndpointRequest$PolicyDocument": "A policy to attach to the endpoint that controls access to the service. The policy must be in valid JSON format.
", + "ModifyVpcEndpointServiceConfigurationRequest$PrivateDnsName": "The private DNS name to assign to the endpoint service.
", "ModifyVpcEndpointServicePermissionsRequest$ServiceId": "The ID of the service.
", "ModifyVpnTunnelCertificateRequest$VpnTunnelOutsideIpAddress": "The external IP address of the VPN tunnel.
", "ModifyVpnTunnelOptionsRequest$VpnTunnelOutsideIpAddress": "The external IP address of the VPN tunnel.
", @@ -12308,27 +12524,33 @@ "Placement$SpreadDomain": "Reserved for future use.
", "Placement$HostResourceGroupArn": "The ARN of the host resource group in which to launch the instances. If you specify a host resource group ARN, omit the Tenancy parameter or set it to host
.
The name of the placement group.
", + "PlacementGroup$GroupId": "The ID of the placement group.
", + "PlacementGroupIdStringList$member": null, "PlacementGroupStringList$member": null, - "PlacementResponse$GroupName": "The name of the placement group the instance is in.
", + "PlacementResponse$GroupName": "The name of the placement group that the instance is in.
", + "PoolCidrBlock$Cidr": "The CIDR block.
", "PrefixList$PrefixListId": "The ID of the prefix.
", "PrefixList$PrefixListName": "The name of the prefix.
", "PrefixListId$Description": "A description for the security group rule that references this prefix list ID.
Constraints: Up to 255 characters in length. Allowed characters are a-z, A-Z, 0-9, spaces, and ._-:/()#,@[]+=;{}!$*
", "PrefixListId$PrefixListId": "The ID of the prefix.
", "PrefixListIdSet$member": null, "PrincipalIdFormat$Arn": "PrincipalIdFormatARN description
", + "PrivateDnsNameConfiguration$Type": "The endpoint service verification type, for example TXT.
", + "PrivateDnsNameConfiguration$Value": "The value the service provider adds to the private DNS name domain record before verification.
", + "PrivateDnsNameConfiguration$Name": "The name of the record subdomain the service provider needs to create. The service provider adds the value
text to the name
.
The private IPv4 addresses.
", "PrivateIpAddressStringList$member": null, "ProductCode$ProductCodeId": "The product code.
", "ProductCodeStringList$member": null, "ProductDescriptionList$member": null, "PropagatingVgw$GatewayId": "The ID of the virtual private gateway.
", - "ProvisionByoipCidrRequest$Cidr": "The public IPv4 address range, in CIDR notation. The most specific prefix that you can specify is /24. The address range cannot overlap with another address range that you've brought to this or another Region.
", + "ProvisionByoipCidrRequest$Cidr": "The public IPv4 or IPv6 address range, in CIDR notation. The most specific IPv4 prefix that you can specify is /24. The most specific IPv6 prefix you can specify is /56. The address range cannot overlap with another address range that you've brought to this or another Region.
", "ProvisionByoipCidrRequest$Description": "A description for the address range and the address pool.
", "ProvisionedBandwidth$Provisioned": "Reserved. If you need to sustain traffic greater than the documented limits, contact us through the Support Center.
", "ProvisionedBandwidth$Requested": "Reserved. If you need to sustain traffic greater than the documented limits, contact us through the Support Center.
", "ProvisionedBandwidth$Status": "Reserved. If you need to sustain traffic greater than the documented limits, contact us through the Support Center.
", "PublicIpStringList$member": null, - "PublicIpv4Pool$PoolId": "The ID of the IPv4 address pool.
", + "PublicIpv4Pool$PoolId": "The ID of the address pool.
", "PublicIpv4Pool$Description": "A description of the address pool.
", "PublicIpv4PoolRange$FirstAddress": "The first IP address in the range.
", "PublicIpv4PoolRange$LastAddress": "The last IP address in the range.
", @@ -12351,9 +12573,7 @@ "RegionNameStringList$member": null, "RegisterImageRequest$ImageLocation": "The full path to your AMI manifest in Amazon S3 storage. The specified bucket must have the aws-exec-read
canned access control list (ACL) to ensure that it can be accessed by Amazon EC2. For more information, see Canned ACLs in the Amazon S3 Service Developer Guide.
A description for your AMI.
", - "RegisterImageRequest$KernelId": "The ID of the kernel.
", "RegisterImageRequest$Name": "A name for your AMI.
Constraints: 3-128 alphanumeric characters, parentheses (()), square brackets ([]), spaces ( ), periods (.), slashes (/), dashes (-), single quotes ('), at-signs (@), or underscores (_)
", - "RegisterImageRequest$RamdiskId": "The ID of the RAM disk.
", "RegisterImageRequest$RootDeviceName": "The device name of the root device volume (for example, /dev/sda1
).
Set to simple
to enable enhanced networking with the Intel 82599 Virtual Function interface for the AMI and any instances that you launch from the AMI.
There is no way to disable sriovNetSupport
at this time.
This option is supported only for HVM AMIs. Specifying this option with a PV AMI can make instances launched from the AMI unreachable.
", "RegisterImageRequest$VirtualizationType": "The type of virtualization (hvm
| paravirtual
).
Default: paravirtual
The IP address assigned to the transit gateway multicast group.
", "RejectTransitGatewayPeeringAttachmentRequest$TransitGatewayAttachmentId": "The ID of the transit gateway peering attachment.
", "RejectTransitGatewayVpcAttachmentRequest$TransitGatewayAttachmentId": "The ID of the attachment.
", - "ReleaseAddressRequest$AllocationId": "[EC2-VPC] The allocation ID. Required for EC2-VPC.
", "ReleaseAddressRequest$PublicIp": "[EC2-Classic] The Elastic IP address. Required for EC2-Classic.
", "ReleaseAddressRequest$NetworkBorderGroup": "The location that the IP address is released from.
If you provide an incorrect network border group, you will receive an InvalidAddress.NotFound
error. For more information, see Error Codes.
You cannot use a network border group with EC2-Classic. If you attempt this operation on EC2-Classic, you will receive an InvalidParameterCombination
error. For more information, see Error Codes.
The ID of the existing IAM instance profile association.
", @@ -12426,7 +12645,6 @@ "ReservedInstancesOffering$ReservedInstancesOfferingId": "The ID of the Reserved Instance offering. This is the offering ID used in GetReservedInstancesExchangeQuote to confirm that an exchange can be made.
", "ReservedInstancesOfferingIdStringList$member": null, "ResetEbsDefaultKmsKeyIdResult$KmsKeyId": "The Amazon Resource Name (ARN) of the default CMK for EBS encryption by default.
", - "ResetImageAttributeRequest$ImageId": "The ID of the AMI.
", "ResetNetworkInterfaceAttributeRequest$SourceDestCheck": "The source/destination checking attribute. Resets the value to true
.
The association ID for the IPv6 CIDR block.
", "VpcIpv6CidrBlockAssociation$Ipv6CidrBlock": "The IPv6 CIDR block.
", + "VpcIpv6CidrBlockAssociation$Ipv6Pool": "The ID of the IPv6 address pool from which the IPv6 CIDR block is allocated.
", "VpcIpv6CidrBlockAssociation$NetworkBorderGroup": "The name of the location from which we advertise the IPV6 CIDR block.
", "VpcPeeringConnection$VpcPeeringConnectionId": "The ID of the VPC peering connection.
", "VpcPeeringConnectionStateReason$Message": "A message that provides more information about the status, if applicable.
", @@ -12834,7 +13053,7 @@ "VpnStaticRoute$DestinationCidrBlock": "The CIDR block associated with the local subnet of the customer data center.
", "VpnTunnelOptionsSpecification$TunnelInsideCidr": "The range of inside IP addresses for the tunnel. Any specified CIDR blocks must be unique across all VPN connections that use the same virtual private gateway.
Constraints: A size /30 CIDR block from the 169.254.0.0/16
range. The following CIDR blocks are reserved and cannot be used:
169.254.0.0/30
169.254.1.0/30
169.254.2.0/30
169.254.3.0/30
169.254.4.0/30
169.254.5.0/30
169.254.169.252/30
The pre-shared key (PSK) to establish initial authentication between the virtual private gateway and customer gateway.
Constraints: Allowed characters are alphanumeric characters, periods (.), and underscores (_). Must be between 8 and 64 characters in length and cannot start with zero (0).
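Those PSK constraints translate directly into a validation check. A minimal, self-contained sketch; the helper name and regular expression are illustrative and not part of the SDK:

```go
package main

import (
	"fmt"
	"regexp"
)

// pskRe encodes the documented constraints: alphanumerics, periods, and
// underscores only; 8 to 64 characters total; must not start with zero.
var pskRe = regexp.MustCompile(`^[1-9A-Za-z._][0-9A-Za-z._]{7,63}$`)

// validPSK reports whether a pre-shared key satisfies those constraints.
func validPSK(psk string) bool {
	return pskRe.MatchString(psk)
}

func main() {
	fmt.Println(validPSK("my.secure_key.123")) // true
	fmt.Println(validPSK("0starts.with.zero")) // false: leading zero
	fmt.Println(validPSK("short"))             // false: under 8 characters
}
```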
", - "WithdrawByoipCidrRequest$Cidr": "The public IPv4 address range, in CIDR notation.
", + "WithdrawByoipCidrRequest$Cidr": "The address range, in CIDR notation.
", "ZoneIdStringList$member": null, "ZoneNameStringList$member": null } @@ -12860,6 +13079,12 @@ "TransitGatewayMulticastDomainAssociations$Subnets": "The subnets associated with the multicast domain.
" } }, + "SubnetCidrAssociationId": { + "base": null, + "refs": { + "DisassociateSubnetCidrBlockRequest$AssociationId": "The association ID for the CIDR block.
" + } + }, "SubnetCidrBlockState": { "base": "Describes the state of a CIDR block.
", "refs": { @@ -12876,8 +13101,13 @@ "base": null, "refs": { "AssociateClientVpnTargetNetworkRequest$SubnetId": "The ID of the subnet to associate with the Client VPN endpoint.
", + "AssociateSubnetCidrBlockRequest$SubnetId": "The ID of your subnet.
", + "CreateClientVpnRouteRequest$TargetVpcSubnetId": "The ID of the subnet through which you want to route traffic. The specified subnet must be an existing target network of the Client VPN endpoint.
", "CreateNatGatewayRequest$SubnetId": "The subnet in which to create the NAT gateway.
", - "CreateNetworkInterfaceRequest$SubnetId": "The ID of the subnet to associate with the network interface.
" + "CreateNetworkInterfaceRequest$SubnetId": "The ID of the subnet to associate with the network interface.
", + "DeleteClientVpnRouteRequest$TargetVpcSubnetId": "The ID of the target subnet used by the route.
", + "DeleteSubnetRequest$SubnetId": "The ID of the subnet.
", + "ModifySubnetAttributeRequest$SubnetId": "The ID of the subnet.
" } }, "SubnetIdStringList": { @@ -12974,19 +13204,33 @@ "CustomerGateway$Tags": "Any tags assigned to the customer gateway.
", "DeleteTagsRequest$Tags": "The tags to delete. Specify a tag key and an optional tag value to delete specific tags. If you specify a tag key without a tag value, we delete any tag with this key regardless of its value. If you specify a tag key with an empty string as the tag value, we delete the tag only if its value is an empty string.
If you omit this parameter, we delete all user-defined tags for the specified resources. We do not delete AWS-generated tags (tags that have the aws:
prefix).
Any tags assigned to the DHCP options set.
", + "EgressOnlyInternetGateway$Tags": "The tags assigned to the egress-only internet gateway.
", + "ElasticGpus$Tags": "The tags assigned to the Elastic Graphics accelerator.
", + "ExportTask$Tags": "The tags for the export task.
", "FleetData$Tags": "The tags for an EC2 Fleet resource.
", "FpgaImage$Tags": "Any tags assigned to the AFI.
", "Host$Tags": "Any tags assigned to the Dedicated Host.
", "HostReservation$Tags": "Any tags assigned to the Dedicated Host Reservation.
", "Image$Tags": "Any tags assigned to the image.
", + "ImportImageTask$Tags": "The tags for the import image task.
", + "ImportSnapshotTask$Tags": "The tags for the import snapshot task.
", "Instance$Tags": "Any tags assigned to the instance.
", "InternetGateway$Tags": "Any tags assigned to the internet gateway.
", + "Ipv6Pool$Tags": "Any tags for the address pool.
", + "KeyPairInfo$Tags": "Any tags applied to the key pair.
", "LaunchTemplate$Tags": "The tags for the launch template.
", "LaunchTemplateTagSpecification$Tags": "The tags for the resource.
", "LaunchTemplateTagSpecificationRequest$Tags": "The tags to apply to the resource.
", + "LocalGateway$Tags": "The tags assigned to the local gateway.
", + "LocalGatewayRouteTable$Tags": "The tags assigned to the local gateway route table.
", + "LocalGatewayRouteTableVirtualInterfaceGroupAssociation$Tags": "The tags assigned to the association.
", + "LocalGatewayRouteTableVpcAssociation$Tags": "The tags assigned to the association.
", + "LocalGatewayVirtualInterface$Tags": "The tags assigned to the virtual interface.
", + "LocalGatewayVirtualInterfaceGroup$Tags": "The tags assigned to the virtual interface group.
", "NatGateway$Tags": "The tags for the NAT gateway.
", "NetworkAcl$Tags": "Any tags assigned to the network ACL.
", "NetworkInterface$TagSet": "Any tags assigned to the network interface.
", + "PlacementGroup$Tags": "Any tags applied to the placement group.
", "ReservedInstances$Tags": "Any tags assigned to the resource.
", "ReservedInstancesListing$Tags": "Any tags assigned to the resource.
", "RouteTable$Tags": "Any tags assigned to the route table.
", @@ -13048,13 +13292,13 @@ } }, "TargetCapacitySpecification": { - "base": "The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
, or both to ensure your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in and
The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
, or both to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in and
The number of units to request. You can choose to set the target capacity in terms of instances or a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
The number of units to request. You can choose to set the target capacity as the number of instances. Or you can set the target capacity to a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
parameter, or both parameters to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in and .
The number of units to request. You can choose to set the target capacity as the number of instances. Or you can set the target capacity to a performance characteristic that is important to your application workload, such as vCPUs, memory, or I/O. If the request type is maintain
, you can specify a target capacity of 0 and add capacity later.
You can use the On-Demand Instance MaxTotalPrice
parameter, the Spot Instance MaxTotalPrice
parameter, or both parameters to ensure that your fleet cost does not exceed your budget. If you set a maximum price per hour for the On-Demand Instances and Spot Instances in your request, EC2 Fleet will launch instances until it reaches the maximum amount that you're willing to pay. When the maximum amount you're willing to pay is reached, the fleet stops launching instances even if it hasn’t met the target capacity. The MaxTotalPrice
parameters are located in and .
The number of units to request.
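As an illustration of splitting target capacity between purchase options, a hedged sketch of the request shape; the Go struct and enum names are assumed to follow the EC2 model shown in this file:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	// Ten units total: two On-Demand, eight Spot, with Spot as the default
	// type for any otherwise unallocated capacity. A maintain-type fleet
	// could instead start at 0 and be scaled up later, as noted above.
	spec := ec2.TargetCapacitySpecificationRequest{
		TotalTargetCapacity:       aws.Int64(10),
		OnDemandTargetCapacity:    aws.Int64(2),
		SpotTargetCapacity:        aws.Int64(8),
		DefaultTargetCapacityType: ec2.DefaultTargetCapacityTypeSpot,
	}
	fmt.Printf("%+v\n", spec)
}
```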
", "ModifyFleetRequest$TargetCapacitySpecification": "The size of the EC2 Fleet.
" @@ -13968,6 +14212,7 @@ "DescribeClientVpnTargetNetworksRequest$AssociationIds": "The IDs of the target network associations.
", "DescribeFlowLogsRequest$FlowLogIds": "One or more flow log IDs.
Constraint: Maximum of 1000 flow log IDs.
", "DescribeInternetGatewaysRequest$InternetGatewayIds": "One or more internet gateway IDs.
Default: Describes all your internet gateways.
", + "DescribeIpv6PoolsRequest$PoolIds": "The IDs of the IPv6 address pools.
", "DescribeLaunchTemplatesRequest$LaunchTemplateIds": "One or more launch template IDs.
", "DescribeMovingAddressesRequest$PublicIps": "One or more Elastic IP addresses.
", "DescribeNatGatewaysRequest$NatGatewayIds": "One or more NAT gateway IDs.
", @@ -14009,7 +14254,7 @@ "RequestSpotLaunchSpecification$SecurityGroups": "One or more security groups. When requesting instances in a VPC, you must specify the IDs of the security groups. When requesting instances in EC2-Classic, you can specify the names or the IDs of the security groups.
", "ResponseLaunchTemplateData$SecurityGroupIds": "The security group IDs.
", "ResponseLaunchTemplateData$SecurityGroups": "The security group names.
", - "ServiceConfiguration$AvailabilityZones": "In the Availability Zones in which the service is available.
", + "ServiceConfiguration$AvailabilityZones": "The Availability Zones in which the service is available.
", "ServiceConfiguration$NetworkLoadBalancerArns": "The Amazon Resource Names (ARNs) of the Network Load Balancers for the service.
", "ServiceConfiguration$BaseEndpointDnsNames": "The DNS names for the service.
", "ServiceDetail$AvailabilityZones": "The Availability Zones in which the service is available.
", @@ -14104,6 +14349,7 @@ "CreateSnapshotRequest$VolumeId": "The ID of the EBS volume.
", "DeleteVolumeRequest$VolumeId": "The ID of the volume.
", "DetachVolumeRequest$VolumeId": "The ID of the volume.
", + "EbsInstanceBlockDeviceSpecification$VolumeId": "The ID of the EBS volume.
", "EnableVolumeIORequest$VolumeId": "The ID of the volume.
", "ModifyVolumeAttributeRequest$VolumeId": "The ID of the volume.
", "ModifyVolumeRequest$VolumeId": "The ID of the volume.
" @@ -14218,7 +14464,7 @@ "base": null, "refs": { "CreateVolumeRequest$VolumeType": "The volume type. This can be gp2
for General Purpose SSD, io1
for Provisioned IOPS SSD, st1
for Throughput Optimized HDD, sc1
for Cold HDD, or standard
for Magnetic volumes.
Default: gp2
The volume type. If you set the type to io1
, you must also specify the IOPS that the volume supports.
Default: gp2
The volume type. If you set the type to io1
, you must also specify the Iops parameter. If you set the type to gp2
, st1
, sc1
, or standard
, you must omit the Iops parameter.
Default: gp2
The volume type.
", "LaunchTemplateEbsBlockDeviceRequest$VolumeType": "The volume type.
", "ModifyVolumeRequest$VolumeType": "The target EBS volume type of the volume.
Default: If no type is specified, the existing type is retained.
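The io1/Iops coupling described above looks like this in practice; a sketch assuming the v2 preview client, with placeholder zone and sizes:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ec2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ec2.New(cfg)

	// io1 requires Iops to be set; for gp2, st1, sc1, or standard it must
	// be omitted, per the constraints above. Values are placeholders.
	req := svc.CreateVolumeRequest(&ec2.CreateVolumeInput{
		AvailabilityZone: aws.String("us-west-2a"),
		Size:             aws.Int64(100),
		VolumeType:       ec2.VolumeTypeIo1,
		Iops:             aws.Int64(4000),
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```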
", @@ -14328,7 +14574,7 @@ "VpcEndpointId": { "base": null, "refs": { - "CreateVpcEndpointConnectionNotificationRequest$VpcEndpointId": "The ID of the endpoint.
", + "CreateVpcEndpointConnectionNotificationRequest$VpcEndpointId": "The ID of the endpoint.
", "ModifyVpcEndpointRequest$VpcEndpointId": "The ID of the endpoint.
" } }, @@ -14358,6 +14604,7 @@ "CreateNetworkAclRequest$VpcId": "The ID of the VPC.
", "CreateRouteTableRequest$VpcId": "The ID of the VPC.
", "CreateSecurityGroupRequest$VpcId": "[EC2-VPC] The ID of the VPC. Required for EC2-VPC.
", + "CreateSubnetRequest$VpcId": "The ID of the VPC.
", "CreateVpcEndpointRequest$VpcId": "The ID of the VPC in which the endpoint will be used.
", "CreateVpcPeeringConnectionRequest$VpcId": "The ID of the requester VPC. You must specify this parameter in the request.
", "DeleteVpcRequest$VpcId": "The ID of the VPC.
", diff --git a/models/apis/ec2/2016-11-15/paginators-1.json b/models/apis/ec2/2016-11-15/paginators-1.json index bec922cfe95..b94ef69635b 100755 --- a/models/apis/ec2/2016-11-15/paginators-1.json +++ b/models/apis/ec2/2016-11-15/paginators-1.json @@ -174,6 +174,12 @@ "output_token": "NextToken", "result_key": "InternetGateways" }, + "DescribeIpv6Pools": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken", + "result_key": "Ipv6Pools" + }, "DescribeKeyPairs": { "result_key": "KeyPairs" }, @@ -440,6 +446,12 @@ "DescribeVpnGateways": { "result_key": "VpnGateways" }, + "GetAssociatedIpv6PoolCidrs": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken", + "result_key": "Ipv6CidrAssociations" + }, "GetTransitGatewayAttachmentPropagations": { "input_token": "NextToken", "limit_key": "MaxResults", diff --git a/models/apis/ecr/2015-09-21/waiters-2.json b/models/apis/ecr/2015-09-21/waiters-2.json new file mode 100644 index 00000000000..9ef9608fda4 --- /dev/null +++ b/models/apis/ecr/2015-09-21/waiters-2.json @@ -0,0 +1,45 @@ +{ + "version": 2, + "waiters": { + "ImageScanComplete": { + "description": "Wait until an image scan is complete and findings can be accessed", + "operation": "DescribeImageScanFindings", + "delay": 5, + "maxAttempts": 60, + "acceptors": [ + { + "state": "success", + "matcher": "path", + "argument": "imageScanStatus.status", + "expected": "COMPLETE" + }, + { + "state": "failure", + "matcher": "path", + "argument": "imageScanStatus.status", + "expected": "FAILED" + } + ] + }, + "LifecyclePolicyPreviewComplete": { + "description": "Wait until a lifecycle policy preview request is complete and results can be accessed", + "operation": "GetLifecyclePolicyPreview", + "delay": 5, + "maxAttempts": 20, + "acceptors": [ + { + "state": "success", + "matcher": "path", + "argument": "status", + "expected": "COMPLETE" + }, + { + "state": "failure", + "matcher": "path", + "argument": "status", + "expected": "FAILED" + } + ] + } + } +} \ No newline at end of file diff --git a/models/apis/ecs/2014-11-13/api-2.json b/models/apis/ecs/2014-11-13/api-2.json index 8257d1a257d..21340441e6a 100644 --- a/models/apis/ecs/2014-11-13/api-2.json +++ b/models/apis/ecs/2014-11-13/api-2.json @@ -1283,7 +1283,8 @@ "capacityProviderStrategy":{"shape":"CapacityProviderStrategy"}, "platformVersion":{"shape":"String"}, "scale":{"shape":"Scale"}, - "clientToken":{"shape":"String"} + "clientToken":{"shape":"String"}, + "tags":{"shape":"Tags"} } }, "CreateTaskSetResponse":{ @@ -1526,7 +1527,8 @@ "members":{ "cluster":{"shape":"String"}, "service":{"shape":"String"}, - "taskSets":{"shape":"StringList"} + "taskSets":{"shape":"StringList"}, + "include":{"shape":"TaskSetFieldList"} } }, "DescribeTaskSetsResponse":{ @@ -1615,6 +1617,14 @@ } }, "Double":{"type":"double"}, + "EFSVolumeConfiguration":{ + "type":"structure", + "required":["fileSystemId"], + "members":{ + "fileSystemId":{"shape":"String"}, + "rootDirectory":{"shape":"String"} + } + }, "EnvironmentVariables":{ "type":"list", "member":{"shape":"KeyValuePair"} @@ -2869,9 +2879,18 @@ "serviceRegistries":{"shape":"ServiceRegistries"}, "scale":{"shape":"Scale"}, "stabilityStatus":{"shape":"StabilityStatus"}, - "stabilityStatusAt":{"shape":"Timestamp"} + "stabilityStatusAt":{"shape":"Timestamp"}, + "tags":{"shape":"Tags"} } }, + "TaskSetField":{ + "type":"string", + "enum":["TAGS"] + }, + "TaskSetFieldList":{ + "type":"list", + "member":{"shape":"TaskSetField"} + }, 
"TaskSetNotFoundException":{ "type":"structure", "members":{ @@ -3108,7 +3127,8 @@ "members":{ "name":{"shape":"String"}, "host":{"shape":"HostVolumeProperties"}, - "dockerVolumeConfiguration":{"shape":"DockerVolumeConfiguration"} + "dockerVolumeConfiguration":{"shape":"DockerVolumeConfiguration"}, + "efsVolumeConfiguration":{"shape":"EFSVolumeConfiguration"} } }, "VolumeFrom":{ diff --git a/models/apis/ecs/2014-11-13/docs-2.json b/models/apis/ecs/2014-11-13/docs-2.json index 847ea9c2845..5a21727bb14 100644 --- a/models/apis/ecs/2014-11-13/docs-2.json +++ b/models/apis/ecs/2014-11-13/docs-2.json @@ -8,7 +8,7 @@ "CreateTaskSet": "Create a task set in the specified cluster and service. This is used when a service uses the EXTERNAL
deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
Disables an account setting for a specified IAM user, IAM role, or the root user for an account.
", "DeleteAttributes": "Deletes one or more custom attributes from an Amazon ECS resource.
", - "DeleteCluster": "Deletes the specified cluster. You must deregister all container instances from this cluster before you may delete it. You can list the container instances in a cluster with ListContainerInstances and deregister them with DeregisterContainerInstance.
", + "DeleteCluster": "Deletes the specified cluster. The cluster will transition to the INACTIVE
state. Clusters with an INACTIVE
status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future, so you should not rely on INACTIVE
clusters persisting.
You must deregister all container instances from this cluster before you may delete it. You can list the container instances in a cluster with ListContainerInstances and deregister them with DeregisterContainerInstance.
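The deregister-then-delete sequence reads roughly as follows with the v2 preview client; a sketch under the assumption of the Request/Send call pattern, with a placeholder cluster name:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := ecs.New(cfg)
	ctx := context.TODO()
	cluster := aws.String("demo-cluster") // placeholder name

	// List and deregister every container instance first, as required.
	list, err := svc.ListContainerInstancesRequest(&ecs.ListContainerInstancesInput{
		Cluster: cluster,
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, arn := range list.ContainerInstanceArns {
		if _, err := svc.DeregisterContainerInstanceRequest(&ecs.DeregisterContainerInstanceInput{
			Cluster:           cluster,
			ContainerInstance: aws.String(arn),
		}).Send(ctx); err != nil {
			log.Fatal(err)
		}
	}

	// The cluster then transitions to INACTIVE rather than disappearing
	// immediately, so continued visibility is not a failure.
	if _, err := svc.DeleteClusterRequest(&ecs.DeleteClusterInput{
		Cluster: cluster,
	}).Send(ctx); err != nil {
		log.Fatal(err)
	}
}
```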
", "DeleteService": "Deletes a specified service within a cluster. You can delete a service if you have no running tasks in it and the desired task count is zero. If the service is actively maintaining tasks, you cannot delete it, and you must update the service to a desired task count of zero. For more information, see UpdateService.
When you delete a service, if there are still running tasks that require cleanup, the service status moves from ACTIVE
to DRAINING
, and the service is no longer visible in the console or in the ListServices API operation. After all tasks have transitioned to either STOPPING
or STOPPED
status, the service status moves from DRAINING
to INACTIVE
. Services in the DRAINING
or INACTIVE
status can still be viewed with the DescribeServices API operation. However, in the future, INACTIVE
services may be cleaned up and purged from Amazon ECS record keeping, and DescribeServices calls on those services return a ServiceNotFoundException
error.
If you attempt to create a new service with the same name as an existing service in either ACTIVE
or DRAINING
status, you receive an error.
Deletes a specified task set within a service. This is used when a service uses the EXTERNAL
deployment controller type. For more information, see Amazon ECS Deployment Types in the Amazon Elastic Container Service Developer Guide.
Deregisters an Amazon ECS container instance from the specified cluster. This instance is no longer available to run tasks.
If you intend to use the container instance for some other purpose after deregistration, you should stop all of the tasks running on the container instance before deregistration. That prevents any orphaned tasks from consuming resources.
Deregistering a container instance removes the instance from a cluster, but it does not terminate the EC2 instance. If you are finished using the instance, be sure to terminate it in the Amazon EC2 console to stop billing.
If you terminate a running container instance, Amazon ECS automatically deregisters the instance from your cluster (stopped container instances or instances with disconnected agents are not automatically deregistered when terminated).
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to enable container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
If you are using tasks that use the Fargate launch type, container dependency parameters are not supported.
The dependencies defined for container startup and shutdown. A container can contain multiple dependencies. When a dependency is defined for container startup, for container shutdown it is reversed.
Your Amazon ECS container instances require at least version 1.26.0 of the container agent to enable container dependencies. However, we recommend using the latest container agent version. For information about checking your agent version and updating to the latest version, see Updating the Amazon ECS Container Agent in the Amazon Elastic Container Service Developer Guide. If you are using an Amazon ECS-optimized Linux AMI, your instance needs at least version 1.26.0-1 of the ecs-init
package. If your container instances are launched from version 20190301
or later, then they contain the required versions of the container agent and ecs-init
. For more information, see Amazon ECS-optimized Linux AMI in the Amazon Elastic Container Service Developer Guide.
For tasks using the Fargate launch type, this parameter requires that the task or service uses platform version 1.3.0 or later.
This parameter is specified when you are using Docker volumes. Docker volumes are only supported when you are using the EC2 launch type. Windows containers only support the use of the local
driver. To use bind mounts, specify a host
instead.
This parameter is specified when you are using Docker volumes. Docker volumes are only supported when you are using the EC2 launch type. Windows containers only support the use of the local
driver. To use bind mounts, specify a host
instead.
This parameter is specified when you are using Docker volumes. Docker volumes are only supported when you are using the EC2 launch type. Windows containers only support the use of the local
driver. To use bind mounts, specify the host
parameter instead.
The value, specified as a percent total of a service's desiredCount
, to scale the task set. Accepted values are numbers between 0 and 100.
This parameter is specified when you are using an Amazon Elastic File System (Amazon EFS) file storage. Amazon EFS file systems are only supported when you are using the EC2 launch type.
EFSVolumeConfiguration
remains in preview and is a Beta Service as defined by and subject to the Beta Service Participation Service Terms located at https://aws.amazon.com/service-terms (\"Beta Terms\"). These Beta Terms apply to your participation in this preview of EFSVolumeConfiguration
.
This parameter is specified when you are using an Amazon Elastic File System (Amazon EFS) file storage. Amazon EFS file systems are only supported when you are using the EC2 launch type.
EFSVolumeConfiguration
remains in preview and is a Beta Service as defined by and subject to the Beta Service Participation Service Terms located at https://aws.amazon.com/service-terms (\"Beta Terms\"). These Beta Terms apply to your participation in this preview of EFSVolumeConfiguration
.
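A hedged sketch of the new EFS-backed task definition volume; the generated Go field names are assumed from the efsVolumeConfiguration shape added in this diff, and the file system ID and directory are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/ecs"
)

func main() {
	// A task definition volume backed by the preview EFS configuration.
	// Only fileSystemId is required; rootDirectory is optional.
	vol := ecs.Volume{
		Name: aws.String("efs-data"),
		EfsVolumeConfiguration: &ecs.EFSVolumeConfiguration{
			FileSystemId:  aws.String("fs-12345678"),
			RootDirectory: aws.String("/exported/path"),
		},
	}
	fmt.Printf("%+v\n", vol)
}
```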
The number of tasks in the cluster that are in the RUNNING
state.
The number of tasks in the cluster that are in the PENDING
state.
The number of services that are running on the cluster in an ACTIVE
state. You can view these services with ListServices.
The number of cpu
units reserved for the container. This parameter maps to CpuShares
in the Create a container section of the Docker Remote API and the --cpu-shares
option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu
value.
You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024.
For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that is the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task would be guaranteed a minimum of 512 CPU units when needed, and each container could float to higher CPU usage if the other container was not using it, but if both tasks were 100% active all of the time, they would be limited to 512 CPU units.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that is the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task would be guaranteed a minimum of 512 CPU units when needed, and each container could float to higher CPU usage if the other container was not using it, but if both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see CPU share constraint in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter is not required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:
Agent versions less than or equal to 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
Agent versions greater than or equal to 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that is described in the task definition.
", + "ContainerDefinition$cpu": "The number of cpu
units reserved for the container. This parameter maps to CpuShares
in the Create a container section of the Docker Remote API and the --cpu-shares
option to docker run.
This field is optional for tasks using the Fargate launch type, and the only requirement is that the total amount of CPU reserved for all containers within a task be lower than the task-level cpu
value.
You can determine the number of CPU units that are available per EC2 instance type by multiplying the vCPUs listed for that instance type on the Amazon EC2 Instances detail page by 1,024.
Linux containers share unallocated CPU units with other containers on the container instance with the same ratio as their allocated amount. For example, if you run a single-container task on a single-core instance type with 512 CPU units specified for that container, and that is the only task running on the container instance, that container could use the full 1,024 CPU unit share at any given time. However, if you launched another copy of the same task on that container instance, each task would be guaranteed a minimum of 512 CPU units when needed, and each container could float to higher CPU usage if the other container was not using it, but if both tasks were 100% active all of the time, they would be limited to 512 CPU units.
On Linux container instances, the Docker daemon on the container instance uses the CPU value to calculate the relative CPU share ratios for running containers. For more information, see CPU share constraint in the Docker documentation. The minimum valid CPU share value that the Linux kernel allows is 2. However, the CPU parameter is not required, and you can use CPU values below 2 in your container definitions. For CPU values below 2 (including null), the behavior varies based on your Amazon ECS container agent version:
Agent versions less than or equal to 1.1.0: Null and zero CPU values are passed to Docker as 0, which Docker then converts to 1,024 CPU shares. CPU values of 1 are passed to Docker as 1, which the Linux kernel converts to two CPU shares.
Agent versions greater than or equal to 1.2.0: Null, zero, and CPU values of 1 are passed to Docker as 2.
On Windows container instances, the CPU limit is enforced as an absolute limit, or a quota. Windows containers only have access to the specified amount of CPU that is described in the task definition.
", "ContainerInstance$runningTasksCount": "The number of tasks on the container instance that are in the RUNNING
status.
The number of tasks on the container instance that are in the PENDING
status.
The most recent desired count of tasks that was specified for the service to deploy or maintain.
", @@ -1044,7 +1050,7 @@ } }, "LogConfiguration": { - "base": "Log configuration options to send to a custom log driver for the container.
", + "base": "The log configuration specification for the container.
This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run
. By default, containers use the same logging driver that the Docker daemon uses; however, the container may use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information on the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
The following should be noted when specifying a log configuration for your containers:
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the valid values below). Additional log drivers may be available in future releases of the Amazon ECS container agent.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance.
For tasks using the EC2 launch type, the Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
For tasks using the Fargate launch type, because you do not have access to the underlying infrastructure your tasks are hosted on, any additional software needed will have to be installed outside of the task. For example, the Fluentd output aggregators or a remote host running Logstash to send Gelf logs to.
The log configuration specification for the container.
This parameter maps to LogConfig
in the Create a container section of the Docker Remote API and the --log-driver
option to docker run. By default, containers use the same logging driver that the Docker daemon uses. However, the container may use a different logging driver than the Docker daemon by specifying a log driver with this parameter in the container definition. To use a different logging driver for a container, the log system must be configured properly on the container instance (or on a different log server for remote logging options). For more information on the options for different supported log drivers, see Configure logging drivers in the Docker documentation.
Amazon ECS currently supports a subset of the logging drivers available to the Docker daemon (shown in the LogConfiguration data type). Additional log drivers may be available in future releases of the Amazon ECS container agent.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
The Amazon ECS container agent running on a container instance must register the logging drivers available on that instance with the ECS_AVAILABLE_LOGGING_DRIVERS
environment variable before containers placed on that instance can use these log configuration options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
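As a concrete illustration of the mapping described above, a container definition fragment might set the awslogs driver like this; it is a minimal sketch, and the log group, region, and stream prefix values are hypothetical placeholders, not values from this model:

```json
{
  "logConfiguration": {
    "logDriver": "awslogs",
    "options": {
      "awslogs-group": "/ecs/sample-app",
      "awslogs-region": "us-east-1",
      "awslogs-stream-prefix": "web"
    }
  }
}
```

The options map is passed through to the log driver, so its keys depend entirely on which driver is chosen.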
The log driver to use for the container. The valid values listed for this parameter are log drivers that the Amazon ECS container agent can communicate with by default.
For tasks using the Fargate launch type, the supported log drivers are awslogs and splunk.
For tasks using the EC2 launch type, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, logentries, syslog, and splunk.
For more information about using the awslogs log driver, see Using the awslogs Log Driver in the Amazon Elastic Container Service Developer Guide.
If you have a custom driver that is not listed above that you would like to work with the Amazon ECS container agent, you can fork the Amazon ECS container agent project that is available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, Amazon Web Services does not currently support running modified copies of this software.
This parameter requires version 1.18 of the Docker Remote API or greater on your container instance. To check the Docker Remote API version on your container instance, log in to your container instance and run the following command: sudo docker version --format '{{.Server.APIVersion}}'
The log driver to use for the container. The valid values listed earlier are log drivers that the Amazon ECS container agent can communicate with by default.
For tasks using the Fargate launch type, the supported log drivers are awslogs, splunk, and awsfirelens.
For tasks using the EC2 launch type, the supported log drivers are awslogs, fluentd, gelf, json-file, journald, logentries, syslog, splunk, and awsfirelens.
For more information about using the awslogs log driver, see Using the awslogs Log Driver in the Amazon Elastic Container Service Developer Guide.
For more information about using the awsfirelens log driver, see Custom Log Routing in the Amazon Elastic Container Service Developer Guide.
If you have a custom driver that is not listed, you can fork the Amazon ECS container agent project that is available on GitHub and customize it to work with that driver. We encourage you to submit pull requests for changes that you would like to have included. However, we do not currently provide support for running modified copies of this software.
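With the awsfirelens driver added in this revision, the options map is handed to the FireLens log router rather than to Docker. A hypothetical fragment routing container logs to CloudWatch via Fluent Bit might look like the following; the plugin option names follow the Fluent Bit cloudwatch output plugin, and all values are placeholders:

```json
{
  "logConfiguration": {
    "logDriver": "awsfirelens",
    "options": {
      "Name": "cloudwatch",
      "region": "us-east-1",
      "log_group_name": "/ecs/firelens-sample",
      "log_stream_prefix": "app-"
    }
  }
}
```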
The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection.
When managed termination protection is enabled, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions enabled as well. For more information, see Instance Protection in the AWS Auto Scaling User Guide.
When managed termination protection is disabled, your Amazon EC2 instances are not protected from termination when the Auto Scaling group scales in.
" + "AutoScalingGroupProvider$managedTerminationProtection": "The managed termination protection setting to use for the Auto Scaling group capacity provider. This determines whether the Auto Scaling group has managed termination protection.
When using managed termination protection, managed scaling must also be used; otherwise, managed termination protection does not work.
When managed termination protection is enabled, Amazon ECS prevents the Amazon EC2 instances in an Auto Scaling group that contain tasks from being terminated during a scale-in action. The Auto Scaling group and each instance in the Auto Scaling group must have instance protection from scale-in actions enabled as well. For more information, see Instance Protection in the AWS Auto Scaling User Guide.
When managed termination protection is disabled, your Amazon EC2 instances are not protected from termination when the Auto Scaling group scales in.
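Putting the pieces together, a CreateCapacityProvider request that enables both managed scaling and managed termination protection could be sketched roughly as follows; the provider name and Auto Scaling group ARN are placeholders:

```json
{
  "name": "sample-capacity-provider",
  "autoScalingGroupProvider": {
    "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:example-uuid:autoScalingGroupName/sample-asg",
    "managedScaling": {
      "status": "ENABLED",
      "targetCapacity": 100
    },
    "managedTerminationProtection": "ENABLED"
  }
}
```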
" } }, "MissingVersionException": { @@ -1599,7 +1605,7 @@ "ClientException$message": null, "Cluster$clusterArn": "The Amazon Resource Name (ARN) that identifies the cluster. The ARN contains the arn:aws:ecs
namespace, followed by the Region of the cluster, the AWS account ID of the cluster owner, the cluster
namespace, and then the cluster name. For example, arn:aws:ecs:region:012345678910:cluster/test
.
A user-generated string that you use to identify your cluster.
", - "Cluster$status": "The status of the cluster. The valid values are ACTIVE
or INACTIVE
. ACTIVE
indicates that you can register container instances with the cluster and the associated instances can accept tasks.
The status of the cluster. The following are the possible states that will be returned.
The cluster is ready to accept tasks and if applicable you can register container instances with the cluster.
The cluster has capacity providers associated with it and the resources needed for the capacity provider are being created.
The cluster has capacity providers associated with it and the resources needed for the capacity provider are being deleted.
The cluster has capacity providers associated with it and the resources needed for the capacity provider have failed to create.
The cluster has been deleted. Clusters with an INACTIVE
status may remain discoverable in your account for a period of time. However, this behavior is subject to change in the future, so you should not rely on INACTIVE
clusters persisting.
The status of the capacity providers associated with the cluster. The following are the states that will be returned:
UPDATE_IN_PROGRESS: The available capacity providers for the cluster are updating. This occurs when the Auto Scaling plan is provisioning or deprovisioning.
UPDATE_COMPLETE: The capacity providers have successfully updated.
UPDATE_FAILED: The capacity provider updates failed.
The value to set for the cluster setting. The supported values are enabled and disabled. If enabled is specified, CloudWatch Container Insights will be enabled for the cluster; otherwise it will be disabled unless the containerInsights account setting is enabled. If a cluster value is specified, it will override the containerInsights value set with PutAccountSetting or PutAccountSettingDefault.
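For example, an UpdateClusterSettings request that turns Container Insights on for a single cluster passes the setting as a name/value pair; the cluster name here is a placeholder:

```json
{
  "cluster": "sample-cluster",
  "settings": [
    { "name": "containerInsights", "value": "enabled" }
  ]
}
```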
The Amazon Resource Name (ARN) of the container.
", @@ -1676,6 +1682,8 @@ "DockerLabelsMap$key": null, "DockerLabelsMap$value": null, "DockerVolumeConfiguration$driver": "The Docker volume driver to use. The driver value must match the driver name provided by Docker because it is used for task placement. If the driver was installed using the Docker plugin CLI, use docker plugin ls
to retrieve the driver name from your container instance. If the driver was installed using another method, use Docker plugin discovery to retrieve the driver name. For more information, see Docker plugin discovery. This parameter maps to Driver
in the Create a volume section of the Docker Remote API and the xxdriver
option to docker volume create.
The Amazon EFS file system ID to use.
", + "EFSVolumeConfiguration$rootDirectory": "The directory within the Amazon EFS file system to mount as the root directory inside the host.
", "Failure$arn": "The Amazon Resource Name (ARN) of the failed resource.
", "Failure$reason": "The reason for the failure.
", "Failure$detail": "The details of the failure.
", @@ -1822,7 +1830,7 @@ "Task$taskArn": "The Amazon Resource Name (ARN) of the task.
", "Task$taskDefinitionArn": "The ARN of the task definition that creates the task.
", "TaskDefinition$taskDefinitionArn": "The full Amazon Resource Name (ARN) of the task definition.
", - "TaskDefinition$family": "The name of a family that this task definition is registered to. A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
", + "TaskDefinition$family": "The name of a family that this task definition is registered to. Up to 255 letters (uppercase and lowercase), numbers, hyphens, and underscores are allowed.
A family groups multiple versions of a task definition. Amazon ECS gives the first task definition that you registered to a family a revision number of 1. Amazon ECS gives sequential revision numbers to each task definition that you add.
", "TaskDefinition$taskRoleArn": "The short name or full Amazon Resource Name (ARN) of the AWS Identity and Access Management (IAM) role that grants containers in the task permission to call AWS APIs on your behalf. For more information, see Amazon ECS Task Role in the Amazon Elastic Container Service Developer Guide.
IAM roles for tasks on Windows require that the -EnableTaskIAMRole option is set when you launch the Amazon ECS-optimized Windows AMI. Your containers must also run some configuration code in order to take advantage of the feature. For more information, see Windows IAM Roles for Tasks in the Amazon Elastic Container Service Developer Guide.
The Amazon Resource Name (ARN) of the task execution role that containers in this task can assume. All containers in this task are granted the permissions that are specified in this role.
", "TaskDefinition$cpu": "The number of cpu
units used by the task. If you are using the EC2 launch type, this field is optional and any value can be used. If you are using the Fargate launch type, this field is required and you must use one of the following values, which determines your range of valid values for the memory
parameter:
256 (.25 vCPU) - Available memory
values: 512 (0.5 GB), 1024 (1 GB), 2048 (2 GB)
512 (.5 vCPU) - Available memory
values: 1024 (1 GB), 2048 (2 GB), 3072 (3 GB), 4096 (4 GB)
1024 (1 vCPU) - Available memory
values: 2048 (2 GB), 3072 (3 GB), 4096 (4 GB), 5120 (5 GB), 6144 (6 GB), 7168 (7 GB), 8192 (8 GB)
2048 (2 vCPU) - Available memory
values: Between 4096 (4 GB) and 16384 (16 GB) in increments of 1024 (1 GB)
4096 (4 vCPU) - Available memory
values: Between 8192 (8 GB) and 30720 (30 GB) in increments of 1024 (1 GB)
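To make the pairing concrete, here is a minimal Fargate task definition sketch using one of the valid cpu/memory combinations from the list above (512 CPU units with 1024 MiB); the family and image names are placeholders, and a real task may additionally need an execution role:

```json
{
  "family": "sample-web",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "512",
  "memory": "1024",
  "containerDefinitions": [
    { "name": "web", "image": "nginx:latest", "essential": true }
  ]
}
```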
The command that is passed to the container. This parameter maps to Cmd in the Create a container section of the Docker Remote API and the COMMAND parameter to docker run. For more information, see https://docs.docker.com/engine/reference/builder/#cmd. If there are multiple arguments, each argument should be a separated string in the array.
A list of DNS servers that are presented to the container. This parameter maps to Dns in the Create a container section of the Docker Remote API and the --dns option to docker run.
This parameter is not supported for Windows containers.
A list of DNS search domains that are presented to the container. This parameter maps to DnsSearch in the Create a container section of the Docker Remote API and the --dns-search option to docker run.
This parameter is not supported for Windows containers.
A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. This field is not valid for containers in tasks using the Fargate launch type.
This parameter maps to SecurityOpt in the Create a container section of the Docker Remote API and the --security-opt option to docker run.
The Amazon ECS container agent running on a container instance must register with the ECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
This parameter is not supported for Windows containers.
A list of strings to provide custom labels for SELinux and AppArmor multi-level security systems. This field is not valid for containers in tasks using the Fargate launch type.
With Windows containers, this parameter can be used to reference a credential spec file when configuring a container for Active Directory authentication. For more information, see Using gMSAs for Windows Containers in the Amazon Elastic Container Service Developer Guide.
This parameter maps to SecurityOpt in the Create a container section of the Docker Remote API and the --security-opt option to docker run.
The Amazon ECS container agent running on a container instance must register with the ECS_SELINUX_CAPABLE=true or ECS_APPARMOR_CAPABLE=true environment variables before containers placed on that instance can use these security options. For more information, see Amazon ECS Container Agent Configuration in the Amazon Elastic Container Service Developer Guide.
The command to send to the container that overrides the default command from the Docker image or the task definition. You must also specify a container name.
", "CreateClusterRequest$capacityProviders": "The short name or full Amazon Resource Name (ARN) of one or more capacity providers to associate with the cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created and not already associated with another cluster. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The PutClusterCapacityProviders API operation is used to update the list of available capacity providers for a cluster after the cluster is created.
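A CreateCluster request that associates the two Fargate capacity providers could be sketched as follows; the cluster name and strategy weights are hypothetical:

```json
{
  "clusterName": "sample-cluster",
  "capacityProviders": ["FARGATE", "FARGATE_SPOT"],
  "defaultCapacityProviderStrategy": [
    { "capacityProvider": "FARGATE", "weight": 1 },
    { "capacityProvider": "FARGATE_SPOT", "weight": 4 }
  ]
}
```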
", "DescribeCapacityProvidersRequest$capacityProviders": "The short name or full Amazon Resource Name (ARN) of one or more capacity providers. Up to 100
capacity providers can be described in an action.
The list of task definition family names that match the ListTaskDefinitionFamilies
request.
The list of task definition Amazon Resource Name (ARN) entries for the ListTaskDefinitions
request.
The list of task ARN entries for the ListTasks
request.
The short name or full Amazon Resource Name (ARN) of one or more capacity providers to associate with the cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
The name of one or more capacity providers to associate with the cluster.
If specifying a capacity provider that uses an Auto Scaling group, the capacity provider must already be created. New capacity providers can be created with the CreateCapacityProvider API operation.
To use a AWS Fargate capacity provider, specify either the FARGATE
or FARGATE_SPOT
capacity providers. The AWS Fargate capacity providers are available to all accounts and only need to be associated with a cluster to be used.
When the stringSetValue
type is set, the value of the resource must be a string type.
The container instance IDs or full ARN entries for the container instances on which you would like to place your task. You can specify up to 10 container instances.
", "Tmpfs$mountOptions": "The list of tmpfs volume mount options.
Valid values: \"defaults\" | \"ro\" | \"rw\" | \"suid\" | \"nosuid\" | \"dev\" | \"nodev\" | \"exec\" | \"noexec\" | \"sync\" | \"async\" | \"dirsync\" | \"remount\" | \"mand\" | \"nomand\" | \"atime\" | \"noatime\" | \"diratime\" | \"nodiratime\" | \"bind\" | \"rbind\" | \"unbindable\" | \"runbindable\" | \"private\" | \"rprivate\" | \"shared\" | \"rshared\" | \"slave\" | \"rslave\" | \"relatime\" | \"norelatime\" | \"strictatime\" | \"nostrictatime\" | \"mode\" | \"uid\" | \"gid\" | \"nr_inodes\" | \"nr_blocks\" | \"mpol\"
The metadata that you apply to the capacity provider to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:, AWS:, or any upper or lowercase combination of such as a prefix for either keys or values, as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
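In request syntax, each tag is a key/value object; for instance (values are placeholders):

```json
{
  "tags": [
    { "key": "team", "value": "platform" },
    { "key": "environment", "value": "production" }
  ]
}
```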
The metadata that you apply to the cluster to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The basic tag restrictions listed above apply.
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well. The basic tag restrictions listed above apply.
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. When a service is deleted, the tags are deleted as well. The basic tag restrictions listed above apply.
The metadata that is applied to the task definition to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define. The basic tag restrictions listed above apply.
The tags for the resource.
", "RegisterContainerInstanceRequest$tags": "The metadata that you apply to the container instance to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to the service to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The tags to add to the resource. A tag is an array of key-value pairs.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to the task to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
The metadata that you apply to the task set to help you categorize and organize them. Each tag consists of a key and an optional value, both of which you define.
The following basic restrictions apply to tags:
Maximum number of tags per resource - 50
For each resource, each tag key must be unique, and each tag key can have only one value.
Maximum key length - 128 Unicode characters in UTF-8
Maximum value length - 256 Unicode characters in UTF-8
If your tagging schema is used across multiple services and resources, remember that other services may have restrictions on allowed characters. Generally allowed characters are: letters, numbers, and spaces representable in UTF-8, and the following characters: + - = . _ : / @.
Tag keys and values are case-sensitive.
Do not use aws:
, AWS:
, or any upper or lowercase combination of such as a prefix for either keys or values as it is reserved for AWS use. You cannot edit or delete tag keys or values with this prefix. Tags with this prefix do not count against your tags per resource limit.
Specifies whether to see the resource tags for the task set. If TAGS is specified, the tags are included in the response. If this field is omitted, tags are not included in the response.
The specified task set could not be found. You can view your available task sets with DescribeTaskSets. Task sets are specific to each cluster, service, and Region.
", "refs": { diff --git a/models/apis/ecs/2014-11-13/paginators-1.json b/models/apis/ecs/2014-11-13/paginators-1.json index 10b471968e4..16fa7ccaba4 100644 --- a/models/apis/ecs/2014-11-13/paginators-1.json +++ b/models/apis/ecs/2014-11-13/paginators-1.json @@ -1,5 +1,11 @@ { "pagination": { + "ListAccountSettings": { + "input_token": "nextToken", + "limit_key": "maxResults", + "output_token": "nextToken", + "result_key": "settings" + }, "ListAttributes": { "input_token": "nextToken", "limit_key": "maxResults", diff --git a/models/apis/eks/2017-11-01/api-2.json b/models/apis/eks/2017-11-01/api-2.json index 6b696687901..a3c6012279c 100644 --- a/models/apis/eks/2017-11-01/api-2.json +++ b/models/apis/eks/2017-11-01/api-2.json @@ -1017,10 +1017,12 @@ "type":"string", "enum":[ "AutoScalingGroupNotFound", + "AutoScalingGroupInvalidConfiguration", "Ec2SecurityGroupNotFound", "Ec2SecurityGroupDeletionFailure", "Ec2LaunchTemplateNotFound", "Ec2LaunchTemplateVersionMismatch", + "Ec2SubnetNotFound", "IamInstanceProfileNotFound", "IamNodeRoleNotFound", "AsgInstanceLaunchFailures", @@ -1361,7 +1363,8 @@ "LabelsToRemove", "MaxSize", "MinSize", - "ReleaseVersion" + "ReleaseVersion", + "PublicAccessCidrs" ] }, "UpdateParams":{ @@ -1392,7 +1395,8 @@ "subnetIds":{"shape":"StringList"}, "securityGroupIds":{"shape":"StringList"}, "endpointPublicAccess":{"shape":"BoxedBoolean"}, - "endpointPrivateAccess":{"shape":"BoxedBoolean"} + "endpointPrivateAccess":{"shape":"BoxedBoolean"}, + "publicAccessCidrs":{"shape":"StringList"} } }, "VpcConfigResponse":{ @@ -1403,7 +1407,8 @@ "clusterSecurityGroupId":{"shape":"String"}, "vpcId":{"shape":"String"}, "endpointPublicAccess":{"shape":"Boolean"}, - "endpointPrivateAccess":{"shape":"Boolean"} + "endpointPrivateAccess":{"shape":"Boolean"}, + "publicAccessCidrs":{"shape":"StringList"} } }, "labelKey":{ diff --git a/models/apis/eks/2017-11-01/docs-2.json b/models/apis/eks/2017-11-01/docs-2.json index 272ce90a583..644a29ca3c0 100644 --- a/models/apis/eks/2017-11-01/docs-2.json +++ b/models/apis/eks/2017-11-01/docs-2.json @@ -3,10 +3,10 @@ "service": "Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification required.
", "operations": { "CreateCluster": "Creates an Amazon EKS control plane.
The Amazon EKS control plane consists of control plane instances that run the Kubernetes software, such as etcd and the API server. The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS API server endpoint. Each Amazon EKS cluster control plane is single-tenant and unique and runs on its own set of Amazon EC2 instances.
The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the worker nodes (for example, to support kubectl exec, logs, and proxy data flows).
Amazon EKS worker nodes run in your AWS account and connect to your cluster's control plane via the Kubernetes API server endpoint and a certificate file that is created for your cluster.
You can use the endpointPublicAccess and endpointPrivateAccess parameters to enable or disable public and private access to your cluster's Kubernetes API server endpoint. By default, public access is enabled, and private access is disabled. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide.
You can use the logging parameter to enable or disable exporting the Kubernetes control plane logs for your cluster to CloudWatch Logs. By default, cluster control plane logs aren't exported to CloudWatch Logs. For more information, see Amazon EKS Cluster Control Plane Logs in the Amazon EKS User Guide.
CloudWatch Logs ingestion, archive storage, and data scanning rates apply to exported control plane logs. For more information, see Amazon CloudWatch Pricing.
Cluster creation typically takes between 10 and 15 minutes. After you create an Amazon EKS cluster, you must configure your Kubernetes tooling to communicate with the API server and launch worker nodes into your cluster. For more information, see Managing Cluster Authentication and Launching Amazon EKS Worker Nodes in the Amazon EKS User Guide.
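Tying the endpoint-access and logging parameters together, a CreateCluster request body could be sketched as follows; the cluster name, role ARN, subnet IDs, and CIDR block are placeholders, and publicAccessCidrs is the field added in this release:

```json
{
  "name": "sample-cluster",
  "roleArn": "arn:aws:iam::123456789012:role/eksServiceRole",
  "resourcesVpcConfig": {
    "subnetIds": ["subnet-0a1b2c3d", "subnet-4e5f6a7b"],
    "endpointPublicAccess": true,
    "endpointPrivateAccess": true,
    "publicAccessCidrs": ["203.0.113.0/24"]
  },
  "logging": {
    "clusterLogging": [
      { "types": ["api", "audit"], "enabled": true }
    ]
  }
}
```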
", - "CreateFargateProfile": "Creates an AWS Fargate profile for your Amazon EKS cluster. You must have at least one Fargate profile in a cluster to be able to schedule pods on Fargate infrastructure.
The Fargate profile allows an administrator to declare which pods run on Fargate infrastructure and specify which pods run on which Fargate profile. This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and labels. A namespace is required for every selector. The label field consists of multiple optional key-value pairs. Pods that match the selectors are scheduled on Fargate infrastructure. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is scheduled on Fargate infrastructure.
When you create a Fargate profile, you must specify a pod execution role to use with the pods that are scheduled with the profile. This role is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization so that the kubelet that is running on the Fargate infrastructure can register with your Amazon EKS cluster. This role is what allows Fargate infrastructure to appear in your cluster as nodes. The pod execution role also provides IAM permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For more information, see Pod Execution Role in the Amazon EKS User Guide.
Fargate profiles are immutable. However, you can create a new updated profile to replace an existing profile and then delete the original after the updated profile has finished creating.
If any Fargate profiles in a cluster are in the DELETING status, you must wait for that Fargate profile to finish deleting before you can create any other profiles in that cluster.
For more information, see AWS Fargate Profile in the Amazon EKS User Guide.
", + "CreateFargateProfile": "Creates an AWS Fargate profile for your Amazon EKS cluster. You must have at least one Fargate profile in a cluster to be able to run pods on Fargate.
The Fargate profile allows an administrator to declare which pods run on Fargate and specify which pods run on which Fargate profile. This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and labels. A namespace is required for every selector. The label field consists of multiple optional key-value pairs. Pods that match the selectors are scheduled on Fargate. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is run on Fargate.
When you create a Fargate profile, you must specify a pod execution role to use with the pods that are scheduled with the profile. This role is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization so that the kubelet that is running on the Fargate infrastructure can register with your Amazon EKS cluster so that it can appear in your cluster as a node. The pod execution role also provides IAM permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For more information, see Pod Execution Role in the Amazon EKS User Guide.
Fargate profiles are immutable. However, you can create a new updated profile to replace an existing profile and then delete the original after the updated profile has finished creating.
If any Fargate profiles in a cluster are in the DELETING status, you must wait for that Fargate profile to finish deleting before you can create any other profiles in that cluster.
For more information, see AWS Fargate Profile in the Amazon EKS User Guide.
", "CreateNodegroup": "Creates a managed worker node group for an Amazon EKS cluster. You can only create a node group for your cluster that is equal to the current Kubernetes version for the cluster. All node groups are created with the latest AMI release version for the respective minor Kubernetes version of the cluster.
An Amazon EKS managed node group is an Amazon EC2 Auto Scaling group and associated Amazon EC2 instances that are managed by AWS for an Amazon EKS cluster. Each node group uses a version of the Amazon EKS-optimized Amazon Linux 2 AMI. For more information, see Managed Node Groups in the Amazon EKS User Guide.
", "DeleteCluster": "Deletes the Amazon EKS cluster control plane.
If you have active services in your cluster that are associated with a load balancer, you must delete those services before deleting the cluster so that the load balancers are deleted properly. Otherwise, you can have orphaned resources in your VPC that prevent you from being able to delete the VPC. For more information, see Deleting a Cluster in the Amazon EKS User Guide.
If you have managed node groups or Fargate profiles attached to the cluster, you must delete them first. For more information, see DeleteNodegroup and DeleteFargateProfile.
", - "DeleteFargateProfile": "Deletes an AWS Fargate profile.
When you delete a Fargate profile, any pods that were scheduled onto Fargate infrastructure with the profile are deleted. If those pods match another Fargate profile, then they are scheduled on Fargate infrastructure with that profile. If they no longer match any Fargate profiles, then they are not scheduled on Fargate infrastructure.
Only one Fargate profile in a cluster can be in the DELETING status at a time. You must wait for a Fargate profile to finish deleting before you can delete any other profiles in that cluster.
Deletes an AWS Fargate profile.
When you delete a Fargate profile, any pods running on Fargate that were created with the profile are deleted. If those pods match another Fargate profile, then they are scheduled on Fargate with that profile. If they no longer match any Fargate profiles, then they are not scheduled on Fargate and they may remain in a pending state.
Only one Fargate profile in a cluster can be in the DELETING status at a time. You must wait for a Fargate profile to finish deleting before you can delete any other profiles in that cluster.
Deletes an Amazon EKS node group for a cluster.
", "DescribeCluster": "Returns descriptive information about an Amazon EKS cluster.
The API server endpoint and certificate authority data returned by this operation are required for kubelet and kubectl to communicate with your Kubernetes API server. For more information, see Create a kubeconfig for Amazon EKS.
The API server endpoint and certificate authority data aren't available until the cluster reaches the ACTIVE state.
Returns descriptive information about an AWS Fargate profile.
", @@ -53,16 +53,16 @@ "base": null, "refs": { "UpdateNodegroupVersionRequest$force": "Force the update if the existing node group's pods are unable to be drained due to a pod disruption budget issue. If an update fails because pods could not be drained, you can force the update after it fails to terminate the old node whether or not any pods are running on the node.
", - "VpcConfigResponse$endpointPublicAccess": "This parameter indicates whether the Amazon EKS public API server endpoint is enabled. If the Amazon EKS public API server endpoint is disabled, your cluster's Kubernetes API server can receive only requests that originate from within the cluster VPC.
", - "VpcConfigResponse$endpointPrivateAccess": "This parameter indicates whether the Amazon EKS private API server endpoint is enabled. If the Amazon EKS private API server endpoint is enabled, Kubernetes API requests that originate from within your cluster's VPC use the private VPC endpoint instead of traversing the internet.
" + "VpcConfigResponse$endpointPublicAccess": "This parameter indicates whether the Amazon EKS public API server endpoint is enabled. If the Amazon EKS public API server endpoint is disabled, your cluster's Kubernetes API server can only receive requests that originate from within the cluster VPC.
", + "VpcConfigResponse$endpointPrivateAccess": "This parameter indicates whether the Amazon EKS private API server endpoint is enabled. If the Amazon EKS private API server endpoint is enabled, Kubernetes API requests that originate from within your cluster's VPC use the private VPC endpoint instead of traversing the internet. If this value is disabled and you have worker nodes or AWS Fargate pods in the cluster, then ensure that publicAccessCidrs
includes the necessary CIDR blocks for communication with the worker nodes or Fargate pods. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
If a log type is enabled, that log type exports its control plane logs to CloudWatch Logs. If a log type isn't enabled, that log type doesn't export its control plane logs. Each individual log type can be enabled or disabled independently.
", - "VpcConfigRequest$endpointPublicAccess": "Set this value to false
to disable public access for your cluster's Kubernetes API server endpoint. If you disable public access, your cluster's Kubernetes API server can receive only requests from within the cluster VPC. The default value for this parameter is true
, which enables public access for your Kubernetes API server. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
Set this value to true
to enable private access for your cluster's Kubernetes API server endpoint. If you enable private access, Kubernetes API requests from within your cluster's VPC use the private VPC endpoint. The default value for this parameter is false
, which disables private access for your Kubernetes API server. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
Set this value to false
to disable public access to your cluster's Kubernetes API server endpoint. If you disable public access, your cluster's Kubernetes API server can only receive requests from within the cluster VPC. The default value for this parameter is true
, which enables public access for your Kubernetes API server. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
Set this value to true
to enable private access for your cluster's Kubernetes API server endpoint. If you enable private access, Kubernetes API requests from within your cluster's VPC use the private VPC endpoint. The default value for this parameter is false
, which disables private access for your Kubernetes API server. If you disable private access and you have worker nodes or AWS Fargate pods in the cluster, then ensure that publicAccessCidrs
includes the necessary CIDR blocks for communication with the worker nodes or Fargate pods. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
The name of the Fargate profile.
", "FargateProfile$fargateProfileArn": "The full Amazon Resource Name (ARN) of the Fargate profile.
", "FargateProfile$clusterName": "The name of the Amazon EKS cluster that the Fargate profile belongs to.
", - "FargateProfile$podExecutionRoleArn": "The Amazon Resource Name (ARN) of the pod execution role to use for pods that match the selectors in the Fargate profile. For more information, see Pod Execution Role in the Amazon EKS User Guide.
", + "FargateProfile$podExecutionRoleArn": "The Amazon Resource Name (ARN) of the pod execution role to use for pods that match the selectors in the Fargate profile. For more information, see Pod Execution Role in the Amazon EKS User Guide.
", "FargateProfileLabel$key": null, "FargateProfileLabel$value": null, "FargateProfileSelector$namespace": "The Kubernetes namespace that the selector should match.
", @@ -603,11 +603,11 @@ "StringList": { "base": null, "refs": { - "CreateFargateProfileRequest$subnets": "The IDs of subnets to launch Fargate pods into. At this time, Fargate pods are not assigned public IP addresses, so only private subnets (with no direct route to an Internet Gateway) are accepted for this parameter.
", + "CreateFargateProfileRequest$subnets": "The IDs of subnets to launch your pods into. At this time, pods running on Fargate are not assigned public IP addresses, so only private subnets (with no direct route to an Internet Gateway) are accepted for this parameter.
", "CreateNodegroupRequest$subnets": "The subnets to use for the Auto Scaling group that is created for your node group. These subnets must have the tag key kubernetes.io/cluster/CLUSTER_NAME
with a value of shared
, where CLUSTER_NAME
is replaced with the name of your cluster.
The instance type to use for your node group. Currently, you can specify a single instance type for a node group. The default value for this parameter is t3.medium
. If you choose a GPU instance type, be sure to specify the AL2_x86_64_GPU
with the amiType
parameter.
An optional field that contains the resource IDs associated with the error.
", - "FargateProfile$subnets": "The IDs of subnets to launch Fargate pods into.
", + "FargateProfile$subnets": "The IDs of subnets to launch pods into.
", "Issue$resourceIds": "The AWS resources that are afflicted by this issue.
", "ListClustersResponse$clusters": "A list of all of the clusters for your account in the specified Region.
", "ListFargateProfilesResponse$fargateProfileNames": "A list of all of the Fargate profiles associated with the specified cluster.
", @@ -619,8 +619,10 @@ "UnsupportedAvailabilityZoneException$validZones": "The supported Availability Zones for your account. Choose subnets in these Availability Zones for your cluster.
", "VpcConfigRequest$subnetIds": "Specify subnets for your Amazon EKS worker nodes. Amazon EKS creates cross-account elastic network interfaces in these subnets to allow communication between your worker nodes and the Kubernetes control plane.
", "VpcConfigRequest$securityGroupIds": "Specify one or more security groups for the cross-account elastic network interfaces that Amazon EKS creates to use to allow communication between your worker nodes and the Kubernetes control plane. If you don't specify a security group, the default security group for your VPC is used.
", + "VpcConfigRequest$publicAccessCidrs": "The CIDR blocks that are allowed access to your cluster's public Kubernetes API server endpoint. Communication to the endpoint from addresses outside of the CIDR blocks that you specify is denied. The default value is 0.0.0.0/0
. If you've disabled private endpoint access and you have worker nodes or AWS Fargate pods in the cluster, then ensure that you specify the necessary CIDR blocks. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
The subnets associated with your cluster.
", - "VpcConfigResponse$securityGroupIds": "The security groups associated with the cross-account elastic network interfaces that are used to allow communication between your worker nodes and the Kubernetes control plane.
" + "VpcConfigResponse$securityGroupIds": "The security groups associated with the cross-account elastic network interfaces that are used to allow communication between your worker nodes and the Kubernetes control plane.
", + "VpcConfigResponse$publicAccessCidrs": "The CIDR blocks that are allowed access to your cluster's public Kubernetes API server endpoint. Communication to the endpoint from addresses outside of the listed CIDR blocks is denied. The default value is 0.0.0.0/0
. If you've disabled private endpoint access and you have worker nodes or AWS Fargate pods in the cluster, then ensure that the necessary CIDR blocks are listed. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. For more information, see the User Guide.
", "operations": { + "CreateAccessPoint": "Creates an EFS access point. An access point is an application-specific view into an EFS file system that applies an operating system user and group, and a file system path, to any file system request made through the access point. The operating system user and group override any identity information provided by the NFS client. The file system path is exposed as the access point's root directory. Applications using the access point can only access data in its own directory and below. To learn more, see Mounting a File System Using EFS Access Points.
This operation requires permissions for the elasticfilesystem:CreateAccessPoint action.
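A CreateAccessPoint request, sketched here with placeholder IDs and values, sets the POSIX identity and root directory that the access point enforces:

```json
{
  "ClientToken": "access-point-1",
  "FileSystemId": "fs-0123456789abcdef0",
  "PosixUser": { "Uid": 1001, "Gid": 1001 },
  "RootDirectory": {
    "Path": "/sample-app",
    "CreationInfo": { "OwnerUid": 1001, "OwnerGid": 1001, "Permissions": "750" }
  }
}
```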
Creates a new, empty file system. The operation requires a creation token in the request that Amazon EFS uses to ensure idempotent creation (calling the operation with the same creation token has no effect). If a file system does not currently exist that is owned by the caller's AWS account with the specified creation token, this operation does the following:
Creates a new, empty file system. The file system will have an Amazon EFS assigned ID, and an initial lifecycle state creating.
Returns with the description of the created file system.
Otherwise, this operation returns a FileSystemAlreadyExists error with the ID of the existing file system.
For basic use cases, you can use a randomly generated UUID for the creation token.
The idempotent operation allows you to retry a CreateFileSystem call without risk of creating an extra file system. This can happen when an initial call fails in a way that leaves it uncertain whether or not a file system was actually created. An example might be that a transport-level timeout occurred or your connection was reset. As long as you use the same creation token, if the initial call had succeeded in creating a file system, the client can learn of its existence from the FileSystemAlreadyExists error.
The CreateFileSystem call returns while the file system's lifecycle state is still creating. You can check the file system creation status by calling the DescribeFileSystems operation, which among other things returns the file system state.
This operation also takes an optional PerformanceMode parameter that you choose for your file system. We recommend generalPurpose performance mode for most file systems. File systems using the maxIO performance mode can scale to higher levels of aggregate throughput and operations per second with a tradeoff of slightly higher latencies for most file operations. The performance mode can't be changed after the file system has been created. For more information, see Amazon EFS: Performance Modes.
After the file system is fully created, Amazon EFS sets its lifecycle state to available, at which point you can create one or more mount targets for the file system in your VPC. For more information, see CreateMountTarget. You mount your Amazon EFS file system on an EC2 instance in your VPC by using the mount target. For more information, see Amazon EFS: How it Works.
This operation requires permissions for the elasticfilesystem:CreateFileSystem action.
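The idempotency mechanism described above hinges on the creation token; a minimal request sketch with a placeholder token value:

```json
{
  "CreationToken": "3f2b1c9e-placeholder-uuid",
  "PerformanceMode": "generalPurpose"
}
```

Retrying this exact body after an ambiguous failure either returns the already-created file system's description or a FileSystemAlreadyExists error carrying its ID.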
Creates a mount target for a file system. You can then mount the file system on EC2 instances by using the mount target.
You can create one mount target in each Availability Zone in your VPC. All EC2 instances in a VPC within a given Availability Zone share a single mount target for a given file system. If you have multiple subnets in an Availability Zone, you create a mount target in one of the subnets. EC2 instances do not need to be in the same subnet as the mount target in order to access their file system. For more information, see Amazon EFS: How it Works.
In the request, you also specify a file system ID for which you are creating the mount target, and the file system's lifecycle state must be available. For more information, see DescribeFileSystems.
In the request, you also provide a subnet ID, which determines the following:
VPC in which Amazon EFS creates the mount target
Availability Zone in which Amazon EFS creates the mount target
IP address range from which Amazon EFS selects the IP address of the mount target (if you don't specify an IP address in the request)
After creating the mount target, Amazon EFS returns a response that includes a MountTargetId and an IpAddress. You use this IP address when mounting the file system in an EC2 instance. You can also use the mount target's DNS name when mounting the file system. The EC2 instance on which you mount the file system by using the mount target can resolve the mount target's DNS name to its IP address. For more information, see How it Works: Implementation Overview.
Note that you can create mount targets for a file system in only one VPC, and there can be only one mount target per Availability Zone. That is, if the file system already has one or more mount targets created for it, the subnet specified in the request to add another mount target must meet the following requirements:
Must belong to the same VPC as the subnets of the existing mount targets
Must not be in the same Availability Zone as any of the subnets of the existing mount targets
If the request satisfies the requirements, Amazon EFS does the following:
Creates a new mount target in the specified subnet.
Also creates a new network interface in the subnet as follows:
If the request provides an IpAddress, Amazon EFS assigns that IP address to the network interface. Otherwise, Amazon EFS assigns a free address in the subnet (in the same way that the Amazon EC2 CreateNetworkInterface call does when a request does not specify a primary private IP address).
If the request provides SecurityGroups, this network interface is associated with those security groups. Otherwise, it belongs to the default security group for the subnet's VPC.
Assigns the description Mount target fsmt-id for file system fs-id where fsmt-id is the mount target ID, and fs-id is the FileSystemId.
Sets the requesterManaged property of the network interface to true, and the requesterId value to EFS.
.
Each Amazon EFS mount target has one corresponding requester-managed EC2 network interface. After the network interface is created, Amazon EFS sets the NetworkInterfaceId
field in the mount target's description to the network interface ID, and the IpAddress
field to its address. If network interface creation fails, the entire CreateMountTarget
operation fails.
The CreateMountTarget
call returns only after creating the network interface, but while the mount target state is still creating
, you can check the mount target creation status by calling the DescribeMountTargets operation, which among other things returns the mount target state.
We recommend that you create a mount target in each of the Availability Zones. There are cost considerations for using a file system in an Availability Zone through a mount target created in another Availability Zone. For more information, see Amazon EFS. In addition, by always using a mount target local to the instance's Availability Zone, you eliminate a partial failure scenario. If the Availability Zone in which your mount target is created goes down, then you can't access your file system through that mount target.
This operation requires permissions for the following action on the file system:
elasticfilesystem:CreateMountTarget
This operation also requires permissions for the following Amazon EC2 actions:
ec2:DescribeSubnets
ec2:DescribeNetworkInterfaces
ec2:CreateNetworkInterface
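For illustration, a minimal Go sketch of this call, assuming the v0.x aws-sdk-go-v2 request/Send client pattern; the file system, subnet, and security group IDs are placeholders:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/efs"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatalf("unable to load SDK config: %v", err)
	}
	svc := efs.New(cfg)

	// The subnet determines the VPC, Availability Zone, and IP address
	// range of the mount target; IpAddress and SecurityGroups are optional.
	req := svc.CreateMountTargetRequest(&efs.CreateMountTargetInput{
		FileSystemId:   aws.String("fs-01234567"),              // placeholder
		SubnetId:       aws.String("subnet-0123456789abcdef0"), // placeholder
		SecurityGroups: []string{"sg-0123456789abcdef0"},       // placeholder
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatalf("CreateMountTarget failed: %v", err)
	}
	// The response includes the MountTargetId and the IpAddress to mount against.
	fmt.Println(*resp.MountTargetId, *resp.IpAddress)
}
```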
Creates or overwrites tags associated with a file system. Each tag is a key-value pair. If a tag key specified in the request already exists on the file system, this operation overwrites its value with the value provided in the request. If you add the Name tag to your file system, Amazon EFS returns it in the response to the DescribeFileSystems operation.
This operation requires permission for the elasticfilesystem:CreateTags action.
Deletes the specified access point. After deletion is complete, new clients can no longer connect to the access point. Clients connected to the access point at the time of deletion will continue to function until they terminate their connection.
This operation requires permissions for the elasticfilesystem:DeleteAccessPoint action.
Deletes a file system, permanently severing access to its contents. Upon return, the file system no longer exists and you can't access any contents of the deleted file system.
You can't delete a file system that is in use. That is, if the file system has any mount targets, you must first delete them. For more information, see DescribeMountTargets and DeleteMountTarget.
The DeleteFileSystem call returns while the file system state is still deleting. You can check the file system deletion status by calling the DescribeFileSystems operation, which returns a list of file systems in your account. If you pass the file system ID or creation token for the deleted file system, DescribeFileSystems returns a 404 FileSystemNotFound error.
This operation requires permissions for the elasticfilesystem:DeleteFileSystem action.
Deletes the FileSystemPolicy for the specified file system. The default FileSystemPolicy goes into effect once the existing policy is deleted. For more information about the default file system policy, see Using Resource-based Policies with EFS.
This operation requires permissions for the elasticfilesystem:DeleteFileSystemPolicy action.
Deletes the specified mount target.
This operation forcibly breaks any mounts of the file system by using the mount target that is being deleted, which might disrupt instances or applications using those mounts. To avoid applications getting cut off abruptly, you might consider unmounting any mounts of the mount target, if feasible. The operation also deletes the associated network interface. Uncommitted writes might be lost, but breaking a mount target using this operation does not corrupt the file system itself. The file system you created remains. You can mount an EC2 instance in your VPC by using another mount target.
This operation requires permissions for the following action on the file system:
elasticfilesystem:DeleteMountTarget
The DeleteMountTarget call returns while the mount target state is still deleting. You can check the mount target deletion status by calling the DescribeMountTargets operation, which returns a list of mount target descriptions for the given file system.
The operation also requires permissions for the following Amazon EC2 action on the mount target's network interface:
ec2:DeleteNetworkInterface
Deletes the specified tags from a file system. If the DeleteTags request includes a tag key that doesn't exist, Amazon EFS ignores it and doesn't cause an error. For more information about tags and related restrictions, see Tag Restrictions in the AWS Billing and Cost Management User Guide.
This operation requires permissions for the elasticfilesystem:DeleteTags action.
Returns the description of a specific Amazon EFS access point if the AccessPointId is provided. If you provide an EFS FileSystemId, it returns descriptions of all access points for that file system. You can provide either an AccessPointId or a FileSystemId in the request, but not both.
This operation requires permissions for the elasticfilesystem:DescribeAccessPoints action.
Returns the FileSystemPolicy for the specified EFS file system.
This operation requires permissions for the elasticfilesystem:DescribeFileSystemPolicy action.
Returns the description of a specific Amazon EFS file system if either the file system CreationToken or the FileSystemId is provided. Otherwise, it returns descriptions of all file systems owned by the caller's AWS account in the AWS Region of the endpoint that you're calling.
When retrieving all file system descriptions, you can optionally specify the MaxItems parameter to limit the number of descriptions in a response. Currently, this number is automatically set to 10. If more file system descriptions remain, Amazon EFS returns a NextMarker, an opaque token, in the response. In this case, you should send a subsequent request with the Marker request parameter set to the value of NextMarker.
To retrieve a list of your file system descriptions, this operation is used in an iterative process, where DescribeFileSystems is called first without the Marker and then the operation continues to call it with the Marker parameter set to the value of the NextMarker from the previous response until the response has no NextMarker.
The order of file systems returned in the response of one DescribeFileSystems call and the order of file systems returned across the responses of a multi-call iteration is unspecified.
This operation requires permissions for the elasticfilesystem:DescribeFileSystems action.
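A minimal sketch of the Marker/NextMarker iteration described above, assuming the v0.x aws-sdk-go-v2 request/Send pattern (error handling shortened for brevity):

```go
// cfg from external.LoadDefaultAWSConfig(), as in the earlier sketch.
svc := efs.New(cfg)

input := &efs.DescribeFileSystemsInput{}
for {
	resp, err := svc.DescribeFileSystemsRequest(input).Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	for _, fs := range resp.FileSystems {
		fmt.Println(*fs.FileSystemId)
	}
	if resp.NextMarker == nil {
		break // last page reached
	}
	input.Marker = resp.NextMarker // request the next page
}
```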
Returns the current LifecycleConfiguration object for the specified Amazon EFS file system. EFS lifecycle management uses the LifecycleConfiguration object to identify which files to move to the EFS Infrequent Access (IA) storage class. For a file system without a LifecycleConfiguration object, the call returns an empty array in the response.
This operation requires permissions for the elasticfilesystem:DescribeLifecycleConfiguration operation.
Returns the security groups currently in effect for a mount target. This operation requires that the network interface of the mount target has been created and the lifecycle state of the mount target is not deleted.
This operation requires permissions for the following actions:
elasticfilesystem:DescribeMountTargetSecurityGroups action on the mount target's file system.
ec2:DescribeNetworkInterfaceAttribute action on the mount target's network interface.
Returns the descriptions of all the current mount targets, or a specific mount target, for a file system. When requesting all of the current mount targets, the order of mount targets returned in the response is unspecified.
This operation requires permissions for the elasticfilesystem:DescribeMountTargets action, on either the file system ID that you specify in FileSystemId, or on the file system of the mount target that you specify in MountTargetId.
Returns the tags associated with a file system. The order of tags returned in the response of one DescribeTags call and the order of tags returned across the responses of a multiple-call iteration (when using pagination) is unspecified.
This operation requires permissions for the elasticfilesystem:DescribeTags action.
Lists all tags for a top-level EFS resource. You must provide the ID of the resource that you want to retrieve the tags for.
This operation requires permissions for the elasticfilesystem:DescribeAccessPoints action.
Modifies the set of security groups in effect for a mount target.
When you create a mount target, Amazon EFS also creates a new network interface. For more information, see CreateMountTarget. This operation replaces the security groups in effect for the network interface associated with a mount target, with the SecurityGroups provided in the request. This operation requires that the network interface of the mount target has been created and the lifecycle state of the mount target is not deleted.
The operation requires permissions for the following actions:
elasticfilesystem:ModifyMountTargetSecurityGroups action on the mount target's file system.
ec2:ModifyNetworkInterfaceAttribute action on the mount target's network interface.
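A short sketch of replacing a mount target's security groups, under the same v0.x request/Send assumptions; the mount target and security group IDs are placeholders:

```go
// svc is the EFS client from efs.New(cfg), as in the earlier sketches.
req := svc.ModifyMountTargetSecurityGroupsRequest(&efs.ModifyMountTargetSecurityGroupsInput{
	MountTargetId:  aws.String("fsmt-0123456789abcdef0"), // placeholder
	SecurityGroups: []string{"sg-0123456789abcdef0"},     // replaces the full set in effect
})
if _, err := req.Send(context.TODO()); err != nil {
	log.Fatalf("ModifyMountTargetSecurityGroups failed: %v", err)
}
```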
Applies an Amazon EFS FileSystemPolicy to an Amazon EFS file system. A file system policy is an IAM resource-based policy and can contain multiple policy statements. A file system always has exactly one file system policy, which can be the default policy or an explicit policy set or updated using this API operation. When an explicit policy is set, it overrides the default policy. For more information about the default file system policy, see Using Resource-based Policies with EFS.
This operation requires permissions for the elasticfilesystem:PutFileSystemPolicy action.
Enables lifecycle management by creating a new LifecycleConfiguration object. A LifecycleConfiguration object defines when files in an Amazon EFS file system are automatically transitioned to the lower-cost EFS Infrequent Access (IA) storage class. A LifecycleConfiguration applies to all files in a file system.
Each Amazon EFS file system supports one lifecycle configuration, which applies to all files in the file system. If a LifecycleConfiguration object already exists for the specified file system, a PutLifecycleConfiguration call modifies the existing configuration. A PutLifecycleConfiguration call with an empty LifecyclePolicies array in the request body deletes any existing LifecycleConfiguration and disables lifecycle management.
In the request, specify the following:
The ID for the file system for which you are enabling, disabling, or modifying lifecycle management.
A LifecyclePolicies array of LifecyclePolicy objects that define when files are moved to the IA storage class. The array can contain only one LifecyclePolicy item.
This operation requires permissions for the elasticfilesystem:PutLifecycleConfiguration operation.
To apply a LifecycleConfiguration object to an encrypted file system, you need the same AWS Key Management Service (AWS KMS) permissions as when you created the encrypted file system.
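A minimal sketch of enabling lifecycle management, assuming the v0.x request/Send pattern; the enum value is written as a string conversion of the API's AFTER_30_DAYS rule and the file system ID is a placeholder:

```go
// svc is the EFS client from efs.New(cfg), as in the earlier sketches.
req := svc.PutLifecycleConfigurationRequest(&efs.PutLifecycleConfigurationInput{
	FileSystemId: aws.String("fs-01234567"), // placeholder
	LifecyclePolicies: []efs.LifecyclePolicy{
		{TransitionToIA: efs.TransitionToIARules("AFTER_30_DAYS")},
	},
})
if _, err := req.Send(context.TODO()); err != nil {
	log.Fatalf("PutLifecycleConfiguration failed: %v", err)
}
// Sending an empty LifecyclePolicies array instead deletes the existing
// configuration and disables lifecycle management.
```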
Creates a tag for an EFS resource. You can create tags for EFS file systems and access points using this API operation.
This operation requires permissions for the elasticfilesystem:TagResource action.
Removes tags from an EFS resource. You can remove tags from EFS file systems and access points using this API operation.
This operation requires permissions for the elasticfilesystem:UntagResource action.
Updates the throughput mode or the amount of provisioned throughput of an existing file system.
" }, "shapes": { + "AccessPointAlreadyExists": { + "base": "Returned if the access point you are trying to create already exists, with the creation token you provided in the request.
", + "refs": { + } + }, + "AccessPointArn": { + "base": null, + "refs": { + "AccessPointDescription$AccessPointArn": "The unique Amazon Resource Name (ARN) associated with the access point.
" + } + }, + "AccessPointDescription": { + "base": "Provides a description of an EFS file system access point.
", + "refs": { + "AccessPointDescriptions$member": null + } + }, + "AccessPointDescriptions": { + "base": null, + "refs": { + "DescribeAccessPointsResponse$AccessPoints": "An array of access point descriptions.
" + } + }, + "AccessPointId": { + "base": null, + "refs": { + "AccessPointAlreadyExists$AccessPointId": null, + "AccessPointDescription$AccessPointId": "The ID of the access point, assigned by Amazon EFS.
", + "DeleteAccessPointRequest$AccessPointId": "The ID of the access point that you want to delete.
", + "DescribeAccessPointsRequest$AccessPointId": "(Optional) Specifies an EFS access point to describe in the response; mutually exclusive with FileSystemId
.
(Optional) The ID of the access point whose mount targets you want to list. It must be included in your request if a FileSystemId
or MountTargetId
is not included in your request. Accepts either an access point ID or ARN as input.
Returned if the AWS account has already created the maximum number of access points allowed per file system.
", + "refs": { + } + }, + "AccessPointNotFound": { + "base": "Returned if the specified AccessPointId
value doesn't exist in the requester's AWS account.
The unique and consistent identifier of the Availability Zone (AZ) that the mount target resides in. For example, use1-az1
is an AZ ID for the us-east-1 Region and it has the same location in every AWS account.
The name of the Availability Zone (AZ) that the mount target resides in. AZs are independently mapped to names for each AWS account. For example, the Availability Zone us-east-1a
for your AWS account might not be the same location as us-east-1a
for another AWS account.
Identifies the AWS account that owns the access point resource.
", "FileSystemDescription$OwnerId": "The AWS account that created the file system. If the file system was created by an IAM user, the parent account to which the user belongs is the owner.
", "MountTargetDescription$OwnerId": "AWS account ID that owns the resource.
" } @@ -30,6 +95,24 @@ "refs": { } }, + "BypassPolicyLockoutSafetyCheck": { + "base": null, + "refs": { + "PutFileSystemPolicyRequest$BypassPolicyLockoutSafetyCheck": "(Optional) A flag to indicate whether to bypass the FileSystemPolicy
lockout safety check. The policy lockout safety check determines whether the policy in the request will prevent the principal that is making the request from making future PutFileSystemPolicy
requests on the file system. Set BypassPolicyLockoutSafetyCheck
to True
only when you intend to prevent the principal that is making the request from making a subsequent PutFileSystemPolicy
request on the file system. The default value is False.
The opaque string specified in the request to ensure idempotent creation.
", + "CreateAccessPointRequest$ClientToken": "A string of up to 64 ASCII characters that Amazon EFS uses to ensure idempotent creation.
" + } + }, + "CreateAccessPointRequest": { + "base": null, + "refs": { + } + }, "CreateFileSystemRequest": { "base": null, "refs": { @@ -45,6 +128,12 @@ "refs": { } }, + "CreationInfo": { + "base": "Required if the RootDirectory
> Path
specified does not exist. Specifies the POSIX IDs and permissions to apply to the access point's RootDirectory
> Path
. If the access point root directory does not exist, EFS creates it with these settings when a client connects to the access point. When specifying CreationInfo
, you must include values for all properties.
If you do not provide CreationInfo
and the specified RootDirectory
does not exist, attempts to mount the file system using the access point will fail.
(Optional) Specifies the POSIX IDs and permissions to apply to the access point's RootDirectory
. If the RootDirectory
> Path
specified does not exist, EFS creates the root directory using the CreationInfo
settings when a client connects to an access point. When specifying the CreationInfo
, you must provide values for all properties.
If you do not provide CreationInfo
and the specified RootDirectory
> Path
does not exist, attempts to mount the file system using the access point will fail.
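For illustration, a minimal sketch of creating an access point with a RootDirectory and CreationInfo, assuming the v0.x request/Send pattern and the shape fields shown in this diff; the IDs, path, and POSIX values are placeholders:

```go
// svc is the EFS client from efs.New(cfg), as in the earlier sketches.
req := svc.CreateAccessPointRequest(&efs.CreateAccessPointInput{
	FileSystemId: aws.String("fs-01234567"), // placeholder
	PosixUser: &efs.PosixUser{
		Uid: aws.Int64(1001),
		Gid: aws.Int64(1001),
	},
	RootDirectory: &efs.RootDirectory{
		Path: aws.String("/myapp"), // placeholder path
		CreationInfo: &efs.CreationInfo{ // all properties must be provided
			OwnerUid:    aws.Int64(1001),
			OwnerGid:    aws.Int64(1001),
			Permissions: aws.String("0755"),
		},
	},
})
resp, err := req.Send(context.TODO())
if err != nil {
	log.Fatalf("CreateAccessPoint failed: %v", err)
}
fmt.Println(*resp.AccessPointId)
```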
The opaque string specified in the request.
" } }, + "DeleteAccessPointRequest": { + "base": null, + "refs": { + } + }, + "DeleteFileSystemPolicyRequest": { + "base": null, + "refs": { + } + }, "DeleteFileSystemRequest": { "base": "", "refs": { @@ -73,6 +172,21 @@ "refs": { } }, + "DescribeAccessPointsRequest": { + "base": null, + "refs": { + } + }, + "DescribeAccessPointsResponse": { + "base": null, + "refs": { + } + }, + "DescribeFileSystemPolicyRequest": { + "base": null, + "refs": { + } + }, "DescribeFileSystemsRequest": { "base": "", "refs": { @@ -128,6 +242,9 @@ "ErrorCode": { "base": null, "refs": { + "AccessPointAlreadyExists$ErrorCode": null, + "AccessPointLimitExceeded$ErrorCode": null, + "AccessPointNotFound$ErrorCode": null, "BadRequest$ErrorCode": null, "DependencyTimeout$ErrorCode": null, "FileSystemAlreadyExists$ErrorCode": null, @@ -138,11 +255,13 @@ "IncorrectMountTargetState$ErrorCode": null, "InsufficientThroughputCapacity$ErrorCode": null, "InternalServerError$ErrorCode": null, + "InvalidPolicyException$ErrorCode": null, "IpAddressInUse$ErrorCode": null, "MountTargetConflict$ErrorCode": null, "MountTargetNotFound$ErrorCode": null, "NetworkInterfaceLimitExceeded$ErrorCode": null, "NoFreeAddressesInSubnet$ErrorCode": null, + "PolicyNotFound$ErrorCode": null, "SecurityGroupLimitExceeded$ErrorCode": null, "SecurityGroupNotFound$ErrorCode": null, "SubnetNotFound$ErrorCode": null, @@ -154,6 +273,9 @@ "ErrorMessage": { "base": null, "refs": { + "AccessPointAlreadyExists$Message": null, + "AccessPointLimitExceeded$Message": null, + "AccessPointNotFound$Message": null, "BadRequest$Message": null, "DependencyTimeout$Message": null, "FileSystemAlreadyExists$Message": null, @@ -164,11 +286,13 @@ "IncorrectMountTargetState$Message": null, "InsufficientThroughputCapacity$Message": null, "InternalServerError$Message": null, + "InvalidPolicyException$Message": null, "IpAddressInUse$Message": null, "MountTargetConflict$Message": null, "MountTargetNotFound$Message": null, "NetworkInterfaceLimitExceeded$Message": null, "NoFreeAddressesInSubnet$Message": null, + "PolicyNotFound$Message": null, "SecurityGroupLimitExceeded$Message": null, "SecurityGroupNotFound$Message": null, "SubnetNotFound$Message": null, @@ -197,17 +321,24 @@ "FileSystemId": { "base": null, "refs": { + "AccessPointDescription$FileSystemId": "The ID of the EFS file system that the access point applies to.
", + "CreateAccessPointRequest$FileSystemId": "The ID of the EFS file system that the access point provides access to.
", "CreateMountTargetRequest$FileSystemId": "The ID of the file system for which to create the mount target.
", "CreateTagsRequest$FileSystemId": "The ID of the file system whose tags you want to modify (String). This operation modifies the tags only, not the file system.
", + "DeleteFileSystemPolicyRequest$FileSystemId": "Specifies the EFS file system for which to delete the FileSystemPolicy
.
The ID of the file system you want to delete.
", "DeleteTagsRequest$FileSystemId": "The ID of the file system whose tags you want to delete (String).
", + "DescribeAccessPointsRequest$FileSystemId": "(Optional) If you provide a FileSystemId
, EFS returns all access points for that file system; mutually exclusive with AccessPointId
.
Specifies which EFS file system to retrieve the FileSystemPolicy
for.
(Optional) ID of the file system whose description you want to retrieve (String).
", "DescribeLifecycleConfigurationRequest$FileSystemId": "The ID of the file system whose LifecycleConfiguration
object you want to retrieve (String).
(Optional) ID of the file system whose mount targets you want to list (String). It must be included in your request if MountTargetId
is not included.
(Optional) ID of the file system whose mount targets you want to list (String). It must be included in your request if an AccessPointId
or MountTargetId
is not included. Accepts either a file system ID or ARN as input.
The ID of the file system whose tag set you want to retrieve.
", "FileSystemAlreadyExists$FileSystemId": null, "FileSystemDescription$FileSystemId": "The ID of the file system, assigned by Amazon EFS.
", + "FileSystemPolicyDescription$FileSystemId": "Specifies the EFS file system to which the FileSystemPolicy
applies.
The ID of the file system for which the mount target is intended.
", + "PutFileSystemPolicyRequest$FileSystemId": "The ID of the EFS file system that you want to create or update the FileSystemPolicy
for.
The ID of the file system for which you are creating the LifecycleConfiguration
object (String).
The ID of the file system that you want to update.
" } @@ -234,6 +365,11 @@ "FileSystemSize$ValueInStandard": "The latest known metered size (in bytes) of data stored in the Standard storage class.
" } }, + "FileSystemPolicyDescription": { + "base": null, + "refs": { + } + }, "FileSystemSize": { "base": "The latest known metered size (in bytes) of data stored in the file system, in its Value
field, and the time at which that size was determined in its Timestamp
field. The value doesn't represent the size of a consistent snapshot of the file system, but it is eventually consistent when there are no writes to the file system. That is, the value represents the actual size only if the file system is not modified for a period longer than a couple of hours. Otherwise, the value is not necessarily the exact size the file system was at any instant in time.
The latest known metered size (in bytes) of data stored in the file system.
" } }, + "Gid": { + "base": null, + "refs": { + "PosixUser$Gid": "The POSIX group ID used for all file system operations using this access point.
", + "SecondaryGids$member": null + } + }, "IncorrectFileSystemLifeCycleState": { "base": "Returned if the file system's lifecycle state is not \"available\".
", "refs": { @@ -266,6 +409,11 @@ "refs": { } }, + "InvalidPolicyException": { + "base": "Returned if the FileSystemPolicy
is malformed or contains an error such as an invalid parameter value or a missing required parameter. Returned in the case of a policy lockout safety check error.
Identifies the lifecycle phase of the access point.
", "FileSystemDescription$LifeCycleState": "The lifecycle phase of the file system.
", "MountTargetDescription$LifeCycleState": "Lifecycle state of the mount target.
" } @@ -310,6 +459,16 @@ "LifecyclePolicies$member": null } }, + "ListTagsForResourceRequest": { + "base": null, + "refs": { + } + }, + "ListTagsForResourceResponse": { + "base": null, + "refs": { + } + }, "Marker": { "base": null, "refs": { @@ -327,9 +486,16 @@ "MaxItems": { "base": null, "refs": { - "DescribeFileSystemsRequest$MaxItems": "(Optional) Specifies the maximum number of file systems to return in the response (integer). Currently, this number is automatically set to 10, and other values are ignored. The response is paginated at 10 per page if you have more than 10 file systems.
", - "DescribeMountTargetsRequest$MaxItems": "(Optional) Maximum number of mount targets to return in the response. Currently, this number is automatically set to 10, and other values are ignored. The response is paginated at 10 per page if you have more than 10 mount targets.
", - "DescribeTagsRequest$MaxItems": "(Optional) The maximum number of file system tags to return in the response. Currently, this number is automatically set to 10, and other values are ignored. The response is paginated at 10 per page if you have more than 10 tags.
" + "DescribeFileSystemsRequest$MaxItems": "(Optional) Specifies the maximum number of file systems to return in the response (integer). This number is automatically set to 100. The response is paginated at 100 per page if you have more than 100 file systems.
", + "DescribeMountTargetsRequest$MaxItems": "(Optional) Maximum number of mount targets to return in the response. Currently, this number is automatically set to 10, and other values are ignored. The response is paginated at 100 per page if you have more than 100 mount targets.
", + "DescribeTagsRequest$MaxItems": "(Optional) The maximum number of file system tags to return in the response. Currently, this number is automatically set to 100, and other values are ignored. The response is paginated at 100 per page if you have more than 100 tags.
" + } + }, + "MaxResults": { + "base": null, + "refs": { + "DescribeAccessPointsRequest$MaxResults": "(Optional) When retrieving all access points for a file system, you can optionally specify the MaxItems
parameter to limit the number of objects returned in a response. The default value is 100.
(Optional) Specifies the maximum number of tag objects to return in the response. The default value is 100.
" } }, "ModifyMountTargetSecurityGroupsRequest": { @@ -365,7 +531,7 @@ "refs": { "DeleteMountTargetRequest$MountTargetId": "The ID of the mount target to delete (String).
", "DescribeMountTargetSecurityGroupsRequest$MountTargetId": "The ID of the mount target whose security groups you want to retrieve.
", - "DescribeMountTargetsRequest$MountTargetId": "(Optional) ID of the mount target that you want to have described (String). It must be included in your request if FileSystemId
is not included.
(Optional) ID of the mount target that you want to have described (String). It must be included in your request if FileSystemId
is not included. Accepts either a mount target ID or ARN as input.
The ID of the mount target whose security groups you want to modify.
", "MountTargetDescription$MountTargetId": "System-assigned mount target ID.
" } @@ -375,6 +541,12 @@ "refs": { } }, + "Name": { + "base": null, + "refs": { + "AccessPointDescription$Name": "The name of the access point. This is the value of the Name
tag.
Specifies the POSIX group ID to apply to the RootDirectory
. Accepts values from 0 to 2^32 (4294967295).
Specifies the POSIX user ID to apply to the RootDirectory
. Accepts values from 0 to 2^32 (4294967295).
Specifies the path on the EFS file system to expose as the root directory to NFS clients using the access point to access the EFS file system. A path can have up to four subdirectories. If the specified path does not exist, you are required to provide the CreationInfo
.
The performance mode of the file system.
" } }, + "Permissions": { + "base": null, + "refs": { + "CreationInfo$Permissions": "Specifies the POSIX permissions to apply to the RootDirectory
, in the format of an octal number representing the file's mode bits.
The JSON formatted FileSystemPolicy
for the EFS file system.
The FileSystemPolicy
that you're creating. Accepts a JSON formatted policy definition. To find out more about the elements that make up a file system policy, see EFS Resource-based Policies.
Returned if the default file system policy is in effect for the EFS file system specified.
", + "refs": { + } + }, + "PosixUser": { + "base": "The full POSIX identity, including the user ID, group ID, and any secondary group IDs, on the access point that is used for all file system operations performed by NFS clients using the access point.
", + "refs": { + "AccessPointDescription$PosixUser": "The full POSIX identity, including the user ID, group ID, and secondary group IDs on the access point that is used for all file operations by NFS clients using the access point.
", + "CreateAccessPointRequest$PosixUser": "The operating system user and group applied to all file system requests made using the access point.
" + } + }, "ProvisionedThroughputInMibps": { "base": null, "refs": { @@ -406,11 +621,37 @@ "UpdateFileSystemRequest$ProvisionedThroughputInMibps": "(Optional) The amount of throughput, in MiB/s, that you want to provision for your file system. Valid values are 1-1024. Required if ThroughputMode
is changed to provisioned
on update. If you're not updating the amount of provisioned throughput for your file system, you don't need to provide this value in your request.
Specifies the EFS resource you want to retrieve tags for. You can retrieve tags for EFS file systems and access points using this API endpoint.
", + "TagResourceRequest$ResourceId": "The ID specifying the EFS resource that you want to create a tag for.
", + "UntagResourceRequest$ResourceId": "Specifies the EFS resource that you want to remove tags from.
" + } + }, + "RootDirectory": { + "base": "Specifies the directory on the Amazon EFS file system that the access point provides access to. The access point exposes the specified file system path as the root directory of your file system to applications using the access point. NFS clients using the access point can only access data in the access point's RootDirectory
and it's subdirectories.
The directory on the Amazon EFS file system that the access point exposes as the root directory to NFS clients using the access point.
", + "CreateAccessPointRequest$RootDirectory": "Specifies the directory on the Amazon EFS file system that the access point exposes as the root directory of your file system to NFS clients using the access point. The clients using the access point can only access the root directory and below. If the RootDirectory
> Path
specified does not exist, EFS creates it and applies the CreationInfo
settings when a client connects to an access point. When specifying a RootDirectory
, you need to provide the Path
, and the CreationInfo
is optional.
Secondary POSIX group IDs used for all file system operations using this access point.
" + } + }, "SecurityGroup": { "base": null, "refs": { @@ -463,7 +704,13 @@ "TagKeys": { "base": null, "refs": { - "DeleteTagsRequest$TagKeys": "A list of tag keys to delete.
" + "DeleteTagsRequest$TagKeys": "A list of tag keys to delete.
", + "UntagResourceRequest$TagKeys": "The keys of the key:value tag pairs that you want to remove from the specified EFS resource.
" + } + }, + "TagResourceRequest": { + "base": null, + "refs": { } }, "TagValue": { @@ -476,10 +723,14 @@ "Tags": { "base": null, "refs": { + "AccessPointDescription$Tags": "The tags associated with the access point, presented as an array of Tag objects.
", + "CreateAccessPointRequest$Tags": "Creates tags associated with the access point. Each tag is a key-value pair.
", "CreateFileSystemRequest$Tags": "A value that specifies to create one or more tags associated with the file system. Each tag is a user-defined key-value pair. Name your file system on creation by including a \"Key\":\"Name\",\"Value\":\"{value}\"
key-value pair.
An array of Tag
objects to add. Each Tag
object is a key-value pair.
Returns tags associated with the file system as an array of Tag
objects.
The tags associated with the file system, presented as an array of Tag
objects.
The tags associated with the file system, presented as an array of Tag
objects.
An array of the tags for the specified EFS resource.
", + "TagResourceRequest$Tags": "" } }, "ThroughputLimitExceeded": { @@ -502,6 +753,15 @@ "FileSystemSize$Timestamp": "The time at which the size of data, returned in the Value
field, was determined. The value is the integer number of seconds since 1970-01-01T00:00:00Z.
NextToken
is present if the response is paginated. You can use NextMarker
in the subsequent request to fetch the next page of access point descriptions.
Present if there are more access points than returned in the response. You can use the NextMarker in the subsequent request to fetch the additional descriptions.
", + "ListTagsForResourceRequest$NextToken": "You can use NextToken
in a subsequent request to fetch the next page of access point descriptions if the response payload was paginated.
NextToken
is present if the response payload is paginated. You can use NextToken
in a subsequent request to fetch the next page of access point descriptions.
Returned if you don’t wait at least 24 hours before changing the throughput mode, or decreasing the Provisioned Throughput value.
", "refs": { @@ -513,11 +773,22 @@ "LifecyclePolicy$TransitionToIA": "A value that describes the period of time that a file is not accessed, after which it transitions to the IA storage class. Metadata operations such as listing the contents of a directory don't count as file access events.
" } }, + "Uid": { + "base": null, + "refs": { + "PosixUser$Uid": "The POSIX user ID used for all file system operations using this access point.
" + } + }, "UnsupportedAvailabilityZone": { "base": "", "refs": { } }, + "UntagResourceRequest": { + "base": null, + "refs": { + } + }, "UpdateFileSystemRequest": { "base": null, "refs": { diff --git a/models/apis/elasticfilesystem/2015-02-01/paginators-1.json b/models/apis/elasticfilesystem/2015-02-01/paginators-1.json index 5677bd8e4a2..418186d345b 100644 --- a/models/apis/elasticfilesystem/2015-02-01/paginators-1.json +++ b/models/apis/elasticfilesystem/2015-02-01/paginators-1.json @@ -1,4 +1,24 @@ { "pagination": { + "DescribeAccessPoints": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, + "DescribeFileSystems": { + "input_token": "Marker", + "output_token": "NextMarker", + "limit_key": "MaxItems" + }, + "DescribeTags": { + "input_token": "Marker", + "output_token": "NextMarker", + "limit_key": "MaxItems" + }, + "ListTagsForResource": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + } } } diff --git a/models/apis/fms/2018-01-01/api-2.json b/models/apis/fms/2018-01-01/api-2.json index 02817723c15..724166c7a50 100644 --- a/models/apis/fms/2018-01-01/api-2.json +++ b/models/apis/fms/2018-01-01/api-2.json @@ -177,6 +177,21 @@ {"shape":"InternalErrorException"} ] }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagsForResourceRequest"}, + "output":{"shape":"ListTagsForResourceResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidOperationException"}, + {"shape":"InternalErrorException"}, + {"shape":"InvalidInputException"} + ] + }, "PutNotificationChannel":{ "name":"PutNotificationChannel", "http":{ @@ -206,6 +221,37 @@ {"shape":"InternalErrorException"}, {"shape":"InvalidTypeException"} ] + }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TagResourceRequest"}, + "output":{"shape":"TagResourceResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidOperationException"}, + {"shape":"InternalErrorException"}, + {"shape":"InvalidInputException"}, + {"shape":"LimitExceededException"} + ] + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UntagResourceRequest"}, + "output":{"shape":"UntagResourceResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidOperationException"}, + {"shape":"InternalErrorException"}, + {"shape":"InvalidInputException"} + ] } }, "shapes":{ @@ -470,6 +516,19 @@ "NextToken":{"shape":"PaginationToken"} } }, + "ListTagsForResourceRequest":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "ResourceArn":{"shape":"ResourceArn"} + } + }, + "ListTagsForResourceResponse":{ + "type":"structure", + "members":{ + "TagList":{"shape":"TagList"} + } + }, "ManagedServiceData":{ "type":"string", "max":1024, @@ -486,6 +545,7 @@ }, "PaginationToken":{ "type":"string", + "max":4096, "min":1, "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" }, @@ -590,7 +650,8 @@ "type":"structure", "required":["Policy"], "members":{ - "Policy":{"shape":"Policy"} + "Policy":{"shape":"Policy"}, + "TagList":{"shape":"TagList"} } }, "PutPolicyResponse":{ @@ -633,10 +694,21 @@ "type":"structure", "required":["Key"], "members":{ - "Key":{"shape":"TagKey"}, - "Value":{"shape":"TagValue"} + "Key":{"shape":"ResourceTagKey"}, + "Value":{"shape":"ResourceTagValue"} } }, + "ResourceTagKey":{ + 
"type":"string", + "max":128, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, + "ResourceTagValue":{ + "type":"string", + "max":256, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" + }, "ResourceTags":{ "type":"list", "member":{"shape":"ResourceTag"}, @@ -671,18 +743,74 @@ "SECURITY_GROUPS_USAGE_AUDIT" ] }, + "Tag":{ + "type":"structure", + "required":[ + "Key", + "Value" + ], + "members":{ + "Key":{"shape":"TagKey"}, + "Value":{"shape":"TagValue"} + } + }, "TagKey":{ "type":"string", "max":128, "min":1, "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" }, + "TagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":200, + "min":0 + }, + "TagList":{ + "type":"list", + "member":{"shape":"Tag"}, + "max":200, + "min":0 + }, + "TagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "TagList" + ], + "members":{ + "ResourceArn":{"shape":"ResourceArn"}, + "TagList":{"shape":"TagList"} + } + }, + "TagResourceResponse":{ + "type":"structure", + "members":{ + } + }, "TagValue":{ "type":"string", "max":256, + "min":0, "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-@]*)$" }, "TimeStamp":{"type":"timestamp"}, + "UntagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "TagKeys" + ], + "members":{ + "ResourceArn":{"shape":"ResourceArn"}, + "TagKeys":{"shape":"TagKeyList"} + } + }, + "UntagResourceResponse":{ + "type":"structure", + "members":{ + } + }, "ViolationReason":{ "type":"string", "enum":[ diff --git a/models/apis/fms/2018-01-01/docs-2.json b/models/apis/fms/2018-01-01/docs-2.json index c19da82717d..e9964821b6c 100644 --- a/models/apis/fms/2018-01-01/docs-2.json +++ b/models/apis/fms/2018-01-01/docs-2.json @@ -14,8 +14,11 @@ "ListComplianceStatus": "Returns an array of PolicyComplianceStatus
objects in the response. Use PolicyComplianceStatus
to get a summary of which member accounts are protected by the specified policy.
Returns a MemberAccounts
object that lists the member accounts in the administrator's AWS organization.
The ListMemberAccounts
must be submitted by the account that is set as the AWS Firewall Manager administrator.
Returns an array of PolicySummary
objects in the response.
Retrieves the list of tags for the specified AWS resource.
", "PutNotificationChannel": "Designates the IAM role and Amazon Simple Notification Service (SNS) topic that AWS Firewall Manager uses to record SNS logs.
", - "PutPolicy": "Creates an AWS Firewall Manager policy.
Firewall Manager provides the following types of policies:
A Shield Advanced policy, which applies Shield Advanced protection to specified accounts and resources
An AWS WAF policy, which contains a rule group and defines which resources are to be protected by that rule group
A security group policy, which manages VPC security groups across your AWS organization.
Each policy is specific to one of the three types. If you want to enforce more than one policy type across accounts, you can create multiple policies. You can create multiple policies for each type.
You must be subscribed to Shield Advanced to create a Shield Advanced policy. For more information about subscribing to Shield Advanced, see CreateSubscription.
" + "PutPolicy": "Creates an AWS Firewall Manager policy.
Firewall Manager provides the following types of policies:
A Shield Advanced policy, which applies Shield Advanced protection to specified accounts and resources
An AWS WAF policy, which contains a rule group and defines which resources are to be protected by that rule group
A security group policy, which manages VPC security groups across your AWS organization.
Each policy is specific to one of the three types. If you want to enforce more than one policy type across accounts, you can create multiple policies. You can create multiple policies for each type.
You must be subscribed to Shield Advanced to create a Shield Advanced policy. For more information about subscribing to Shield Advanced, see CreateSubscription.
", + "TagResource": "Adds one or more tags to an AWS resource.
", + "UntagResource": "Removes one or more tags from an AWS resource.
" }, "shapes": { "AWSAccountId": { @@ -254,6 +257,16 @@ "refs": { } }, + "ListTagsForResourceRequest": { + "base": null, + "refs": { + } + }, + "ListTagsForResourceResponse": { + "base": null, + "refs": { + } + }, "ManagedServiceData": { "base": null, "refs": { @@ -379,10 +392,13 @@ "GetNotificationChannelResponse$SnsTopicArn": "The SNS topic that records AWS Firewall Manager activity.
", "GetNotificationChannelResponse$SnsRoleName": "The IAM role that is used by AWS Firewall Manager to record activity to SNS.
", "GetPolicyResponse$PolicyArn": "The Amazon Resource Name (ARN) of the specified policy.
", + "ListTagsForResourceRequest$ResourceArn": "The Amazon Resource Name (ARN) of the resource to return tags for. The Firewall Manager policy is the only AWS resource that supports tagging, so this ARN is a policy ARN..
", "PolicySummary$PolicyArn": "The Amazon Resource Name (ARN) of the specified policy.
", "PutNotificationChannelRequest$SnsTopicArn": "The Amazon Resource Name (ARN) of the SNS topic that collects notifications from AWS Firewall Manager.
", "PutNotificationChannelRequest$SnsRoleName": "The Amazon Resource Name (ARN) of the IAM role that allows Amazon SNS to record AWS Firewall Manager activity.
", - "PutPolicyResponse$PolicyArn": "The Amazon Resource Name (ARN) of the policy that was created.
" + "PutPolicyResponse$PolicyArn": "The Amazon Resource Name (ARN) of the policy that was created.
", + "TagResourceRequest$ResourceArn": "The Amazon Resource Name (ARN) of the resource. The Firewall Manager policy is the only AWS resource that supports tagging, so this ARN is a policy ARN.
", + "UntagResourceRequest$ResourceArn": "The Amazon Resource Name (ARN) of the resource. The Firewall Manager policy is the only AWS resource that supports tagging, so this ARN is a policy ARN.
" } }, "ResourceCount": { @@ -416,6 +432,18 @@ "ResourceTags$member": null } }, + "ResourceTagKey": { + "base": null, + "refs": { + "ResourceTag$Key": "The resource tag key.
" + } + }, + "ResourceTagValue": { + "base": null, + "refs": { + "ResourceTag$Value": "The resource tag value.
" + } + }, "ResourceTags": { "base": null, "refs": { @@ -451,16 +479,47 @@ "SecurityServicePolicyData$Type": "The service that the policy is using to protect the resources. This specifies the type of policy that is created, either an AWS WAF policy, a Shield Advanced policy, or a security group policy. For security group policies, Firewall Manager supports one security group for each common policy and for each content audit policy. This is an adjustable limit that you can increase by contacting AWS Support.
" } }, + "Tag": { + "base": "A collection of key:value pairs associated with an AWS resource. The key:value pair can be anything you define. Typically, the tag key represents a category (such as \"environment\") and the tag value represents a specific value within that category (such as \"test,\" \"development,\" or \"production\"). You can add up to 50 tags to each AWS resource.
", + "refs": { + "TagList$member": null + } + }, "TagKey": { "base": null, "refs": { - "ResourceTag$Key": "The resource tag key.
" + "Tag$Key": "Part of the key:value pair that defines a tag. You can use a tag key to describe a category of information, such as \"customer.\" Tag keys are case-sensitive.
", + "TagKeyList$member": null + } + }, + "TagKeyList": { + "base": null, + "refs": { + "UntagResourceRequest$TagKeys": "The keys of the tags to remove from the resource.
" + } + }, + "TagList": { + "base": null, + "refs": { + "ListTagsForResourceResponse$TagList": "The tags associated with the resource.
", + "PutPolicyRequest$TagList": "The tags to add to the AWS resource.
", + "TagResourceRequest$TagList": "The tags to add to the resource.
" + } + }, + "TagResourceRequest": { + "base": null, + "refs": { + } + }, + "TagResourceResponse": { + "base": null, + "refs": { } }, "TagValue": { "base": null, "refs": { - "ResourceTag$Value": "The resource tag value.
" + "Tag$Value": "Part of the key:value pair that defines a tag. You can use a tag value to describe a specific value within a category, such as \"companyA\" or \"companyB.\" Tag values are case-sensitive.
" } }, "TimeStamp": { @@ -472,6 +531,16 @@ "PolicyComplianceStatus$LastUpdated": "Timestamp of the last update to the EvaluationResult
objects.
Amazon FSx is a fully managed service that makes it easy for storage and application administrators to launch and use shared file storage.
", "operations": { + "CancelDataRepositoryTask": "Cancels an existing Amazon FSx for Lustre data repository task if that task is in either the PENDING
or EXECUTING
state. When you cancel a task, Amazon FSx does the following.
Any files that FSx has already exported are not reverted.
FSx continues to export any files that are \"in-flight\" when the cancel operation is received.
FSx does not export any files that have not yet been exported.
Creates a backup of an existing Amazon FSx for Windows File Server file system. Creating regular backups for your file system is a best practice that complements the replication that Amazon FSx for Windows File Server performs for your file system. It also enables you to restore from user modification of data.
If a backup with the specified client request token exists, and the parameters match, this operation returns the description of the existing backup. If a backup specified client request token exists, and the parameters don't match, this operation returns IncompatibleParameterError
. If a backup with the specified client request token doesn't exist, CreateBackup
does the following:
Creates a new Amazon FSx backup with an assigned ID, and an initial lifecycle state of CREATING
.
Returns the description of the backup.
By using the idempotent operation, you can retry a CreateBackup
operation without the risk of creating an extra backup. This approach can be useful when an initial call fails in a way that makes it unclear whether a backup was created. If you use the same client request token and the initial call created a backup, the operation returns a successful result because all the parameters are the same.
The CreateFileSystem
operation returns while the backup's lifecycle state is still CREATING
. You can check the file system creation status by calling the DescribeBackups operation, which returns the backup state along with other information.
Creates an Amazon FSx for Lustre data repository task. You use data repository tasks to perform bulk operations between your Amazon FSx file system and its linked data repository. An example of a data repository task is exporting any data and metadata changes, including POSIX metadata, to files, directories, and symbolic links (symlinks) from your FSx file system to its linked data repository. A CreateDataRepositoryTask
operation will fail if a data repository is not linked to the FSx file system. To learn more about data repository tasks, see Using Data Repository Tasks. To learn more about linking a data repository to your file system, see Step 1: Create Your Amazon FSx for Lustre File System.
Creates a new, empty Amazon FSx file system.
If a file system with the specified client request token exists and the parameters match, CreateFileSystem
returns the description of the existing file system. If a file system specified client request token exists and the parameters don't match, this call returns IncompatibleParameterError
. If a file system with the specified client request token doesn't exist, CreateFileSystem
does the following:
Creates a new, empty Amazon FSx file system with an assigned ID, and an initial lifecycle state of CREATING
.
Returns the description of the file system.
This operation requires a client request token in the request that Amazon FSx uses to ensure idempotent creation. This means that calling the operation multiple times with the same client request token has no effect. By using the idempotent operation, you can retry a CreateFileSystem
operation without the risk of creating an extra file system. This approach can be useful when an initial call fails in a way that makes it unclear whether a file system was created. Examples are if a transport level timeout occurred, or your connection was reset. If you use the same client request token and the initial call created a file system, the client receives success as long as the parameters are the same.
The CreateFileSystem
call returns while the file system's lifecycle state is still CREATING
. You can check the file-system creation status by calling the DescribeFileSystems operation, which returns the file system state along with other information.
Creates a new Amazon FSx file system from an existing Amazon FSx for Windows File Server backup.
If a file system with the specified client request token exists and the parameters match, this operation returns the description of the file system. If a client request token specified by the file system exists and the parameters don't match, this call returns IncompatibleParameterError
. If a file system with the specified client request token doesn't exist, this operation does the following:
Creates a new Amazon FSx file system from backup with an assigned ID, and an initial lifecycle state of CREATING
.
Returns the description of the file system.
Parameters like Active Directory, default share name, automatic backup, and backup settings default to the parameters of the file system that was backed up, unless overridden. You can explicitly supply other settings.
By using the idempotent operation, you can retry a CreateFileSystemFromBackup
call without the risk of creating an extra file system. This approach can be useful when an initial call fails in a way that makes it unclear whether a file system was created. Examples are if a transport level timeout occurred, or your connection was reset. If you use the same client request token and the initial call created a file system, the client receives success as long as the parameters are the same.
The CreateFileSystemFromBackup
call returns while the file system's lifecycle state is still CREATING
. You can check the file-system creation status by calling the DescribeFileSystems operation, which returns the file system state along with other information.
Deletes an Amazon FSx for Windows File Server backup, deleting its contents. After deletion, the backup no longer exists, and its data is gone.
The DeleteBackup
call returns instantly. The backup will not show up in later DescribeBackups
calls.
The data in a deleted backup is also deleted and can't be recovered by any means.
Deletes a file system, deleting its contents. After deletion, the file system no longer exists, and its data is gone. Any existing automatic backups will also be deleted.
By default, when you delete an Amazon FSx for Windows File Server file system, a final backup is created upon deletion. This final backup is not subject to the file system's retention policy, and must be manually deleted.
The DeleteFileSystem
action returns while the file system has the DELETING
status. You can check the file system deletion status by calling the DescribeFileSystems action, which returns a list of file systems in your account. If you pass the file system ID for a deleted file system, the DescribeFileSystems returns a FileSystemNotFound
error.
The data in a deleted file system is also deleted and can't be recovered by any means.
Deletes a file system, deleting its contents. After deletion, the file system no longer exists, and its data is gone. Any existing automatic backups will also be deleted.
By default, when you delete an Amazon FSx for Windows File Server file system, a final backup is created upon deletion. This final backup is not subject to the file system's retention policy, and must be manually deleted.
The DeleteFileSystem
action returns while the file system has the DELETING
status. You can check the file system deletion status by calling the DescribeFileSystems action, which returns a list of file systems in your account. If you pass the file system ID for a deleted file system, the DescribeFileSystems returns a FileSystemNotFound
error.
Deleting an Amazon FSx for Lustre file system will fail with a 400 BadRequest if a data repository task is in a PENDING
or EXECUTING
state.
The data in a deleted file system is also deleted and can't be recovered by any means.
Returns the description of specific Amazon FSx for Windows File Server backups, if a BackupIds
value is provided for that backup. Otherwise, it returns all backups owned by your AWS account in the AWS Region of the endpoint that you're calling.
When retrieving all backups, you can optionally specify the MaxResults
parameter to limit the number of backups in a response. If more backups remain, Amazon FSx returns a NextToken
value in the response. In this case, send a later request with the NextToken
request parameter set to the value of NextToken
from the last response.
This action is used in an iterative process to retrieve a list of your backups. DescribeBackups
is called first without a NextToken
value. Then the action continues to be called with the NextToken
parameter set to the value of the last NextToken
value until a response has no NextToken
.
When using this action, keep the following in mind:
The implementation might return fewer than MaxResults
file system descriptions while still including a NextToken
value.
The order of backups returned in the response of one DescribeBackups
call and the order of backups returned across the responses of a multi-call iteration is unspecified.
Returns the description of specific Amazon FSx for Lustre data repository tasks, if one or more TaskIds
values are provided in the request, or if filters are used in the request. You can use filters to narrow the response to include just tasks for specific file systems, or tasks in a specific lifecycle state. Otherwise, it returns all data repository tasks owned by your AWS account in the AWS Region of the endpoint that you're calling.
When retrieving all tasks, you can paginate the response by using the optional MaxResults
parameter to limit the number of tasks returned in a response. If more tasks remain, Amazon FSx returns a NextToken
value in the response. In this case, send a later request with the NextToken
request parameter set to the value of NextToken
from the last response.
Returns the description of specific Amazon FSx file systems, if a FileSystemIds
value is provided for that file system. Otherwise, it returns descriptions of all file systems owned by your AWS account in the AWS Region of the endpoint that you're calling.
When retrieving all file system descriptions, you can optionally specify the MaxResults
parameter to limit the number of descriptions in a response. If more file system descriptions remain, Amazon FSx returns a NextToken
value in the response. In this case, send a later request with the NextToken
request parameter set to the value of NextToken
from the last response.
This action is used in an iterative process to retrieve a list of your file system descriptions. DescribeFileSystems
is called first without a NextToken
value. Then the action continues to be called with the NextToken
parameter set to the value of the last NextToken
value until a response has no NextToken
.
When using this action, keep the following in mind:
The implementation might return fewer than MaxResults
file system descriptions while still including a NextToken
value.
The order of file systems returned in the response of one DescribeFileSystems
call and the order of file systems returned across the responses of a multicall iteration is unspecified.
Lists tags for an Amazon FSx file systems and backups in the case of Amazon FSx for Windows File Server.
When retrieving all tags, you can optionally specify the MaxResults
parameter to limit the number of tags in a response. If more tags remain, Amazon FSx returns a NextToken
value in the response. In this case, send a later request with the NextToken
request parameter set to the value of NextToken
from the last response.
This action is used in an iterative process to retrieve a list of your tags. ListTagsForResource
is called first without a NextToken
value. Then the action continues to be called with the NextToken
parameter set to the value of the last NextToken
value until a response has no NextToken
.
When using this action, keep the following in mind:
The implementation might return fewer than MaxResults
file system descriptions while still including a NextToken
value.
The order of tags returned in the response of one ListTagsForResource
call and the order of tags returned across the responses of a multi-call iteration is unspecified.
Tags an Amazon FSx resource.
", @@ -49,6 +52,7 @@ "ArchivePath": { "base": null, "refs": { + "CompletionReport$Path": "Required if Enabled
is set to true
. Specifies the location of the report on the file system's linked S3 data repository. An absolute path that defines where the completion report will be stored in the destination location. The Path
you provide must be located within the file system’s ExportPath. An example Path
value is \"s3://myBucket/myExportPath/optionalPrefix\". The report provides the following information for each file in the report: FilePath, FileStatus, and ErrorCode. To learn more about a file system's ExportPath
, see .
(Optional) The path to the Amazon S3 bucket (including the optional prefix) that you're using as the data repository for your Amazon FSx for Lustre file system. The root of your FSx for Lustre file system will be mapped to the root of the Amazon S3 bucket you select. An example is s3://import-bucket/optional-prefix
. If you specify a prefix after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
(Optional) The path in Amazon S3 where the root of your Amazon FSx file system is exported. The path must use the same Amazon S3 bucket as specified in ImportPath. You can provide an optional prefix to which new and changed data is to be exported from your Amazon FSx for Lustre file system. If an ExportPath
value is not provided, Amazon FSx sets a default export path, s3://import-bucket/FSxLustre[creation-timestamp]
. The timestamp is in UTC format, for example s3://import-bucket/FSxLustre20181105T222312Z
.
The Amazon S3 export bucket must be the same as the import bucket specified by ImportPath
. If you only specify a bucket name, such as s3://import-bucket
, you get a 1:1 mapping of file system objects to S3 bucket objects. This mapping means that the input data in S3 is overwritten on export. If you provide a custom prefix in the export path, such as s3://import-bucket/[custom-optional-prefix]
, Amazon FSx exports the contents of your file system to that export prefix in the Amazon S3 bucket.
The import path to the Amazon S3 bucket (and optional prefix) that you're using as the data repository for your FSx for Lustre file system, for example s3://import-bucket/optional-prefix
. If a prefix is specified after the Amazon S3 bucket name, only object keys with that prefix are loaded into the file system.
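Taken together, ImportPath and ExportPath are two fields on the Lustre configuration at file system creation. A minimal sketch, reusing the imports and client setup from the pagination sketch above; the bucket, prefix, subnet ID, and capacity are placeholder values, and the enum and shape names assume the usual generated form:

```go
req := svc.CreateFileSystemRequest(&fsx.CreateFileSystemInput{
	FileSystemType:  fsx.FileSystemTypeLustre,
	StorageCapacity: aws.Int64(3600),                    // GiB; placeholder
	SubnetIds:       []string{"subnet-0123456789abcdef0"}, // placeholder
	LustreConfiguration: &fsx.CreateFileSystemLustreConfiguration{
		// Only object keys under the prefix are loaded into the file system.
		ImportPath: aws.String("s3://import-bucket/optional-prefix"),
		// Must use the same bucket as ImportPath; when omitted, FSx defaults
		// to s3://import-bucket/FSxLustre[creation-timestamp].
		ExportPath: aws.String("s3://import-bucket/optional-prefix"),
	},
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
fmt.Println(*resp.FileSystem.FileSystemId)
```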
Cancels a data repository task.
", + "refs": { + } + }, + "CancelDataRepositoryTaskResponse": { + "base": null, + "refs": { + } + }, "ClientRequestToken": { "base": "(Optional) An idempotency token for resource creation, in a string of up to 64 ASCII characters. This token is automatically filled on your behalf when you use the AWS Command Line Interface (AWS CLI) or an AWS SDK.
", "refs": { "CreateBackupRequest$ClientRequestToken": "(Optional) A string of up to 64 ASCII characters that Amazon FSx uses to ensure idempotent creation. This string is automatically filled on your behalf when you use the AWS Command Line Interface (AWS CLI) or an AWS SDK.
", + "CreateDataRepositoryTaskRequest$ClientRequestToken": null, "CreateFileSystemFromBackupRequest$ClientRequestToken": "(Optional) A string of up to 64 ASCII characters that Amazon FSx uses to ensure idempotent creation. This string is automatically filled on your behalf when you use the AWS Command Line Interface (AWS CLI) or an AWS SDK.
", "CreateFileSystemRequest$ClientRequestToken": "(Optional) A string of up to 64 ASCII characters that Amazon FSx uses to ensure idempotent creation. This string is automatically filled on your behalf when you use the AWS Command Line Interface (AWS CLI) or an AWS SDK.
", "DeleteBackupRequest$ClientRequestToken": "(Optional) A string of up to 64 ASCII characters that Amazon FSx uses to ensure idempotent deletion. This is automatically filled on your behalf when using the AWS CLI or SDK.
", @@ -143,6 +158,13 @@ "UpdateFileSystemRequest$ClientRequestToken": "(Optional) A string of up to 64 ASCII characters that Amazon FSx uses to ensure idempotent updates. This string is automatically filled on your behalf when you use the AWS Command Line Interface (AWS CLI) or an AWS SDK.
" } }, + "CompletionReport": { + "base": "Provides a report detailing the data repository task results of the files processed that match the criteria specified in the report Scope
parameter. FSx delivers the report to the file system's linked data repository in Amazon S3, using the path specified in the report Path
parameter. You can specify whether or not a report gets generated for a task using the Enabled
parameter.
Defines whether or not Amazon FSx provides a CompletionReport once the task has completed. A CompletionReport provides a detailed report on the files that Amazon FSx processed that meet the criteria specified by the Scope
parameter.
The request object for the CreateBackup
operation.
The request object for the CreateFileSystemFromBackup
operation.
The time that the resource was created, in seconds (since 1970-01-01T00:00:00Z), also known as Unix time.
", "refs": { "Backup$CreationTime": "The time when a particular backup was created.
", + "DataRepositoryTask$CreationTime": null, "FileSystem$CreationTime": "The time that the file system was created, in seconds (since 1970-01-01T00:00:00Z), also known as Unix time.
" } }, @@ -214,6 +247,103 @@ "LustreFileSystemConfiguration$DataRepositoryConfiguration": null } }, + "DataRepositoryTask": { + "base": "A description of the data repository task. You use data repository tasks to perform bulk transfer operations between your Amazon FSx file system and its linked data repository.
", + "refs": { + "CreateDataRepositoryTaskResponse$DataRepositoryTask": "The description of the data repository task that you just created.
", + "DataRepositoryTasks$member": null + } + }, + "DataRepositoryTaskEnded": { + "base": "The data repository task could not be canceled because the task has already ended.
", + "refs": { + } + }, + "DataRepositoryTaskExecuting": { + "base": "An existing data repository task is currently executing on the file system. Wait until the existing task has completed, then create the new task.
", + "refs": { + } + }, + "DataRepositoryTaskFailureDetails": { + "base": "Provides information about why a data repository task failed. Only populated when the task Lifecycle
is set to FAILED
.
Failure message describing why the task failed. It is populated only when Lifecycle
is set to FAILED
.
(Optional) An array of filter objects that you can use to narrow the list of data repository tasks returned in the response. You can filter the tasks by one or more file system IDs, task lifecycles, and task type. A filter object consists of a filter Name
, and one or more Values
for the filter.
Name of the task property to use in filtering the tasks returned in the response.
Use file-system-id
to retrieve data repository tasks for specific file systems.
Use task-lifecycle
to retrieve data repository tasks with one or more specific lifecycle states, as follows: CANCELED, EXECUTING, FAILED, PENDING, and SUCCEEDED.
Use Values to include the specific file system IDs and task lifecycle states for the filters you are using.
" + } + }, + "DataRepositoryTaskFilters": { + "base": null, + "refs": { + "DescribeDataRepositoryTasksRequest$Filters": "(Optional) You can use filters to narrow the DescribeDataRepositoryTasks
response to include just tasks for specific file systems, or tasks in a specific lifecycle state.
The lifecycle status of the data repository task, as follows:
PENDING
- Amazon FSx has not started the task.
EXECUTING
- Amazon FSx is processing the task.
FAILED
- Amazon FSx was not able to complete the task. For example, there may be files the task failed to process. The DataRepositoryTaskFailureDetails property provides more information about task failures.
SUCCEEDED
- FSx completed the task successfully.
CANCELED
- Amazon FSx canceled the task and it did not complete.
CANCELING
- FSx is in the process of canceling the task.
The lifecycle status of the data repository task, as follows:
PENDING
- Amazon FSx has not started the task.
EXECUTING
- Amazon FSx is processing the task.
FAILED
- Amazon FSx was not able to complete the task. For example, there may be files the task failed to process. The DataRepositoryTaskFailureDetails property provides more information about task failures.
SUCCEEDED
- FSx completed the task successfully.
CANCELED
- Amazon FSx canceled the task and it did not complete.
CANCELING
- FSx is in the process of canceling the task.
You cannot delete an FSx for Lustre file system if there are data repository tasks for the file system in the PENDING
or EXECUTING
states. Please retry when the data repository task is finished (with a status of CANCELED
, SUCCEEDED
, or FAILED
). You can use the DescribeDataRepositoryTasks action to monitor the task status. Contact the FSx team if you need to delete your file system immediately.
The data repository task or tasks you specified could not be found.
", + "refs": { + } + }, + "DataRepositoryTaskPath": { + "base": null, + "refs": { + "DataRepositoryTaskPaths$member": null + } + }, + "DataRepositoryTaskPaths": { + "base": null, + "refs": { + "CreateDataRepositoryTaskRequest$Paths": "(Optional) The path or paths on the Amazon FSx file system to use when the data repository task is processed. The default path is the file system root directory.
", + "DataRepositoryTask$Paths": "An array of paths on the Amazon FSx for Lustre file system that specify the data for the data repository task to process. For example, in an EXPORT_TO_REPOSITORY task, the paths specify which data to export to the linked data repository.
(Default) If Paths
is not specified, Amazon FSx uses the file system root directory.
Provides the task status showing a running total of the number of files to be processed, the number successfully processed, and the number of files the task failed to process.
", + "refs": { + "DataRepositoryTask$Status": "Provides the status of the number of files that the task has processed successfully and failed to process.
" + } + }, + "DataRepositoryTaskType": { + "base": null, + "refs": { + "CreateDataRepositoryTaskRequest$Type": "Specifies the type of data repository task to create.
", + "DataRepositoryTask$Type": "The type of data repository task; EXPORT_TO_REPOSITORY is the only type currently supported.
" + } + }, + "DataRepositoryTasks": { + "base": null, + "refs": { + "DescribeDataRepositoryTasksResponse$DataRepositoryTasks": "The collection of data repository task descriptions returned.
" + } + }, "DeleteBackupRequest": { "base": "The request object for DeleteBackup
operation.
The request object for DescribeFileSystems
operation.
A list of up to two IP addresses of DNS servers or domain controllers in the self-managed AD directory.
", - "SelfManagedActiveDirectoryConfiguration$DnsIps": "A list of up to two IP addresses of DNS servers or domain controllers in the self-managed AD directory. The IP addresses need to be either in the same VPC CIDR range as the one in which your Amazon FSx file system is being created, or in the private IP version 4 (Iv4) address ranges, as specified in RFC 1918:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
A list of up to two IP addresses of DNS servers or domain controllers in the self-managed AD directory. The IP addresses need to be either in the same VPC CIDR range as the one in which your Amazon FSx file system is being created, or in the private IP version 4 (IPv4) address ranges, as specified in RFC 1918:
10.0.0.0 - 10.255.255.255 (10/8 prefix)
172.16.0.0 - 172.31.255.255 (172.16/12 prefix)
192.168.0.0 - 192.168.255.255 (192.168/16 prefix)
A list of up to two IP addresses of DNS servers or domain controllers in the self-managed AD directory.
" } }, + "EndTime": { + "base": null, + "refs": { + "DataRepositoryTask$EndTime": "The time that Amazon FSx completed processing the task, populated after the task is complete.
" + } + }, "ErrorMessage": { "base": "A detailed error message.
", "refs": { @@ -307,6 +453,10 @@ "BackupNotFound$Message": null, "BackupRestoring$Message": null, "BadRequest$Message": null, + "DataRepositoryTaskEnded$Message": null, + "DataRepositoryTaskExecuting$Message": null, + "DataRepositoryTaskFailureDetails$Message": null, + "DataRepositoryTaskNotFound$Message": null, "FileSystemFailureDetails$Message": "A message describing any failures that occurred during file system creation.
", "FileSystemNotFound$Message": null, "IncompatibleParameterError$Message": null, @@ -322,6 +472,12 @@ "UnsupportedOperation$Message": null } }, + "FailedCount": { + "base": null, + "refs": { + "DataRepositoryTaskStatus$FailedCount": "A running total of the number of files that the task failed to process.
" + } + }, "FileSystem": { "base": "A description of a specific Amazon FSx file system.
", "refs": { @@ -350,6 +506,8 @@ "refs": { "BackupRestoring$FileSystemId": "The ID of a file system being restored from the backup.
", "CreateBackupRequest$FileSystemId": "The ID of the file system to back up.
", + "CreateDataRepositoryTaskRequest$FileSystemId": null, + "DataRepositoryTask$FileSystemId": null, "DeleteFileSystemRequest$FileSystemId": "The ID of the file system you want to delete.
", "DeleteFileSystemResponse$FileSystemId": "The ID of the file system being deleted.
", "FileSystem$FileSystemId": "The system-generated, unique 17-digit ID of the file system.
", @@ -433,6 +591,7 @@ "Flag": { "base": null, "refs": { + "CompletionReport$Enabled": "Set Enabled
to True
to generate a CompletionReport
when the task completes. If set to true
, then you need to provide a report Scope
, Path
, and Format
. Set Enabled
to False
if you do not want a CompletionReport
generated when the task completes.
A boolean flag indicating whether tags for the file system should be copied to backups. This value defaults to false. If it's set to true, all tags for the file system are copied to all automatic and user-initiated backups where the user doesn't specify tags. If this value is true, and you specify one or more tags, only the specified tags are copied to backups.
", "DeleteFileSystemWindowsConfiguration$SkipFinalBackup": "By default, Amazon FSx for Windows takes a final backup on your behalf when the DeleteFileSystem
operation is invoked. Doing this helps protect you from data loss, and we highly recommend taking the final backup. If you want to skip this backup, use this flag to do so.
A boolean flag indicating whether tags on the file system should be copied to backups. This value defaults to false. If it's set to true, all tags on the file system are copied to all automatic backups and any user-initiated backups where the user doesn't specify any tags. If this value is true, and you specify one or more tags, only the specified tags are copied to backups.
" @@ -471,11 +630,17 @@ } }, "KmsKeyId": { - "base": "The ID of your AWS Key Management Service (AWS KMS) key. This ID is used to encrypt the data in your file system at rest. For more information, see Encrypt in the AWS Key Management Service API Reference.
", + "base": "The ID of the AWS Key Management Service (AWS KMS) key used to encrypt the file system's data for an Amazon FSx for Windows File Server file system at rest. Amazon FSx for Lustre does not support KMS encryption. For more information, see Encrypt in the AWS Key Management Service API Reference.
", "refs": { - "Backup$KmsKeyId": "The ID of the AWS Key Management Service (AWS KMS) key used to encrypt this backup's data.
", + "Backup$KmsKeyId": "The ID of the AWS Key Management Service (AWS KMS) key used to encrypt this backup of the Amazon FSx for Windows file system's data at rest. Amazon FSx for Lustre does not support KMS encryption.
", "CreateFileSystemRequest$KmsKeyId": null, - "FileSystem$KmsKeyId": "The ID of the AWS Key Management Service (AWS KMS) key used to encrypt the file system's data for an Amazon FSx for Windows File Server file system.
" + "FileSystem$KmsKeyId": "The ID of the AWS Key Management Service (AWS KMS) key used to encrypt the file system's data for an Amazon FSx for Windows File Server file system. Amazon FSx for Lustre does not support KMS encryption.
" + } + }, + "LastUpdatedTime": { + "base": null, + "refs": { + "DataRepositoryTaskStatus$LastUpdatedTime": "The time at which the task status was last updated.
" } }, "ListTagsForResourceRequest": { @@ -498,6 +663,7 @@ "base": "The maximum number of resources to return in the response. This value must be an integer greater than zero.
", "refs": { "DescribeBackupsRequest$MaxResults": "(Optional) Maximum number of backups to return in the response (integer). This parameter value must be greater than 0. The number of items that Amazon FSx returns is the minimum of the MaxResults
parameter specified in the request and the service's internal maximum number of items per page.
(Optional) Maximum number of file systems to return in the response (integer). This parameter value must be greater than 0. The number of items that Amazon FSx returns is the minimum of the MaxResults
parameter specified in the request and the service's internal maximum number of items per page.
(Optional) Maximum number of tags to return in the response (integer). This parameter value must be greater than 0. The number of items that Amazon FSx returns is the minimum of the MaxResults
parameter specified in the request and the service's internal maximum number of items per page.
(Optional) Opaque pagination token returned from a previous DescribeBackups
operation (String). If a token is present, the action continues the list from where the previous call left off.
This is present if there are more backups than returned in the response (String). You can use the NextToken
value in a later request to fetch the backups.
(Optional) Opaque pagination token returned from a previous DescribeFileSystems
operation (String). If a token is present, the action continues the list from where the previous call left off.
Present if there are more file systems than returned in the response (String). You can use the NextToken
value in a later request to fetch the descriptions.
(Optional) Opaque pagination token returned from a previous ListTagsForResource
operation (String). If a token is present, the action continues the list from where the previous call left off.
Required if Enabled
is set to true
. Specifies the format of the CompletionReport
. REPORT_CSV_20191124
is the only format currently supported. When Format
is set to REPORT_CSV_20191124
, the CompletionReport
is provided in CSV format, and is delivered to {path}/task-{id}/failures.csv
.
Required if Enabled
is set to true
. Specifies the scope of the CompletionReport
; FAILED_FILES_ONLY
is the only scope currently supported. When Scope
is set to FAILED_FILES_ONLY
, the CompletionReport
only contains information about files that the data repository task failed to process.
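Format and Scope come together with Enabled and Path on the CompletionReport passed to CreateDataRepositoryTask. A minimal sketch, reusing the fsx client from the sketches above and assuming generated enum constants of roughly these names; the file system ID and report path are placeholders:

```go
req := svc.CreateDataRepositoryTaskRequest(&fsx.CreateDataRepositoryTaskInput{
	FileSystemId: aws.String("fs-0123456789abcdef0"), // placeholder
	Type:         fsx.DataRepositoryTaskTypeExportToRepository,
	// Paths defaults to the file system root directory when omitted.
	Paths: []string{"path/to/export"},
	Report: &fsx.CompletionReport{
		Enabled: aws.Bool(true),
		Scope:   fsx.ReportScopeFailedFilesOnly,    // only scope supported
		Format:  fsx.ReportFormatReportCsv20191124, // only format supported
		// Must lie within the file system's ExportPath.
		Path: aws.String("s3://myBucket/myExportPath/reports"),
	},
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
fmt.Println(*resp.DataRepositoryTask.TaskId)
```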
The Amazon Resource Name (ARN) for a given resource. ARNs uniquely identify AWS resources. We require an ARN when you need to specify a resource unambiguously across all of AWS. For more information, see Amazon Resource Names (ARNs) and AWS Service Namespaces in the AWS General Reference.
", "refs": { "Backup$ResourceARN": "The Amazon Resource Name (ARN) for the backup resource.
", + "DataRepositoryTask$ResourceARN": null, "FileSystem$ResourceARN": "The Amazon Resource Name (ARN) for the file system resource.
", "ListTagsForResourceRequest$ResourceARN": "The ARN of the Amazon FSx resource that will have its tags listed.
", "NotServiceResourceError$ResourceARN": "The Amazon Resource Name (ARN) of the non-Amazon FSx resource.
", @@ -634,6 +815,12 @@ "refs": { } }, + "StartTime": { + "base": null, + "refs": { + "DataRepositoryTask$StartTime": "The time that Amazon FSx began processing the task.
" + } + }, "StorageCapacity": { "base": "The storage capacity for your Amazon FSx file system, in gibibytes.
", "refs": { @@ -658,6 +845,12 @@ "FileSystem$SubnetIds": "The ID of the subnet to contain the endpoint for the file system. One and only one is supported. The file system is launched in the Availability Zone associated with this subnet.
" } }, + "SucceededCount": { + "base": null, + "refs": { + "DataRepositoryTaskStatus$SucceededCount": "A running total of the number of files that the task has successfully processed.
" + } + }, "Tag": { "base": "Specifies a key-value pair for a resource tag.
", "refs": { @@ -698,8 +891,10 @@ "refs": { "Backup$Tags": "Tags associated with a particular file system.
", "CreateBackupRequest$Tags": "The tags to apply to the backup at backup creation. The key value of the Name
tag appears in the console as the backup name.
The tags to be applied to the file system at file system creation. The key value of the Name
tag appears in the console as the file system name.
The tags to apply to the file system being created. The key value of the Name
tag appears in the console as the file system name.
A set of tags for your final backup.
", "DeleteFileSystemWindowsResponse$FinalBackupTags": "The set of tags applied to the final backup.
", "FileSystem$Tags": "The tags to associate with the file system. For more information, see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide.
", @@ -707,8 +902,29 @@ "TagResourceRequest$Tags": "A list of tags for the resource. If a tag with a given key already exists, the value is replaced by the one specified in this parameter.
" } }, + "TaskId": { + "base": null, + "refs": { + "CancelDataRepositoryTaskRequest$TaskId": "Specifies the data repository task to cancel.
", + "CancelDataRepositoryTaskResponse$TaskId": "The ID of the task being canceled.
", + "DataRepositoryTask$TaskId": "The system-generated, unique 17-digit ID of the data repository task.
", + "TaskIds$member": null + } + }, + "TaskIds": { + "base": null, + "refs": { + "DescribeDataRepositoryTasksRequest$TaskIds": "(Optional) IDs of the tasks whose descriptions you want to retrieve (String).
" + } + }, + "TotalCount": { + "base": null, + "refs": { + "DataRepositoryTaskStatus$TotalCount": "The total number of files that the task will process. While a task is executing, the sum of SucceededCount
plus FailedCount
may not equal TotalCount
. When the task is complete, TotalCount
equals the sum of SucceededCount
plus FailedCount
.
An error occured.
", + "base": "The requested operation is not supported for this resource or API.
", "refs": { } }, diff --git a/models/apis/fsx/2018-03-01/paginators-1.json b/models/apis/fsx/2018-03-01/paginators-1.json index 912e2f2de20..43df3a5f3a7 100644 --- a/models/apis/fsx/2018-03-01/paginators-1.json +++ b/models/apis/fsx/2018-03-01/paginators-1.json @@ -5,6 +5,11 @@ "output_token": "NextToken", "limit_key": "MaxResults" }, + "DescribeDataRepositoryTasks": { + "input_token": "NextToken", + "output_token": "NextToken", + "limit_key": "MaxResults" + }, "DescribeFileSystems": { "input_token": "NextToken", "output_token": "NextToken", diff --git a/models/apis/gamelift/2015-10-01/api-2.json b/models/apis/gamelift/2015-10-01/api-2.json index eb6edf7bba0..f4f73db5fa6 100644 --- a/models/apis/gamelift/2015-10-01/api-2.json +++ b/models/apis/gamelift/2015-10-01/api-2.json @@ -40,7 +40,8 @@ {"shape":"InvalidRequestException"}, {"shape":"ConflictException"}, {"shape":"InternalServiceException"}, - {"shape":"LimitExceededException"} + {"shape":"LimitExceededException"}, + {"shape":"TaggingFailedException"} ] }, "CreateBuild":{ @@ -55,6 +56,7 @@ {"shape":"UnauthorizedException"}, {"shape":"InvalidRequestException"}, {"shape":"ConflictException"}, + {"shape":"TaggingFailedException"}, {"shape":"InternalServiceException"} ] }, @@ -72,7 +74,8 @@ {"shape":"ConflictException"}, {"shape":"LimitExceededException"}, {"shape":"InvalidRequestException"}, - {"shape":"UnauthorizedException"} + {"shape":"UnauthorizedException"}, + {"shape":"TaggingFailedException"} ] }, "CreateGameSession":{ @@ -108,7 +111,8 @@ {"shape":"InternalServiceException"}, {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"}, - {"shape":"LimitExceededException"} + {"shape":"LimitExceededException"}, + {"shape":"TaggingFailedException"} ] }, "CreateMatchmakingConfiguration":{ @@ -124,7 +128,8 @@ {"shape":"LimitExceededException"}, {"shape":"NotFoundException"}, {"shape":"InternalServiceException"}, - {"shape":"UnsupportedRegionException"} + {"shape":"UnsupportedRegionException"}, + {"shape":"TaggingFailedException"} ] }, "CreateMatchmakingRuleSet":{ @@ -138,7 +143,8 @@ "errors":[ {"shape":"InvalidRequestException"}, {"shape":"InternalServiceException"}, - {"shape":"UnsupportedRegionException"} + {"shape":"UnsupportedRegionException"}, + {"shape":"TaggingFailedException"} ] }, "CreatePlayerSession":{ @@ -189,6 +195,7 @@ {"shape":"UnauthorizedException"}, {"shape":"InvalidRequestException"}, {"shape":"ConflictException"}, + {"shape":"TaggingFailedException"}, {"shape":"InternalServiceException"} ] }, @@ -233,6 +240,7 @@ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"}, {"shape":"InvalidRequestException"}, + {"shape":"TaggingFailedException"}, {"shape":"InternalServiceException"} ] }, @@ -247,6 +255,7 @@ {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"}, {"shape":"InternalServiceException"}, + {"shape":"TaggingFailedException"}, {"shape":"InvalidRequestException"} ] }, @@ -262,7 +271,8 @@ {"shape":"InternalServiceException"}, {"shape":"InvalidFleetStatusException"}, {"shape":"UnauthorizedException"}, - {"shape":"InvalidRequestException"} + {"shape":"InvalidRequestException"}, + {"shape":"TaggingFailedException"} ] }, "DeleteGameSessionQueue":{ @@ -277,7 +287,8 @@ {"shape":"InternalServiceException"}, {"shape":"InvalidRequestException"}, {"shape":"NotFoundException"}, - {"shape":"UnauthorizedException"} + {"shape":"UnauthorizedException"}, + {"shape":"TaggingFailedException"} ] }, "DeleteMatchmakingConfiguration":{ @@ -292,7 +303,8 @@ {"shape":"InvalidRequestException"}, 
{"shape":"NotFoundException"}, {"shape":"InternalServiceException"}, - {"shape":"UnsupportedRegionException"} + {"shape":"UnsupportedRegionException"}, + {"shape":"TaggingFailedException"} ] }, "DeleteMatchmakingRuleSet":{ @@ -307,7 +319,8 @@ {"shape":"InvalidRequestException"}, {"shape":"InternalServiceException"}, {"shape":"UnsupportedRegionException"}, - {"shape":"NotFoundException"} + {"shape":"NotFoundException"}, + {"shape":"TaggingFailedException"} ] }, "DeleteScalingPolicy":{ @@ -335,6 +348,7 @@ {"shape":"InvalidRequestException"}, {"shape":"UnauthorizedException"}, {"shape":"NotFoundException"}, + {"shape":"TaggingFailedException"}, {"shape":"InternalServiceException"} ] }, @@ -783,6 +797,21 @@ {"shape":"InternalServiceException"} ] }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagsForResourceRequest"}, + "output":{"shape":"ListTagsForResourceResponse"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"InvalidRequestException"}, + {"shape":"TaggingFailedException"}, + {"shape":"InternalServiceException"} + ] + }, "PutScalingPolicy":{ "name":"PutScalingPolicy", "http":{ @@ -950,6 +979,36 @@ {"shape":"UnsupportedRegionException"} ] }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TagResourceRequest"}, + "output":{"shape":"TagResourceResponse"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"InvalidRequestException"}, + {"shape":"TaggingFailedException"}, + {"shape":"InternalServiceException"} + ] + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UntagResourceRequest"}, + "output":{"shape":"UntagResourceResponse"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"InvalidRequestException"}, + {"shape":"TaggingFailedException"}, + {"shape":"InternalServiceException"} + ] + }, "UpdateAlias":{ "name":"UpdateAlias", "http":{ @@ -1167,12 +1226,17 @@ }, "AliasId":{ "type":"string", - "pattern":"^alias-\\S+" + "pattern":"^alias-\\S+|^arn:.*:alias\\/alias-\\S+" }, "AliasList":{ "type":"list", "member":{"shape":"Alias"} }, + "AmazonResourceName":{ + "type":"string", + "max":1011, + "min":1 + }, "ArnStringModel":{ "type":"string", "max":256, @@ -1209,6 +1273,7 @@ "type":"structure", "members":{ "BuildId":{"shape":"BuildId"}, + "BuildArn":{"shape":"BuildArn"}, "Name":{"shape":"FreeText"}, "Version":{"shape":"FreeText"}, "Status":{"shape":"BuildStatus"}, @@ -1217,9 +1282,13 @@ "CreationTime":{"shape":"Timestamp"} } }, + "BuildArn":{ + "type":"string", + "pattern":"^arn:.*:build\\/build-\\S+" + }, "BuildId":{ "type":"string", - "pattern":"^build-\\S+" + "pattern":"^build-\\S+|^arn:.*:build\\/build-\\S+" }, "BuildList":{ "type":"list", @@ -1272,7 +1341,8 @@ "members":{ "Name":{"shape":"NonBlankAndLengthConstraintString"}, "Description":{"shape":"NonZeroAndMaxString"}, - "RoutingStrategy":{"shape":"RoutingStrategy"} + "RoutingStrategy":{"shape":"RoutingStrategy"}, + "Tags":{"shape":"TagList"} } }, "CreateAliasOutput":{ @@ -1287,7 +1357,8 @@ "Name":{"shape":"NonZeroAndMaxString"}, "Version":{"shape":"NonZeroAndMaxString"}, "StorageLocation":{"shape":"S3Location"}, - "OperatingSystem":{"shape":"OperatingSystem"} + "OperatingSystem":{"shape":"OperatingSystem"}, + "Tags":{"shape":"TagList"} } }, "CreateBuildOutput":{ @@ -1322,7 +1393,8 @@ "PeerVpcId":{"shape":"NonZeroAndMaxString"}, "FleetType":{"shape":"FleetType"}, 
"InstanceRoleArn":{"shape":"NonEmptyString"}, - "CertificateConfiguration":{"shape":"CertificateConfiguration"} + "CertificateConfiguration":{"shape":"CertificateConfiguration"}, + "Tags":{"shape":"TagList"} } }, "CreateFleetOutput":{ @@ -1359,7 +1431,8 @@ "Name":{"shape":"GameSessionQueueName"}, "TimeoutInSeconds":{"shape":"WholeNumber"}, "PlayerLatencyPolicies":{"shape":"PlayerLatencyPolicyList"}, - "Destinations":{"shape":"GameSessionQueueDestinationList"} + "Destinations":{"shape":"GameSessionQueueDestinationList"}, + "Tags":{"shape":"TagList"} } }, "CreateGameSessionQueueOutput":{ @@ -1384,13 +1457,14 @@ "RequestTimeoutSeconds":{"shape":"MatchmakingRequestTimeoutInteger"}, "AcceptanceTimeoutSeconds":{"shape":"MatchmakingAcceptanceTimeoutInteger"}, "AcceptanceRequired":{"shape":"BooleanModel"}, - "RuleSetName":{"shape":"MatchmakingIdStringModel"}, + "RuleSetName":{"shape":"MatchmakingRuleSetName"}, "NotificationTarget":{"shape":"SnsArnStringModel"}, "AdditionalPlayerCount":{"shape":"WholeNumber"}, "CustomEventData":{"shape":"CustomEventData"}, "GameProperties":{"shape":"GamePropertyList"}, "GameSessionData":{"shape":"GameSessionData"}, - "BackfillMode":{"shape":"BackfillMode"} + "BackfillMode":{"shape":"BackfillMode"}, + "Tags":{"shape":"TagList"} } }, "CreateMatchmakingConfigurationOutput":{ @@ -1407,7 +1481,8 @@ ], "members":{ "Name":{"shape":"MatchmakingIdStringModel"}, - "RuleSetBody":{"shape":"RuleSetBody"} + "RuleSetBody":{"shape":"RuleSetBody"}, + "Tags":{"shape":"TagList"} } }, "CreateMatchmakingRuleSetOutput":{ @@ -1459,7 +1534,8 @@ "Name":{"shape":"NonZeroAndMaxString"}, "Version":{"shape":"NonZeroAndMaxString"}, "StorageLocation":{"shape":"S3Location"}, - "ZipFile":{"shape":"ZipBlob"} + "ZipFile":{"shape":"ZipBlob"}, + "Tags":{"shape":"TagList"} } }, "CreateScriptOutput":{ @@ -1545,7 +1621,7 @@ "type":"structure", "required":["Name"], "members":{ - "Name":{"shape":"MatchmakingIdStringModel"} + "Name":{"shape":"MatchmakingConfigurationName"} } }, "DeleteMatchmakingConfigurationOutput":{ @@ -1557,7 +1633,7 @@ "type":"structure", "required":["Name"], "members":{ - "Name":{"shape":"MatchmakingIdStringModel"} + "Name":{"shape":"MatchmakingRuleSetName"} } }, "DeleteMatchmakingRuleSetOutput":{ @@ -1813,8 +1889,8 @@ "DescribeMatchmakingConfigurationsInput":{ "type":"structure", "members":{ - "Names":{"shape":"MatchmakingIdList"}, - "RuleSetName":{"shape":"MatchmakingIdStringModel"}, + "Names":{"shape":"MatchmakingConfigurationNameList"}, + "RuleSetName":{"shape":"MatchmakingRuleSetName"}, "Limit":{"shape":"PositiveInteger"}, "NextToken":{"shape":"NonZeroAndMaxString"} } @@ -2116,7 +2192,9 @@ "TerminationTime":{"shape":"Timestamp"}, "Status":{"shape":"FleetStatus"}, "BuildId":{"shape":"BuildId"}, + "BuildArn":{"shape":"BuildArn"}, "ScriptId":{"shape":"ScriptId"}, + "ScriptArn":{"shape":"ScriptArn"}, "ServerLaunchPath":{"shape":"NonZeroAndMaxString"}, "ServerLaunchParameters":{"shape":"NonZeroAndMaxString"}, "LogPaths":{"shape":"StringList"}, @@ -2154,7 +2232,7 @@ }, "FleetId":{ "type":"string", - "pattern":"^fleet-\\S+" + "pattern":"^fleet-\\S+|^arn:.*:fleet\\/fleet-\\S+" }, "FleetIdList":{ "type":"list", @@ -2228,6 +2306,7 @@ "GameSessionId":{"shape":"NonZeroAndMaxString"}, "Name":{"shape":"NonZeroAndMaxString"}, "FleetId":{"shape":"FleetId"}, + "FleetArn":{"shape":"ArnStringModel"}, "CreationTime":{"shape":"Timestamp"}, "TerminationTime":{"shape":"Timestamp"}, "CurrentPlayerSessionCount":{"shape":"WholeNumber"}, @@ -2345,9 +2424,9 @@ }, "GameSessionQueueName":{ "type":"string", - 
"max":128, + "max":256, "min":1, - "pattern":"[a-zA-Z0-9-]+" + "pattern":"[a-zA-Z0-9-]+|^arn:.*:gamesessionqueue\\/[a-zA-Z0-9-]+" }, "GameSessionQueueNameList":{ "type":"list", @@ -2588,6 +2667,19 @@ "NextToken":{"shape":"NonEmptyString"} } }, + "ListTagsForResourceRequest":{ + "type":"structure", + "required":["ResourceARN"], + "members":{ + "ResourceARN":{"shape":"AmazonResourceName"} + } + }, + "ListTagsForResourceResponse":{ + "type":"structure", + "members":{ + "Tags":{"shape":"TagList"} + } + }, "MatchedPlayerSession":{ "type":"structure", "members":{ @@ -2613,12 +2705,14 @@ "type":"structure", "members":{ "Name":{"shape":"MatchmakingIdStringModel"}, + "ConfigurationArn":{"shape":"MatchmakingConfigurationArn"}, "Description":{"shape":"NonZeroAndMaxString"}, "GameSessionQueueArns":{"shape":"QueueArnsList"}, "RequestTimeoutSeconds":{"shape":"MatchmakingRequestTimeoutInteger"}, "AcceptanceTimeoutSeconds":{"shape":"MatchmakingAcceptanceTimeoutInteger"}, "AcceptanceRequired":{"shape":"BooleanModel"}, "RuleSetName":{"shape":"MatchmakingIdStringModel"}, + "RuleSetArn":{"shape":"MatchmakingRuleSetArn"}, "NotificationTarget":{"shape":"SnsArnStringModel"}, "AdditionalPlayerCount":{"shape":"WholeNumber"}, "CustomEventData":{"shape":"CustomEventData"}, @@ -2628,10 +2722,24 @@ "BackfillMode":{"shape":"BackfillMode"} } }, + "MatchmakingConfigurationArn":{ + "type":"string", + "pattern":"^arn:.*:matchmakingconfiguration\\/[a-zA-Z0-9-\\.]*" + }, "MatchmakingConfigurationList":{ "type":"list", "member":{"shape":"MatchmakingConfiguration"} }, + "MatchmakingConfigurationName":{ + "type":"string", + "max":256, + "min":1, + "pattern":"[a-zA-Z0-9-\\.]*|^arn:.*:matchmakingconfiguration\\/[a-zA-Z0-9-\\.]*" + }, + "MatchmakingConfigurationNameList":{ + "type":"list", + "member":{"shape":"MatchmakingConfigurationName"} + }, "MatchmakingConfigurationStatus":{ "type":"string", "enum":[ @@ -2664,17 +2772,28 @@ "required":["RuleSetBody"], "members":{ "RuleSetName":{"shape":"MatchmakingIdStringModel"}, + "RuleSetArn":{"shape":"MatchmakingRuleSetArn"}, "RuleSetBody":{"shape":"RuleSetBody"}, "CreationTime":{"shape":"Timestamp"} } }, + "MatchmakingRuleSetArn":{ + "type":"string", + "pattern":"^arn:.*:matchmakingruleset\\/[a-zA-Z0-9-\\.]*" + }, "MatchmakingRuleSetList":{ "type":"list", "member":{"shape":"MatchmakingRuleSet"} }, + "MatchmakingRuleSetName":{ + "type":"string", + "max":256, + "min":1, + "pattern":"[a-zA-Z0-9-\\.]*|^arn:.*:matchmakingruleset\\/[a-zA-Z0-9-\\.]*" + }, "MatchmakingRuleSetNameList":{ "type":"list", - "member":{"shape":"MatchmakingIdStringModel"}, + "member":{"shape":"MatchmakingRuleSetName"}, "max":10, "min":1 }, @@ -2683,6 +2802,7 @@ "members":{ "TicketId":{"shape":"MatchmakingIdStringModel"}, "ConfigurationName":{"shape":"MatchmakingIdStringModel"}, + "ConfigurationArn":{"shape":"MatchmakingConfigurationArn"}, "Status":{"shape":"MatchmakingConfigurationStatus"}, "StatusReason":{"shape":"StringModel"}, "StatusMessage":{"shape":"StringModel"}, @@ -2837,6 +2957,7 @@ "PlayerId":{"shape":"NonZeroAndMaxString"}, "GameSessionId":{"shape":"NonZeroAndMaxString"}, "FleetId":{"shape":"FleetId"}, + "FleetArn":{"shape":"ArnStringModel"}, "CreationTime":{"shape":"Timestamp"}, "TerminationTime":{"shape":"Timestamp"}, "Status":{"shape":"PlayerSessionStatus"}, @@ -2951,7 +3072,8 @@ "ResolveAliasOutput":{ "type":"structure", "members":{ - "FleetId":{"shape":"FleetId"} + "FleetId":{"shape":"FleetId"}, + "FleetArn":{"shape":"ArnStringModel"} } }, "ResourceCreationLimitPolicy":{ @@ -3047,6 +3169,7 @@ 
"type":"structure", "members":{ "ScriptId":{"shape":"ScriptId"}, + "ScriptArn":{"shape":"ScriptArn"}, "Name":{"shape":"NonZeroAndMaxString"}, "Version":{"shape":"NonZeroAndMaxString"}, "SizeOnDisk":{"shape":"PositiveLong"}, @@ -3054,9 +3177,13 @@ "StorageLocation":{"shape":"S3Location"} } }, + "ScriptArn":{ + "type":"string", + "pattern":"^arn:.*:script\\/script-\\S+" + }, "ScriptId":{ "type":"string", - "pattern":"^script-\\S+|^arn:.*script-\\S+" + "pattern":"^script-\\S+|^arn:.*:script\\/script-\\S+" }, "ScriptList":{ "type":"list", @@ -3153,7 +3280,7 @@ ], "members":{ "TicketId":{"shape":"MatchmakingIdStringModel"}, - "ConfigurationName":{"shape":"MatchmakingIdStringModel"}, + "ConfigurationName":{"shape":"MatchmakingConfigurationName"}, "GameSessionArn":{"shape":"ArnStringModel"}, "Players":{"shape":"PlayerList"} } @@ -3172,7 +3299,7 @@ ], "members":{ "TicketId":{"shape":"MatchmakingIdStringModel"}, - "ConfigurationName":{"shape":"MatchmakingIdStringModel"}, + "ConfigurationName":{"shape":"MatchmakingConfigurationName"}, "Players":{"shape":"PlayerList"} } }, @@ -3233,6 +3360,62 @@ "member":{"shape":"NonZeroAndMaxString"} }, "StringModel":{"type":"string"}, + "Tag":{ + "type":"structure", + "required":[ + "Key", + "Value" + ], + "members":{ + "Key":{"shape":"TagKey"}, + "Value":{"shape":"TagValue"} + } + }, + "TagKey":{ + "type":"string", + "max":128, + "min":1 + }, + "TagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":200, + "min":0 + }, + "TagList":{ + "type":"list", + "member":{"shape":"Tag"}, + "max":200, + "min":0 + }, + "TagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceARN", + "Tags" + ], + "members":{ + "ResourceARN":{"shape":"AmazonResourceName"}, + "Tags":{"shape":"TagList"} + } + }, + "TagResourceResponse":{ + "type":"structure", + "members":{ + } + }, + "TagValue":{ + "type":"string", + "max":256, + "min":0 + }, + "TaggingFailedException":{ + "type":"structure", + "members":{ + "Message":{"shape":"NonEmptyString"} + }, + "exception":true + }, "TargetConfiguration":{ "type":"structure", "required":["TargetValue"], @@ -3262,6 +3445,22 @@ }, "exception":true }, + "UntagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceARN", + "TagKeys" + ], + "members":{ + "ResourceARN":{"shape":"AmazonResourceName"}, + "TagKeys":{"shape":"TagKeyList"} + } + }, + "UntagResourceResponse":{ + "type":"structure", + "members":{ + } + }, "UpdateAliasInput":{ "type":"structure", "required":["AliasId"], @@ -3379,13 +3578,13 @@ "type":"structure", "required":["Name"], "members":{ - "Name":{"shape":"MatchmakingIdStringModel"}, + "Name":{"shape":"MatchmakingConfigurationName"}, "Description":{"shape":"NonZeroAndMaxString"}, "GameSessionQueueArns":{"shape":"QueueArnsList"}, "RequestTimeoutSeconds":{"shape":"MatchmakingRequestTimeoutInteger"}, "AcceptanceTimeoutSeconds":{"shape":"MatchmakingAcceptanceTimeoutInteger"}, "AcceptanceRequired":{"shape":"BooleanModel"}, - "RuleSetName":{"shape":"MatchmakingIdStringModel"}, + "RuleSetName":{"shape":"MatchmakingRuleSetName"}, "NotificationTarget":{"shape":"SnsArnStringModel"}, "AdditionalPlayerCount":{"shape":"WholeNumber"}, "CustomEventData":{"shape":"CustomEventData"}, @@ -3465,6 +3664,7 @@ "type":"structure", "members":{ "FleetId":{"shape":"FleetId"}, + "FleetArn":{"shape":"ArnStringModel"}, "IpV4CidrBlock":{"shape":"NonZeroAndMaxString"}, "VpcPeeringConnectionId":{"shape":"NonZeroAndMaxString"}, "Status":{"shape":"VpcPeeringConnectionStatus"}, diff --git a/models/apis/gamelift/2015-10-01/docs-2.json 
b/models/apis/gamelift/2015-10-01/docs-2.json index 0667e2ad500..8ee9551ddd4 100644 --- a/models/apis/gamelift/2015-10-01/docs-2.json +++ b/models/apis/gamelift/2015-10-01/docs-2.json @@ -3,21 +3,21 @@ "service": "Amazon GameLift is a managed service for developers who need a scalable, dedicated server solution for their multiplayer games. Use Amazon GameLift for these tasks: (1) set up computing resources and deploy your game servers, (2) run game sessions and get players into games, (3) automatically scale your resources to meet player demand and manage costs, and (4) track in-depth metrics on game server performance and player usage.
When setting up hosting resources, you can deploy your custom game server or use the Amazon GameLift Realtime Servers. Realtime Servers gives you the ability to quickly stand up lightweight, efficient game servers with the core Amazon GameLift infrastructure already built in.
Get Amazon GameLift Tools and Resources
This reference guide describes the low-level service API for Amazon GameLift and provides links to language-specific SDK reference topics. See also Amazon GameLift Tools and Resources.
API Summary
The Amazon GameLift service API includes two key sets of actions:
Manage game sessions and player access -- Integrate this functionality into game client services in order to create new game sessions, retrieve information on existing game sessions; reserve a player slot in a game session, request matchmaking, etc.
Configure and manage game server resources -- Manage your Amazon GameLift hosting resources, including builds, scripts, fleets, queues, and aliases. Set up matchmakers, configure auto-scaling, retrieve game logs, and get hosting and game metrics.
Task-based list of API actions
", "operations": { "AcceptMatch": "Registers a player's acceptance or rejection of a proposed FlexMatch match. A matchmaking configuration may require player acceptance; if so, then matches built with that configuration cannot be completed unless all players accept the proposed match within a specified time limit.
When FlexMatch builds a match, all the matchmaking tickets involved in the proposed match are placed into status REQUIRES_ACCEPTANCE
. This is a trigger for your game to get acceptance from all players in the ticket. Acceptances are only valid for tickets when they are in this status; all other acceptances result in an error.
To register acceptance, specify the ticket ID, a response, and one or more players. Once all players have registered acceptance, the matchmaking tickets advance to status PLACING
, where a new game session is created for the match.
If any player rejects the match, or if acceptances are not received before a specified timeout, the proposed match is dropped. The matchmaking tickets are then handled in one of two ways: For tickets where one or more players rejected the match, the ticket status is returned to SEARCHING
to find a new match. For tickets where one or more players failed to respond, the ticket status is set to CANCELLED
, and processing is terminated. A new matchmaking request for these players can be submitted as needed.
Learn more
Add FlexMatch to a Game Client
Related operations
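As a rough illustration of the flow just described -- ticket ID, a response, and one or more players -- an AcceptMatch call might look like the following minimal sketch, assuming the same request/Send pattern and the github.com/aws/aws-sdk-go-v2/service/gamelift package; the IDs are placeholders:

```go
svc := gamelift.New(cfg)
req := svc.AcceptMatchRequest(&gamelift.AcceptMatchInput{
	TicketId:       aws.String("ticket-1234"),        // placeholder
	PlayerIds:      []string{"player-1", "player-2"}, // placeholder
	AcceptanceType: gamelift.AcceptanceTypeAccept,    // or AcceptanceTypeReject
})
if _, err := req.Send(context.Background()); err != nil {
	// Fails if, for example, the ticket has left REQUIRES_ACCEPTANCE.
	log.Fatal(err)
}
```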
", - "CreateAlias": "Creates an alias for a fleet. In most situations, you can use an alias ID in place of a fleet ID. By using a fleet alias instead of a specific fleet ID, you can switch gameplay and players to a new fleet without changing your game client or other game components. For example, for games in production, using an alias allows you to seamlessly redirect your player base to a new game server update.
Amazon GameLift supports two types of routing strategies for aliases: simple and terminal. A simple alias points to an active fleet. A terminal alias is used to display messaging or link to a URL instead of routing players to an active fleet. For example, you might use a terminal alias when a game version is no longer supported and you want to direct players to an upgrade site.
To create a fleet alias, specify an alias name, routing strategy, and optional description. Each simple alias can point to only one fleet, but a fleet can have multiple aliases. If successful, a new alias record is returned, including an alias ID, which you can reference when creating a game session. You can reassign an alias to another fleet by calling UpdateAlias
.
Creates a new Amazon GameLift build record for your game server binary files and points to the location of your game server build files in an Amazon Simple Storage Service (Amazon S3) location.
Game server binaries must be combined into a .zip
file for use with Amazon GameLift.
To create new builds quickly and easily, use the AWS CLI command upload-build . This helper command uploads your build and creates a new build record in one step, and automatically handles the necessary permissions.
The CreateBuild
operation should be used only when you need to manually upload your build files, as in the following scenarios:
Store a build file in an Amazon S3 bucket under your own AWS account. To use this option, you must first give Amazon GameLift access to that Amazon S3 bucket. To create a new build record using files in your Amazon S3 bucket, call CreateBuild
and specify a build name, operating system, and the storage location of your game build.
Upload a build file directly to Amazon GameLift's Amazon S3 account. To use this option, you first call CreateBuild
with a build name and operating system. This action creates a new build record and returns an Amazon S3 storage location (bucket and key only) and temporary access credentials. Use the credentials to manually upload your build file to the storage location (see the Amazon S3 topic Uploading Objects). You can upload files to a location only once.
If successful, this operation creates a new build record with a unique build ID and places it in INITIALIZED
status. You can use DescribeBuild to check the status of your build. A build must be in READY
status before it can be used to create fleets.
Learn more
Create a Build with Files in Amazon S3
Related operations
", - "CreateFleet": "Creates a new fleet to run your game servers. whether they are custom game builds or Realtime Servers with game-specific script. A fleet is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances, each of which can host multiple game sessions. When creating a fleet, you choose the hardware specifications, set some configuration options, and specify the game server to deploy on the new fleet.
To create a new fleet, you must provide the following: (1) a fleet name, (2) an EC2 instance type and fleet type (spot or on-demand), (3) the build ID for your game build or script ID if using Realtime Servers, and (4) a run-time configuration, which determines how game servers will run on each instance in the fleet.
When creating a Realtime Servers fleet, we recommend using a minimal version of the Realtime script (see this working code example ). This will make it much easier to troubleshoot any fleet creation issues. Once the fleet is active, you can update your Realtime script as needed.
If the CreateFleet
call is successful, Amazon GameLift performs the following tasks. You can track the process of a fleet by checking the fleet status or by monitoring fleet creation events:
Creates a fleet record. Status: NEW
.
Begins writing events to the fleet event log, which can be accessed in the Amazon GameLift console.
Sets the fleet's target capacity to 1 (desired instances), which triggers Amazon GameLift to start one new EC2 instance.
Downloads the game build or Realtime script to the new instance and installs it. Statuses: DOWNLOADING
, VALIDATING
, BUILDING
.
Starts launching server processes on the instance. If the fleet is configured to run multiple server processes per instance, Amazon GameLift staggers each launch by a few seconds. Status: ACTIVATING
.
Sets the fleet's status to ACTIVE
as soon as one server process is ready to host a game session.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Creates an alias for a fleet. In most situations, you can use an alias ID in place of a fleet ID. An alias provides a level of abstraction for a fleet that is useful when redirecting player traffic from one fleet to another, such as when updating your game build.
Amazon GameLift supports two types of routing strategies for aliases: simple and terminal. A simple alias points to an active fleet. A terminal alias is used to display messaging or link to a URL instead of routing players to an active fleet. For example, you might use a terminal alias when a game version is no longer supported and you want to direct players to an upgrade site.
To create a fleet alias, specify an alias name, routing strategy, and optional description. Each simple alias can point to only one fleet, but a fleet can have multiple aliases. If successful, a new alias record is returned, including an alias ID and an ARN. You can reassign an alias to another fleet by calling UpdateAlias
.
Creates a new Amazon GameLift build record for your game server binary files and points to the location of your game server build files in an Amazon Simple Storage Service (Amazon S3) location.
Game server binaries must be combined into a zip file for use with Amazon GameLift.
To create new builds directly from a file directory, use the AWS CLI command upload-build . This helper command uploads build files and creates a new build record in one step, and automatically handles the necessary permissions.
The CreateBuild
operation should be used only in the following scenarios:
To create a new game build with build files that are in an Amazon S3 bucket under your own AWS account. To use this option, you must first give Amazon GameLift access to that Amazon S3 bucket. Then call CreateBuild
and specify a build name, operating system, and the Amazon S3 storage location of your game build.
To upload build files directly to Amazon GameLift's Amazon S3 account. To use this option, first call CreateBuild
and specify a build name and operating system. This action creates a new build record and returns an Amazon S3 storage location (bucket and key only) and temporary access credentials. Use the credentials to manually upload your build file to the provided storage location (see the Amazon S3 topic Uploading Objects). You can upload build files to the GameLift Amazon S3 location only once.
If successful, this operation creates a new build record with a unique build ID and places it in INITIALIZED
status. You can use DescribeBuild to check the status of your build. A build must be in READY
status before it can be used to create fleets.
Learn more
Uploading Your Game https://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html
Create a Build with Files in Amazon S3
Related operations
", + "CreateFleet": "Creates a new fleet to run your game servers. whether they are custom game builds or Realtime Servers with game-specific script. A fleet is a set of Amazon Elastic Compute Cloud (Amazon EC2) instances, each of which can host multiple game sessions. When creating a fleet, you choose the hardware specifications, set some configuration options, and specify the game server to deploy on the new fleet.
To create a new fleet, you must provide the following: (1) a fleet name, (2) an EC2 instance type and fleet type (spot or on-demand), (3) the build ID for your game build or script ID if using Realtime Servers, and (4) a runtime configuration, which determines how game servers will run on each instance in the fleet.
If the CreateFleet
call is successful, Amazon GameLift performs the following tasks. You can track the process of a fleet by checking the fleet status or by monitoring fleet creation events:
Creates a fleet record. Status: NEW
.
Begins writing events to the fleet event log, which can be accessed in the Amazon GameLift console.
Sets the fleet's target capacity to 1 (desired instances), which triggers Amazon GameLift to start one new EC2 instance.
Downloads the game build or Realtime script to the new instance and installs it. Statuses: DOWNLOADING
, VALIDATING
, BUILDING
.
Starts launching server processes on the instance. If the fleet is configured to run multiple server processes per instance, Amazon GameLift staggers each process launch by a few seconds. Status: ACTIVATING
.
Sets the fleet's status to ACTIVE
as soon as one server process is ready to host a game session.
Learn more
Related operations
Manage fleet actions:
Creates a multiplayer game session for players. This action creates a game session record and assigns an available server process in the specified fleet to host the game session. A fleet must have an ACTIVE
status before a game session can be created in it.
To create a game session, specify either fleet ID or alias ID and indicate a maximum number of players to allow in the game session. You can also provide a name and game-specific properties for this game session. If successful, a GameSession object is returned containing the game session properties and other settings you specified.
Idempotency tokens. You can add a token that uniquely identifies game session requests. This is useful for ensuring that game session requests are idempotent. Multiple requests with the same idempotency token are processed only once; subsequent requests return the original result. All response values are the same with the exception of game session status, which may change.
Resource creation limits. If you are creating a game session on a fleet with a resource creation limit policy in force, then you must specify a creator ID. Without this ID, Amazon GameLift has no way to evaluate the policy for this new game session request.
Player acceptance policy. By default, newly created game sessions are open to new players. You can restrict new player access by using UpdateGameSession to change the game session's player session creation policy.
Game session logs. Logs are retained for all active game sessions for 14 days. To access the logs, call GetGameSessionLogUrl to download the log files.
Available in Amazon GameLift Local.
Game session placements
Establishes a new queue for processing requests to place new game sessions. A queue identifies where new game sessions can be hosted -- by specifying a list of destinations (fleets or aliases) -- and how long requests can wait in the queue before timing out. You can set up a queue to try to place game sessions on fleets in multiple regions. To add placement requests to a queue, call StartGameSessionPlacement and reference the queue name.
Destination order. When processing a request for a game session, Amazon GameLift tries each destination in order until it finds one with available resources to host the new game session. A queue's default order is determined by how destinations are listed. The default order is overridden when a game session placement request provides player latency information. Player latency information enables Amazon GameLift to prioritize destinations where players report the lowest average latency, as a result placing the new game session where the majority of players will have the best possible gameplay experience.
Player latency policies. For placement requests containing player latency information, use player latency policies to protect individual players from very high latencies. With a latency cap, even when a destination can deliver a low latency for most players, the game is not placed where any individual player is reporting latency higher than a policy's maximum. A queue can have multiple latency policies, which are enforced consecutively starting with the policy with the lowest latency cap. Use multiple policies to gradually relax latency controls; for example, you might set a policy with a low latency cap for the first 60 seconds, a second policy with a higher cap for the next 60 seconds, etc.
To create a new queue, provide a name, timeout value, a list of destinations and, if desired, a set of latency policies. If successful, a new queue object is returned.
", + "CreateGameSessionQueue": "Establishes a new queue for processing requests to place new game sessions. A queue identifies where new game sessions can be hosted -- by specifying a list of destinations (fleets or aliases) -- and how long requests can wait in the queue before timing out. You can set up a queue to try to place game sessions on fleets in multiple Regions. To add placement requests to a queue, call StartGameSessionPlacement and reference the queue name.
Destination order. When processing a request for a game session, Amazon GameLift tries each destination in order until it finds one with available resources to host the new game session. A queue's default order is determined by how destinations are listed. The default order is overridden when a game session placement request provides player latency information. Player latency information enables Amazon GameLift to prioritize destinations where players report the lowest average latency, as a result placing the new game session where the majority of players will have the best possible gameplay experience.
Player latency policies. For placement requests containing player latency information, use player latency policies to protect individual players from very high latencies. With a latency cap, even when a destination can deliver a low latency for most players, the game is not placed where any individual player is reporting latency higher than a policy's maximum. A queue can have multiple latency policies, which are enforced consecutively starting with the policy with the lowest latency cap. Use multiple policies to gradually relax latency controls; for example, you might set a policy with a low latency cap for the first 60 seconds, a second policy with a higher cap for the next 60 seconds, etc.
To create a new queue, provide a name, timeout value, a list of destinations and, if desired, a set of latency policies. If successful, a new queue object is returned.
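To make the call shape concrete, here is a minimal Go sketch (assuming this SDK generation's Request/Send pattern; the helper name, ARNs, and numeric values are invented for illustration, not taken from the reference):

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// createQueue is a hypothetical helper; the ARNs and values are placeholders.
func createQueue(ctx context.Context, cfg aws.Config) error {
	svc := gamelift.New(cfg)
	req := svc.CreateGameSessionQueueRequest(&gamelift.CreateGameSessionQueueInput{
		Name:             aws.String("my-queue"),
		TimeoutInSeconds: aws.Int64(600),
		// Destinations are tried in this order unless player latency data reorders them.
		Destinations: []gamelift.GameSessionQueueDestination{
			{DestinationArn: aws.String("arn:aws:gamelift:us-west-2:111122223333:fleet/fleet-1111")},
			{DestinationArn: aws.String("arn:aws:gamelift:us-east-1:111122223333:fleet/fleet-2222")},
		},
		// Enforce a strict 100 ms cap for the first 60 seconds of waiting, then relax it.
		PlayerLatencyPolicies: []gamelift.PlayerLatencyPolicy{
			{MaximumIndividualPlayerLatencyMilliseconds: aws.Int64(100), PolicyDurationSeconds: aws.Int64(60)},
			{MaximumIndividualPlayerLatencyMilliseconds: aws.Int64(200)},
		},
	})
	_, err := req.Send(ctx)
	return err
}
```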
", "CreateMatchmakingConfiguration": "Defines a new matchmaking configuration for use with FlexMatch. A matchmaking configuration sets out guidelines for matching players and getting the matches into games. You can set up multiple matchmaking configurations to handle the scenarios needed for your game. Each matchmaking ticket (StartMatchmaking or StartMatchBackfill) specifies a configuration for the match and provides player attributes to support the configuration being used.
To create a matchmaking configuration, at a minimum you must specify the following: configuration name; a rule set that governs how to evaluate players and find acceptable matches; a game session queue to use when placing a new game session for the match; and the maximum time allowed for a matchmaking attempt.
There are two ways to track the progress of matchmaking tickets: (1) polling ticket status with DescribeMatchmaking; or (2) receiving notifications with Amazon Simple Notification Service (SNS). To use notifications, you first need to set up an SNS topic to receive the notifications, and provide the topic ARN in the matchmaking configuration. Since notifications promise only \"best effort\" delivery, we recommend calling DescribeMatchmaking if no notifications are received within 30 seconds.
Learn more
Setting up Notifications for Matchmaking
Related operations
Creates a new rule set for FlexMatch matchmaking. A rule set describes the type of match to create, such as the number and size of teams. It also sets the parameters for acceptable player matches, such as minimum skill level or character type. A rule set is used by a MatchmakingConfiguration.
To create a matchmaking rule set, provide a unique rule set name and the rule set body in JSON format. Rule sets must be defined in the same Region as the matchmaking configuration they are used with.
Since matchmaking rule sets cannot be edited, it is a good idea to check the rule set syntax using ValidateMatchmakingRuleSet before creating a new rule set.
Learn more
Related operations
Reserves an open player slot in an active game session. Before a player can be added, a game session must have an ACTIVE status, have a creation policy of ALLOW_ALL, and have an open player slot. To add a group of players to a game session, use CreatePlayerSessions. When the player connects to the game server and references a player session ID, the game server contacts the Amazon GameLift service to validate the player reservation and accept the player.
To create a player session, specify a game session ID, player ID, and optionally a string of player data. If successful, a slot is reserved in the game session for the player and a new PlayerSession object is returned. Player sessions cannot be updated.
Available in Amazon GameLift Local.
Game session placements
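As a rough illustration (a sketch under the same Request/Send assumption; `reserveSlot` and the literal IDs are invented):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// reserveSlot is a hypothetical helper; the player ID and data are placeholders.
func reserveSlot(ctx context.Context, cfg aws.Config, gameSessionID string) error {
	svc := gamelift.New(cfg)
	req := svc.CreatePlayerSessionRequest(&gamelift.CreatePlayerSessionInput{
		GameSessionId: aws.String(gameSessionID), // session must be ACTIVE with an open slot
		PlayerId:      aws.String("player-123"),
		PlayerData:    aws.String("preferred-role=tank"), // optional, opaque to GameLift
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	// The player presents this ID to the game server when connecting.
	fmt.Println(aws.StringValue(resp.PlayerSession.PlayerSessionId))
	return nil
}
```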
Reserves open slots in a game session for a group of players. Before players can be added, a game session must have an ACTIVE status, have a creation policy of ALLOW_ALL, and have an open player slot. To add a single player to a game session, use CreatePlayerSession. When a player connects to the game server and references a player session ID, the game server contacts the Amazon GameLift service to validate the player reservation and accept the player.
To create player sessions, specify a game session ID, a list of player IDs, and optionally a set of player data strings. If successful, a slot is reserved in the game session for each player and a set of new PlayerSession objects is returned. Player sessions cannot be updated.
Available in Amazon GameLift Local.
Game session placements
Creates a new script record for your Realtime Servers script. Realtime scripts are JavaScript code that provides configuration settings and optional custom game logic for your game. The script is deployed when you create a Realtime Servers fleet to host your game sessions. Script logic is executed during an active game session.
To create a new script record, specify a script name and provide the script file(s). The script files and all dependencies must be zipped into a single file. You can pull the zip file from either of these locations:
A locally available directory. Use the ZipFile parameter for this option.
An Amazon Simple Storage Service (Amazon S3) bucket under your AWS account. Use the StorageLocation parameter for this option. You'll need to have an Identity and Access Management (IAM) role that allows the Amazon GameLift service to access your S3 bucket.
If the call is successful, a new script record is created with a unique script ID. If the script file is provided as a local file, the file is uploaded to an Amazon GameLift-owned S3 bucket and the script record's storage location reflects this location. If the script file is provided as an S3 bucket, Amazon GameLift accesses the file at this storage location as needed for deployment.
Learn more
Amazon GameLift Realtime Servers
Set Up a Role for Amazon GameLift Access
Related operations
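The local-zip option might look like the following Go sketch (assumptions as before; the file name, script name, and version are placeholders, and the response field access reflects my reading of the API shapes rather than the reference itself):

```go
import (
	"context"
	"io/ioutil"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// createScript is a hypothetical helper; the zip file path is a placeholder.
func createScript(ctx context.Context, cfg aws.Config) (string, error) {
	// The script and all of its dependencies must be zipped into a single file.
	zipBytes, err := ioutil.ReadFile("realtime-script.zip")
	if err != nil {
		return "", err
	}
	svc := gamelift.New(cfg)
	req := svc.CreateScriptRequest(&gamelift.CreateScriptInput{
		Name:    aws.String("my-realtime-script"),
		Version: aws.String("1.0"),
		ZipFile: zipBytes, // alternatively, set StorageLocation to an S3 bucket you control
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return "", err
	}
	return aws.StringValue(resp.Script.ScriptId), nil
}
```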
", - "CreateVpcPeeringAuthorization": "Requests authorization to create or delete a peer connection between the VPC for your Amazon GameLift fleet and a virtual private cloud (VPC) in your AWS account. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. Once you've received authorization, call CreateVpcPeeringConnection to establish the peering connection. For more information, see VPC Peering with Amazon GameLift Fleets.
You can peer with VPCs that are owned by any AWS account you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different regions.
To request authorization to create a connection, call this operation from the AWS account with the VPC that you want to peer to your Amazon GameLift fleet. For example, to enable your game servers to retrieve data from a DynamoDB table, use the account that manages that DynamoDB resource. Identify the following values: (1) The ID of the VPC that you want to peer with, and (2) the ID of the AWS account that you use to manage Amazon GameLift. If successful, VPC peering is authorized for the specified VPC.
To request authorization to delete a connection, call this operation from the AWS account with the VPC that is peered with your Amazon GameLift fleet. Identify the following values: (1) VPC ID that you want to delete the peering connection for, and (2) ID of the AWS account that you use to manage Amazon GameLift.
The authorization remains valid for 24 hours unless it is canceled by a call to DeleteVpcPeeringAuthorization. You must create or delete the peering connection while the authorization is valid.
Establishes a VPC peering connection between a virtual private cloud (VPC) in an AWS account with the VPC for your Amazon GameLift fleet. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. You can peer with VPCs in any AWS account that you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different regions. For more information, see VPC Peering with Amazon GameLift Fleets.
Before calling this operation to establish the peering connection, you first need to call CreateVpcPeeringAuthorization and identify the VPC you want to peer with. Once the authorization for the specified VPC is issued, you have 24 hours to establish the connection. These two operations handle all tasks necessary to peer the two VPCs, including acceptance, updating routing tables, etc.
To establish the connection, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Identify the following values: (1) The ID of the fleet you want to be enable a VPC peering connection for; (2) The AWS account with the VPC that you want to peer with; and (3) The ID of the VPC you want to peer with. This operation is asynchronous. If successful, a VpcPeeringConnection request is created. You can use continuous polling to track the request's status using DescribeVpcPeeringConnections, or by monitoring fleet events for success or failure using DescribeFleetEvents.
Requests authorization to create or delete a peer connection between the VPC for your Amazon GameLift fleet and a virtual private cloud (VPC) in your AWS account. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. Once you've received authorization, call CreateVpcPeeringConnection to establish the peering connection. For more information, see VPC Peering with Amazon GameLift Fleets.
You can peer with VPCs that are owned by any AWS account you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different Regions.
To request authorization to create a connection, call this operation from the AWS account with the VPC that you want to peer to your Amazon GameLift fleet. For example, to enable your game servers to retrieve data from a DynamoDB table, use the account that manages that DynamoDB resource. Identify the following values: (1) The ID of the VPC that you want to peer with, and (2) the ID of the AWS account that you use to manage Amazon GameLift. If successful, VPC peering is authorized for the specified VPC.
To request authorization to delete a connection, call this operation from the AWS account with the VPC that is peered with your Amazon GameLift fleet. Identify the following values: (1) VPC ID that you want to delete the peering connection for, and (2) ID of the AWS account that you use to manage Amazon GameLift.
The authorization remains valid for 24 hours unless it is canceled by a call to DeleteVpcPeeringAuthorization. You must create or delete the peering connection while the authorization is valid.
Establishes a VPC peering connection between a virtual private cloud (VPC) in an AWS account with the VPC for your Amazon GameLift fleet. VPC peering enables the game servers on your fleet to communicate directly with other AWS resources. You can peer with VPCs in any AWS account that you have access to, including the account that you use to manage your Amazon GameLift fleets. You cannot peer with VPCs that are in different Regions. For more information, see VPC Peering with Amazon GameLift Fleets.
Before calling this operation to establish the peering connection, you first need to call CreateVpcPeeringAuthorization and identify the VPC you want to peer with. Once the authorization for the specified VPC is issued, you have 24 hours to establish the connection. These two operations handle all tasks necessary to peer the two VPCs, including acceptance, updating routing tables, etc.
To establish the connection, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Identify the following values: (1) The ID of the fleet you want to be enable a VPC peering connection for; (2) The AWS account with the VPC that you want to peer with; and (3) The ID of the VPC you want to peer with. This operation is asynchronous. If successful, a VpcPeeringConnection request is created. You can use continuous polling to track the request's status using DescribeVpcPeeringConnections, or by monitoring fleet events for success or failure using DescribeFleetEvents.
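The two-step flow might look like this in Go (a sketch under the same Request/Send assumption; both account IDs, the VPC ID, the fleet ID, and the helper name are invented, and in practice each step runs under different credentials):

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// peerFleetVpc is a hypothetical helper. Step 1 runs with credentials for the account
// that owns the peer VPC; step 2 runs with credentials for the account that manages
// the fleet, within the 24-hour authorization window.
func peerFleetVpc(ctx context.Context, vpcOwnerCfg, fleetOwnerCfg aws.Config) error {
	authClient := gamelift.New(vpcOwnerCfg)
	authReq := authClient.CreateVpcPeeringAuthorizationRequest(&gamelift.CreateVpcPeeringAuthorizationInput{
		GameLiftAwsAccountId: aws.String("111122223333"), // account that manages the fleet
		PeerVpcId:            aws.String("vpc-0abc123"),
	})
	if _, err := authReq.Send(ctx); err != nil {
		return err
	}

	connClient := gamelift.New(fleetOwnerCfg)
	connReq := connClient.CreateVpcPeeringConnectionRequest(&gamelift.CreateVpcPeeringConnectionInput{
		FleetId:             aws.String("fleet-1111"),
		PeerVpcAwsAccountId: aws.String("444455556666"), // account that owns the peer VPC
		PeerVpcId:           aws.String("vpc-0abc123"),
	})
	_, err := connReq.Send(ctx)
	return err // asynchronous; poll DescribeVpcPeeringConnections for the outcome
}
```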
Deletes an alias. This action removes all record of the alias. Game clients attempting to access a server process using the deleted alias receive an error. To delete an alias, specify the alias ID to be deleted.
", "DeleteBuild": "Deletes a build. This action permanently deletes the build record and any uploaded build files.
To delete a build, specify its ID. Deleting a build does not affect the status of any active fleets using the build, but you can no longer create new fleets with the deleted build.
Learn more
Related operations
", - "DeleteFleet": "Deletes everything related to a fleet. Before deleting a fleet, you must set the fleet's desired capacity to zero. See UpdateFleetCapacity.
If the fleet being deleted has a VPC peering connection, you first need to get a valid authorization (good for 24 hours) by calling CreateVpcPeeringAuthorization. You do not need to explicitly delete the VPC peering connection--this is done as part of the delete fleet process.
This action removes the fleet's resources and the fleet record. Once a fleet is deleted, you can no longer use that fleet.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Deletes everything related to a fleet. Before deleting a fleet, you must set the fleet's desired capacity to zero. See UpdateFleetCapacity.
If the fleet being deleted has a VPC peering connection, you first need to get a valid authorization (good for 24 hours) by calling CreateVpcPeeringAuthorization. You do not need to explicitly delete the VPC peering connection--this is done as part of the delete fleet process.
This action removes the fleet's resources and the fleet record. Once a fleet is deleted, you can no longer use that fleet.
Learn more
Related operations
Manage fleet actions:
Deletes a game session queue. This action means that any StartGameSessionPlacement requests that reference this queue will fail. To delete a queue, specify the queue name.
", "DeleteMatchmakingConfiguration": "Permanently removes a FlexMatch matchmaking configuration. To delete, specify the configuration name. A matchmaking configuration cannot be deleted if it is being used in any active matchmaking tickets.
Related operations
Deletes an existing matchmaking rule set. To delete the rule set, provide the rule set name. Rule sets cannot be deleted if they are currently being used by a matchmaking configuration.
Learn more
Related operations
Removes a VPC peering connection. To delete the connection, you must have a valid authorization for the VPC peering connection that you want to delete. You can check for an authorization by calling DescribeVpcPeeringAuthorizations or request a new one using CreateVpcPeeringAuthorization.
Once a valid authorization exists, call this operation from the AWS account that is used to manage the Amazon GameLift fleets. Identify the connection to delete by the connection ID and fleet ID. If successful, the connection is removed.
Retrieves properties for an alias. This operation returns all alias metadata and settings. To get an alias's target fleet ID only, use ResolveAlias.
To get alias properties, specify the alias ID. If successful, the requested alias record is returned.
", "DescribeBuild": "Retrieves properties for a build. To request a build record, specify a build ID. If successful, an object containing the build properties is returned.
Learn more
Related operations
", - "DescribeEC2InstanceLimits": "Retrieves the following information for the specified EC2 instance type:
maximum number of instances allowed per AWS account (service limit)
current usage level for the AWS account
Service limits vary depending on region. Available regions for Amazon GameLift can be found in the AWS Management Console for Amazon GameLift (see the drop-down list in the upper right corner).
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves fleet properties, including metadata, status, and configuration, for one or more fleets. You can request attributes for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetAttributes object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves the current status of fleet capacity for one or more fleets. This information includes the number of instances that have been requested for the fleet and the number currently active. You can request capacity for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetCapacity object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves entries from the specified fleet's event log. You can specify a time range to limit the result set. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of event log entries matching the request are returned.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves the inbound connection permissions for a fleet. Connection permissions include a range of IP addresses and port settings that incoming traffic can use to access server processes in the fleet. To get a fleet's inbound connection permissions, specify a fleet ID. If successful, a collection of IpPermission objects is returned for the requested fleet ID. If the requested fleet has been deleted, the result set is empty.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves utilization statistics for one or more fleets. You can request utilization data for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetUtilization object is returned for each requested fleet ID. When specifying a list of fleet IDs, utilization objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves the following information for the specified EC2 instance type:
maximum number of instances allowed per AWS account (service limit)
current usage level for the AWS account
Service limits vary depending on Region. Available Regions for Amazon GameLift can be found in the AWS Management Console for Amazon GameLift (see the drop-down list in the upper right corner).
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves fleet properties, including metadata, status, and configuration, for one or more fleets. You can request attributes for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetAttributes object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves the current status of fleet capacity for one or more fleets. This information includes the number of instances that have been requested for the fleet and the number currently active. You can request capacity for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetCapacity object is returned for each requested fleet ID. When specifying a list of fleet IDs, attribute objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves entries from the specified fleet's event log. You can specify a time range to limit the result set. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a collection of event log entries matching the request are returned.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves the inbound connection permissions for a fleet. Connection permissions include a range of IP addresses and port settings that incoming traffic can use to access server processes in the fleet. To get a fleet's inbound connection permissions, specify a fleet ID. If successful, a collection of IpPermission objects is returned for the requested fleet ID. If the requested fleet has been deleted, the result set is empty.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves utilization statistics for one or more fleets. You can request utilization data for all fleets, or specify a list of one or more fleet IDs. When requesting multiple fleets, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a FleetUtilization object is returned for each requested fleet ID. When specifying a list of fleet IDs, utilization objects are returned only for fleets that currently exist.
Some API actions may limit the number of fleet IDs allowed in one request. If a request exceeds this limit, the request fails and the error message includes the maximum allowed.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves properties, including the protection policy in force, for one or more game sessions. This action can be used in several ways: (1) provide a GameSessionId or GameSessionArn to request details for a specific game session; (2) provide either a FleetId or an AliasId to request properties for all game sessions running on a fleet.
To get game session record(s), specify just one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionDetail object is returned for each session matching the request.
Game session placements
Retrieves properties and current status of a game session placement request. To get game session placement details, specify the placement ID. If successful, a GameSessionPlacement object is returned.
Game session placements
", "DescribeGameSessionQueues": "Retrieves the properties for one or more game session queues. When requesting multiple queues, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSessionQueue object is returned for each requested queue. When specifying a list of queues, objects are returned only for queues that currently exist in the Region.
", "DescribeGameSessions": "Retrieves a set of one or more game sessions. Request a specific game session or request all game sessions on a fleet. Alternatively, use SearchGameSessions to request a set of active game sessions that are filtered by certain criteria. To retrieve protection policy settings for game sessions, use DescribeGameSessionDetails.
To get game sessions, specify one of the following: game session ID, fleet ID, or alias ID. You can filter this request by game session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a GameSession object is returned for each game session matching the request.
Available in Amazon GameLift Local.
Game session placements
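A Go sketch of the fleet-wide form of the request (same Request/Send assumption; the fleet ID and filter value are placeholders):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// listActiveSessions is a hypothetical helper; the fleet ID is a placeholder.
func listActiveSessions(ctx context.Context, cfg aws.Config) error {
	svc := gamelift.New(cfg)
	req := svc.DescribeGameSessionsRequest(&gamelift.DescribeGameSessionsInput{
		FleetId:      aws.String("fleet-1111"),
		StatusFilter: aws.String("ACTIVE"), // optional game session status filter
		Limit:        aws.Int64(20),        // page size; use NextToken for more pages
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	for _, gs := range resp.GameSessions {
		fmt.Println(aws.StringValue(gs.GameSessionId))
	}
	return nil
}
```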
Retrieves information about a fleet's instances, including instance IDs. Use this action to get details on all instances in the fleet or get details on one specific instance.
To get a specific instance, specify fleet ID and instance ID. To get all instances in a fleet, specify a fleet ID only. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, an Instance object is returned for each result.
", - "DescribeMatchmaking": "Retrieves one or more matchmaking tickets. Use this operation to retrieve ticket information, including status and--once a successful match is made--acquire connection information for the resulting new game session.
You can use this operation to track the progress of matchmaking requests (through polling) as an alternative to using event notifications. See more details on tracking matchmaking requests through polling or notifications in StartMatchmaking.
To request matchmaking tickets, provide a list of up to 10 ticket IDs. If the request is successful, a ticket object is returned for each requested ID that currently exists.
Learn more
Add FlexMatch to a Game Client
Set Up FlexMatch Event Notification
Related operations
", + "DescribeMatchmaking": "Retrieves one or more matchmaking tickets. Use this operation to retrieve ticket information, including status and--once a successful match is made--acquire connection information for the resulting new game session.
You can use this operation to track the progress of matchmaking requests (through polling) as an alternative to using event notifications. See more details on tracking matchmaking requests through polling or notifications in StartMatchmaking.
To request matchmaking tickets, provide a list of up to 10 ticket IDs. If the request is successful, a ticket object is returned for each requested ID that currently exists.
Learn more
Add FlexMatch to a Game Client
Set Up FlexMatch Event Notification
Related operations
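One polling pass might look like this in Go (a sketch under the same assumptions; the helper name is invented, and I'm assuming the ticket Status field is a string-typed enum):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// pollTicket is a hypothetical helper for one polling pass (poll at most every 10 seconds).
func pollTicket(ctx context.Context, cfg aws.Config, ticketID string) error {
	svc := gamelift.New(cfg)
	req := svc.DescribeMatchmakingRequest(&gamelift.DescribeMatchmakingInput{
		TicketIds: []string{ticketID}, // up to 10 IDs per request
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	for _, t := range resp.TicketList {
		// Completed tickets carry the game session connection info for the match.
		fmt.Println(aws.StringValue(t.TicketId), string(t.Status))
	}
	return nil
}
```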
", "DescribeMatchmakingConfigurations": "Retrieves the details of FlexMatch matchmaking configurations. With this operation, you have the following options: (1) retrieve all existing configurations, (2) provide the names of one or more configurations to retrieve, or (3) retrieve all configurations that use a specified rule set name. When requesting multiple items, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a configuration is returned for each requested name. When specifying a list of names, only configurations that currently exist are returned.
Learn more
Setting Up FlexMatch Matchmakers
Related operations
Retrieves the details for FlexMatch matchmaking rule sets. You can request all existing rule sets for the Region, or provide a list of one or more rule set names. When requesting multiple items, use the pagination parameters to retrieve results as a set of sequential pages. If successful, a rule set is returned for each requested name.
Learn more
Related operations
Retrieves properties for one or more player sessions. This action can be used in several ways: (1) provide a PlayerSessionId to request properties for a specific player session; (2) provide a GameSessionId to request properties for all player sessions in the specified game session; (3) provide a PlayerId to request properties for all player sessions of a specified player.
To get game session record(s), specify only one of the following: a player session ID, a game session ID, or a player ID. You can filter this request by player session status. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a PlayerSession object is returned for each session matching the request.
Available in Amazon GameLift Local.
Game session placements
Retrieves the current runtime configuration for the specified fleet. The runtime configuration tells Amazon GameLift how to launch server processes on instances in the fleet.
Learn more
Related operations
Describe fleets:
Manage fleet actions:
Retrieves all scaling policies applied to a fleet.
To get a fleet's scaling policies, specify the fleet ID. You can filter this request by policy status, such as to retrieve only active scaling policies. Use the pagination parameters to retrieve results as a set of sequential pages. If successful, a set of ScalingPolicy objects is returned for the fleet.
A fleet may have all of its scaling policies suspended (StopFleetActions). This action does not affect the status of the scaling policies, which remains ACTIVE. To see whether a fleet's scaling policies are in force or suspended, call DescribeFleetAttributes and check the stopped actions.
Manage scaling policies:
PutScalingPolicy (auto-scaling)
DescribeScalingPolicies (auto-scaling)
DeleteScalingPolicy (auto-scaling)
Manage fleet actions:
Retrieves properties for a Realtime script.
To request a script record, specify the script ID. If successful, an object containing the script properties is returned.
Learn more
Amazon GameLift Realtime Servers
Related operations
", "DescribeVpcPeeringAuthorizations": "Retrieves valid VPC peering authorizations that are pending for the AWS account. This operation returns all VPC peering authorizations and requests for peering. This includes those initiated and received by this account.
Requests remote access to a fleet instance. Remote access is useful for debugging, gathering benchmarking data, or watching activity in real time.
Access requires credentials that match the operating system of the instance. For a Windows instance, Amazon GameLift returns a user name and password as strings for use with a Windows Remote Desktop client. For a Linux instance, Amazon GameLift returns a user name and RSA private key, also as strings, for use with an SSH client. The private key must be saved in the proper format to a .pem file before using. If you're making this request using the AWS CLI, saving the secret can be handled as part of the GetInstanceAccess request. (See the example later in this topic.) For more information on remote access, see Remotely Accessing an Instance.
To request access to a specific instance, specify the IDs of both the instance and the fleet it belongs to. You can retrieve a fleet's instance IDs by calling DescribeInstances. If successful, an InstanceAccess object is returned containing the instance's IP address and a set of credentials.
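For illustration, a Go sketch of the request (same assumptions; fleet and instance IDs are placeholders, and the response field access reflects my reading of the API shapes):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// fetchInstanceCredentials is a hypothetical helper; the IDs are placeholders.
func fetchInstanceCredentials(ctx context.Context, cfg aws.Config) error {
	svc := gamelift.New(cfg)
	req := svc.GetInstanceAccessRequest(&gamelift.GetInstanceAccessInput{
		FleetId:    aws.String("fleet-1111"),
		InstanceId: aws.String("i-0123456789abcdef0"),
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	access := resp.InstanceAccess
	// For a Linux instance, Credentials.Secret holds an RSA private key to save as a .pem file.
	fmt.Println(aws.StringValue(access.IpAddress), aws.StringValue(access.Credentials.UserName))
	return nil
}
```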
", "ListAliases": "Retrieves all aliases for this AWS account. You can filter the result set by alias name and/or routing strategy type. Use the pagination parameters to retrieve results in sequential pages.
Returned aliases are not listed in any particular order.
Retrieves build records for all builds associated with the AWS account in use. You can limit results to builds that are in a specific status by using the Status parameter. Use the pagination parameters to retrieve results in a set of sequential pages.
Build records are not listed in any particular order.
Learn more
Related operations
", - "ListFleets": "Retrieves a collection of fleet records for this AWS account. You can filter the result set to find only those fleets that are deployed with a specific build or script. Use the pagination parameters to retrieve results in sequential pages.
Fleet records are not listed in a particular order.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Retrieves a collection of fleet records for this AWS account. You can filter the result set to find only those fleets that are deployed with a specific build or script. Use the pagination parameters to retrieve results in sequential pages.
Fleet records are not listed in a particular order.
Learn more
Related operations
Manage fleet actions:
Retrieves script records for all Realtime scripts that are associated with the AWS account in use.
Learn more
Amazon GameLift Realtime Servers
Related operations
", + "ListTagsForResource": "Retrieves all tags that are assigned to a GameLift resource. Resource tags are used to organize AWS resources for a range of purposes. This action handles the permissions necessary to manage tags for the following GameLift resource types:
Build
Script
Fleet
Alias
GameSessionQueue
MatchmakingConfiguration
MatchmakingRuleSet
To list tags for a resource, specify the unique ARN value for the resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
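A minimal Go sketch, assuming the same Request/Send pattern (the helper name is invented; pass the ARN of any taggable resource):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// printResourceTags is a hypothetical helper; pass the ARN of a fleet, build, alias, etc.
func printResourceTags(ctx context.Context, cfg aws.Config, resourceARN string) error {
	svc := gamelift.New(cfg)
	req := svc.ListTagsForResourceRequest(&gamelift.ListTagsForResourceInput{
		ResourceARN: aws.String(resourceARN),
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	for _, tag := range resp.Tags {
		fmt.Printf("%s=%s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
	}
	return nil
}
```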
", "PutScalingPolicy": "Creates or updates a scaling policy for a fleet. Scaling policies are used to automatically scale a fleet's hosting capacity to meet player demand. An active scaling policy instructs Amazon GameLift to track a fleet metric and automatically change the fleet's capacity when a certain threshold is reached. There are two types of scaling policies: target-based and rule-based. Use a target-based policy to quickly and efficiently manage fleet scaling; this option is the most commonly used. Use rule-based policies when you need to exert fine-grained control over auto-scaling.
Fleets can have multiple scaling policies of each type in force at the same time; you can have one target-based policy, one or multiple rule-based scaling policies, or both. We recommend caution, however, because multiple auto-scaling policies can have unintended consequences.
You can temporarily suspend all scaling policies for a fleet by calling StopFleetActions with the fleet action AUTO_SCALING. To resume scaling policies, call StartFleetActions with the same fleet action. To stop just one scaling policy--or to permanently remove it, you must delete the policy with DeleteScalingPolicy.
Learn more about how to work with auto-scaling in Set Up Fleet Automatic Scaling.
Target-based policy
A target-based policy tracks a single metric: PercentAvailableGameSessions. This metric tells us how much of a fleet's hosting capacity is ready to host game sessions but is not currently in use. This is the fleet's buffer; it measures the additional player demand that the fleet could handle at current capacity. With a target-based policy, you set your ideal buffer size and leave it to Amazon GameLift to take whatever action is needed to maintain that target.
For example, you might choose to maintain a 10% buffer for a fleet that has the capacity to host 100 simultaneous game sessions. This policy tells Amazon GameLift to take action whenever the fleet's available capacity falls below or rises above 10 game sessions. Amazon GameLift will start new instances or stop unused instances in order to return to the 10% buffer.
To create or update a target-based policy, specify a fleet ID and name, and set the policy type to \"TargetBased\". Specify the metric to track (PercentAvailableGameSessions) and reference a TargetConfiguration object with your desired buffer value. Exclude all other parameters. On a successful request, the policy name is returned. The scaling policy is automatically in force as soon as it's successfully created. If the fleet's auto-scaling actions are temporarily suspended, the new policy will be in force once the fleet actions are restarted.
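The 10% buffer example might look like this in Go (a sketch under the same Request/Send assumption; the fleet ID, policy name, and especially the enum constant names are my assumptions, not taken from the reference):

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// maintainBuffer is a hypothetical helper; IDs and names are placeholders.
func maintainBuffer(ctx context.Context, cfg aws.Config) error {
	svc := gamelift.New(cfg)
	req := svc.PutScalingPolicyRequest(&gamelift.PutScalingPolicyInput{
		FleetId:    aws.String("fleet-1111"),
		Name:       aws.String("maintain-10-percent-buffer"),
		PolicyType: gamelift.PolicyTypeTargetBased,                  // assumed enum constant for "TargetBased"
		MetricName: gamelift.MetricNamePercentAvailableGameSessions, // assumed enum constant
		TargetConfiguration: &gamelift.TargetConfiguration{
			TargetValue: aws.Float64(10.0), // keep 10% of capacity free as a buffer
		},
	})
	_, err := req.Send(ctx)
	return err
}
```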
Rule-based policy
A rule-based policy tracks specified fleet metric, sets a threshold value, and specifies the type of action to initiate when triggered. With a rule-based policy, you can select from several available fleet metrics. Each policy specifies whether to scale up or scale down (and by how much), so you need one policy for each type of action.
For example, a policy may make the following statement: \"If the percentage of idle instances is greater than 20% for more than 15 minutes, then reduce the fleet capacity by 10%.\"
A policy's rule statement has the following structure:
If [MetricName] is [ComparisonOperator] [Threshold] for [EvaluationPeriods] minutes, then [ScalingAdjustmentType] to/by [ScalingAdjustment].
To implement the example, the rule statement would look like this:
If [PercentIdleInstances] is [GreaterThanThreshold] [20] for [15] minutes, then [PercentChangeInCapacity] to/by [10].
To create or update a scaling policy, specify a unique combination of name and fleet ID, and set the policy type to \"RuleBased\". Specify the parameter values for a policy rule statement. On a successful request, the policy name is returned. Scaling policies are automatically in force as soon as they're successfully created. If the fleet's auto-scaling actions are temporarily suspended, the new policy will be in force once the fleet actions are restarted.
Manage scaling policies:
PutScalingPolicy (auto-scaling)
DescribeScalingPolicies (auto-scaling)
DeleteScalingPolicy (auto-scaling)
Manage fleet actions:
Retrieves a fresh set of credentials for use when uploading a new set of game build files to Amazon GameLift's Amazon S3. This is done as part of the build creation process; see CreateBuild.
To request new credentials, specify the build ID as returned with an initial CreateBuild request. If successful, a new set of credentials is returned, along with the S3 storage location associated with the build ID.
Learn more
Related operations
", - "ResolveAlias": "Retrieves the fleet ID that a specified alias is currently pointing to.
", - "SearchGameSessions": "Retrieves all active game sessions that match a set of search criteria and sorts them in a specified order. You can search or sort by the following game session attributes:
gameSessionId -- Unique identifier for the game session. You can use either a GameSessionId
or GameSessionArn
value.
gameSessionName -- Name assigned to a game session. This value is set when requesting a new game session with CreateGameSession or updating with UpdateGameSession. Game session names do not need to be unique to a game session.
gameSessionProperties -- Custom data defined in a game session's GameProperty
parameter. GameProperty
values are stored as key:value pairs; the filter expression must indicate the key and a string to search the data values for. For example, to search for game sessions with custom data containing the key:value pair \"gameMode:brawl\", specify the following: gameSessionProperties.gameMode = \"brawl\"
. All custom data values are searched as strings.
maximumSessions -- Maximum number of player sessions allowed for a game session. This value is set when requesting a new game session with CreateGameSession or updating with UpdateGameSession.
creationTimeMillis -- Value indicating when a game session was created. It is expressed in Unix time as milliseconds.
playerSessionCount -- Number of players currently connected to a game session. This value changes rapidly as players join the session or drop out.
hasAvailablePlayerSessions -- Boolean value indicating whether a game session has reached its maximum number of players. It is highly recommended that all search requests include this filter attribute to optimize search performance and return only sessions that players can join.
Returned values for playerSessionCount
and hasAvailablePlayerSessions
change quickly as players join sessions and others drop out. Results should be considered a snapshot in time. Be sure to refresh search results often, and handle sessions that fill up before a player can join.
To search or sort, specify either a fleet ID or an alias ID, and provide a search filter expression, a sort expression, or both. If successful, a collection of GameSession objects matching the request is returned. Use the pagination parameters to retrieve results as a set of sequential pages.
You can search for game sessions one fleet at a time only. To find game sessions across multiple fleets, you must search each fleet separately and combine the results. This search feature finds only game sessions that are in ACTIVE
status. To locate games in statuses other than active, use DescribeGameSessionDetails.
Game session placements
Retrieves the fleet ID that an alias is currently pointing to.
", + "SearchGameSessions": "Retrieves all active game sessions that match a set of search criteria and sorts them in a specified order. You can search or sort by the following game session attributes:
gameSessionId -- A unique identifier for the game session. You can use either a GameSessionId
or GameSessionArn
value.
gameSessionName -- Name assigned to a game session. This value is set when requesting a new game session with CreateGameSession or updating with UpdateGameSession. Game session names do not need to be unique to a game session.
gameSessionProperties -- Custom data defined in a game session's GameProperty
parameter. GameProperty
values are stored as key:value pairs; the filter expression must indicate the key and a string to search the data values for. For example, to search for game sessions with custom data containing the key:value pair \"gameMode:brawl\", specify the following: gameSessionProperties.gameMode = \"brawl\"
. All custom data values are searched as strings.
maximumSessions -- Maximum number of player sessions allowed for a game session. This value is set when requesting a new game session with CreateGameSession or updating with UpdateGameSession.
creationTimeMillis -- Value indicating when a game session was created. It is expressed in Unix time as milliseconds.
playerSessionCount -- Number of players currently connected to a game session. This value changes rapidly as players join the session or drop out.
hasAvailablePlayerSessions -- Boolean value indicating whether a game session has reached its maximum number of players. It is highly recommended that all search requests include this filter attribute to optimize search performance and return only sessions that players can join.
Returned values for playerSessionCount
and hasAvailablePlayerSessions
change quickly as players join sessions and others drop out. Results should be considered a snapshot in time. Be sure to refresh search results often, and handle sessions that fill up before a player can join.
To search or sort, specify either a fleet ID or an alias ID, and provide a search filter expression, a sort expression, or both. If successful, a collection of GameSession objects matching the request is returned. Use the pagination parameters to retrieve results as a set of sequential pages.
You can search for game sessions one fleet at a time only. To find game sessions across multiple fleets, you must search each fleet separately and combine the results. This search feature finds only game sessions that are in ACTIVE
status. To locate games in statuses other than active, use DescribeGameSessionDetails.
Game session placements
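A filter-and-sort request might look like this in Go (same assumptions as the earlier sketches; the fleet ID and expressions are placeholders built from the attribute syntax described above):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// findJoinableBrawls is a hypothetical helper; the fleet ID is a placeholder.
func findJoinableBrawls(ctx context.Context, cfg aws.Config) error {
	svc := gamelift.New(cfg)
	req := svc.SearchGameSessionsRequest(&gamelift.SearchGameSessionsInput{
		FleetId: aws.String("fleet-1111"), // search one fleet (or alias) at a time
		FilterExpression: aws.String(
			"hasAvailablePlayerSessions=true AND gameSessionProperties.gameMode = \"brawl\""),
		SortExpression: aws.String("playerSessionCount ASC"), // emptiest sessions first
		Limit:          aws.Int64(10),
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	for _, gs := range resp.GameSessions {
		fmt.Println(aws.StringValue(gs.GameSessionId))
	}
	return nil
}
```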
Resumes activity on a fleet that was suspended with StopFleetActions. Currently, this operation is used to restart a fleet's auto-scaling activity.
To start fleet actions, specify the fleet ID and the type of actions to restart. When auto-scaling fleet actions are restarted, Amazon GameLift once again initiates scaling events as triggered by the fleet's scaling policies. If actions on the fleet were never stopped, this operation will have no effect. You can view a fleet's stopped actions using DescribeFleetAttributes.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Places a request for a new game session in a queue (see CreateGameSessionQueue). When processing a placement request, Amazon GameLift searches for available resources on the queue's destinations, scanning each until it finds resources or the placement request times out.
A game session placement request can also request player sessions. When a new game session is successfully created, Amazon GameLift creates a player session for each player included in the request.
When placing a game session, by default Amazon GameLift tries each fleet in the order they are listed in the queue configuration. Ideally, a queue's destinations are listed in preference order.
Alternatively, when requesting a game session with players, you can also provide latency data for each player in relevant Regions. Latency data indicates the performance lag a player experiences when connected to a fleet in the Region. Amazon GameLift uses latency data to reorder the list of destinations to place the game session in a Region with minimal lag. If latency data is provided for multiple players, Amazon GameLift calculates each Region's average lag for all players and reorders to get the best game play across all players.
To place a new game session request, specify the following:
The queue name and a set of game session properties and settings
A unique ID (such as a UUID) for the placement. You use this ID to track the status of the placement request
(Optional) A set of player data and a unique player ID for each player that you are joining to the new game session (player data is optional, but if you include it, you must also provide a unique ID for each player)
Latency data for all players (if you want to optimize game play for the players)
If successful, a new game session placement is created.
To track the status of a placement request, call DescribeGameSessionPlacement and check the request's status. If the status is FULFILLED, a new game session has been created and a game session ARN and Region are referenced. If the placement request times out, you can resubmit the request or retry it with a different queue.
Game session placements
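As a rough Go illustration (same Request/Send assumption; the placement ID, queue name, and latency figures are invented):

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// placeSession is a hypothetical helper; all literal values are placeholders.
func placeSession(ctx context.Context, cfg aws.Config) error {
	svc := gamelift.New(cfg)
	req := svc.StartGameSessionPlacementRequest(&gamelift.StartGameSessionPlacementInput{
		PlacementId:               aws.String("placement-7f3a9c"), // caller-generated unique ID
		GameSessionQueueName:      aws.String("my-queue"),
		MaximumPlayerSessionCount: aws.Int64(8),
		// Latency data lets GameLift reorder the queue's destinations per Region.
		PlayerLatencies: []gamelift.PlayerLatency{
			{PlayerId: aws.String("player-123"), RegionIdentifier: aws.String("us-west-2"), LatencyInMilliseconds: aws.Float64(40)},
			{PlayerId: aws.String("player-123"), RegionIdentifier: aws.String("us-east-1"), LatencyInMilliseconds: aws.Float64(90)},
		},
	})
	_, err := req.Send(ctx)
	return err // poll DescribeGameSessionPlacement until the status is FULFILLED
}
```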
Finds new players to fill open slots in an existing game session. This operation can be used to add players to matched games that start with fewer than the maximum number of players or to replace players when they drop out. By backfilling with the same matchmaker used to create the original match, you ensure that new players meet the match criteria and maintain a consistent experience throughout the game session. You can backfill a match anytime after a game session has been created.
To request a match backfill, specify a unique ticket ID, the existing game session's ARN, a matchmaking configuration, and a set of data that describes all current players in the game session. If successful, a match backfill ticket is created and returned with status set to QUEUED. The ticket is placed in the matchmaker's ticket pool and processed. Track the status of the ticket to respond as needed.
The process of finding backfill matches is essentially identical to the initial matchmaking process. The matchmaker searches the pool and groups tickets together to form potential matches, allowing only one backfill ticket per potential match. Once a match is formed, the matchmaker creates player sessions for the new players. All tickets in the match are updated with the game session's connection information, and the GameSession object is updated to include matchmaker data on the new players. For more detail on how match backfill requests are processed, see How Amazon GameLift FlexMatch Works.
Learn more
Backfill Existing Games with FlexMatch
Related operations
", "StartMatchmaking": "Uses FlexMatch to create a game match for a group of players based on custom matchmaking rules, and starts a new game for the matched players. Each matchmaking request specifies the type of match to build (team configuration, rules for an acceptable match, etc.). The request also specifies the players to find a match for and where to host the new game session for optimal performance. A matchmaking request might start with a single player or a group of players who want to play together. FlexMatch finds additional players as needed to fill the match. Match type, rules, and the queue used to place a new game session are defined in a MatchmakingConfiguration
.
To start matchmaking, provide a unique ticket ID, specify a matchmaking configuration, and include the players to be matched. You must also include a set of player attributes relevant for the matchmaking configuration. If successful, a matchmaking ticket is returned with status set to QUEUED
. Track the status of the ticket to respond as needed and acquire game session connection information for successfully completed matches.
Tracking ticket status -- A couple of options are available for tracking the status of matchmaking requests:
Polling -- Call DescribeMatchmaking
. This operation returns the full ticket object, including current status and (for completed tickets) game session connection info. We recommend polling no more than once every 10 seconds.
Notifications -- Get event notifications for changes in ticket status using Amazon Simple Notification Service (SNS). Notifications are easy to set up (see CreateMatchmakingConfiguration) and typically deliver match status changes faster and more efficiently than polling. We recommend that you use polling to back up to notifications (since delivery is not guaranteed) and call DescribeMatchmaking
only when notifications are not received within 30 seconds.
Processing a matchmaking request -- FlexMatch handles a matchmaking request as follows:
Your client code submits a StartMatchmaking
request for one or more players and tracks the status of the request ticket.
FlexMatch uses this ticket and others in process to build an acceptable match. When a potential match is identified, all tickets in the proposed match are advanced to the next status.
If the match requires player acceptance (set in the matchmaking configuration), the tickets move into status REQUIRES_ACCEPTANCE
. This status triggers your client code to solicit acceptance from all players in every ticket involved in the match, and then call AcceptMatch for each player. If any player rejects or fails to accept the match before a specified timeout, the proposed match is dropped (see AcceptMatch
for more details).
Once a match is proposed and accepted, the matchmaking tickets move into status PLACING
. FlexMatch locates resources for a new game session using the game session queue (set in the matchmaking configuration) and creates the game session based on the match data.
When the match is successfully placed, the matchmaking tickets move into COMPLETED
status. Connection information (including game session endpoint and player session) is added to the matchmaking tickets. Matched players can use the connection information to join the game.
Learn more
Add FlexMatch to a Game Client
Set Up FlexMatch Event Notification
Related operations
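Submitting a request might look like this in Go (a sketch under the same assumptions; the configuration name, player ID, and attribute are invented, and attribute names must match the rule set in use):

```go
import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// requestMatch is a hypothetical helper; all literal values are placeholders.
func requestMatch(ctx context.Context, cfg aws.Config) error {
	svc := gamelift.New(cfg)
	req := svc.StartMatchmakingRequest(&gamelift.StartMatchmakingInput{
		ConfigurationName: aws.String("my-matchmaker"),
		Players: []gamelift.Player{{
			PlayerId: aws.String("player-123"),
			// Attribute names and types must match what the rule set evaluates.
			PlayerAttributes: map[string]gamelift.AttributeValue{
				"skill": {N: aws.Float64(23.5)},
			},
		}},
	})
	resp, err := req.Send(ctx)
	if err != nil {
		return err
	}
	// The ticket starts out QUEUED; track it via notifications or DescribeMatchmaking.
	fmt.Println(aws.StringValue(resp.MatchmakingTicket.TicketId))
	return nil
}
```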
", "StopFleetActions": "Suspends activity on a fleet. Currently, this operation is used to stop a fleet's auto-scaling activity. It is used to temporarily stop scaling events triggered by the fleet's scaling policies. The policies can be retained and auto-scaling activity can be restarted using StartFleetActions. You can view a fleet's stopped actions using DescribeFleetAttributes.
To stop fleet actions, specify the fleet ID and the type of actions to suspend. When auto-scaling fleet actions are stopped, Amazon GameLift no longer initiates scaling events except to maintain the fleet's desired instances setting (FleetCapacity). Changes to the fleet's capacity must be done manually using UpdateFleetCapacity.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Cancels a game session placement that is in PENDING status. To stop a placement, provide the placement ID value. If successful, the placement is moved to CANCELLED status.
Game session placements
Cancels a matchmaking ticket or match backfill ticket that is currently being processed. To stop the matchmaking operation, specify the ticket ID. If successful, work on the ticket is stopped, and the ticket status is changed to CANCELLED.
This call is also used to turn off automatic backfill for an individual game session. This is for game sessions that are created with a matchmaking configuration that has automatic backfill enabled. The ticket ID is included in the MatchmakerData of an updated game session object, which is provided to the game server.
If the action is successful, the service sends back an empty JSON struct with the HTTP 200 response (not an empty HTTP body).
Learn more
Add FlexMatch to a Game Client
Related operations
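In Go, cancellation reduces to a single call (same assumptions as before; the helper name is invented):

```go
import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// cancelTicket is a hypothetical helper; it works for matchmaking and backfill tickets alike.
func cancelTicket(ctx context.Context, cfg aws.Config, ticketID string) error {
	svc := gamelift.New(cfg)
	req := svc.StopMatchmakingRequest(&gamelift.StopMatchmakingInput{
		TicketId: aws.String(ticketID),
	})
	// On success the service returns an empty JSON struct; the ticket moves to CANCELLED.
	_, err := req.Send(ctx)
	return err
}
```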
", + "TagResource": "Assigns a tag to a GameLift resource. AWS resource tags provide an additional management tool set. You can use tags to organize resources, create IAM permissions policies to manage access to groups of resources, customize AWS cost breakdowns, etc. This action handles the permissions necessary to manage tags for the following GameLift resource types:
Build
Script
Fleet
Alias
GameSessionQueue
MatchmakingConfiguration
MatchmakingRuleSet
To add a tag to a resource, specify the unique ARN value for the resource and provide a tag list containing one or more tags. The operation succeeds even if the list includes tags that are already assigned to the specified resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
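A minimal sketch of a TagResource call (same assumed v0.19-style request pattern; the ARN and tag values are placeholders):

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// tagResource assigns tags to any taggable GameLift resource by ARN.
// The call succeeds even if some of the tags are already assigned.
func tagResource(ctx context.Context, resourceARN string) error {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		return err
	}
	svc := gamelift.New(cfg)

	_, err = svc.TagResourceRequest(&gamelift.TagResourceInput{
		ResourceARN: aws.String(resourceARN),
		Tags: []gamelift.Tag{ // placeholder tags
			{Key: aws.String("stage"), Value: aws.String("prod")},
			{Key: aws.String("team"), Value: aws.String("platform")},
		},
	}).Send(ctx)
	return err
}
```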
", + "UntagResource": "Removes a tag that is assigned to a GameLift resource. Resource tags are used to organize AWS resources for a range of purposes. This action handles the permissions necessary to manage tags for the following GameLift resource types:
Build
Script
Fleet
Alias
GameSessionQueue
MatchmakingConfiguration
MatchmakingRuleSet
To remove a tag from a resource, specify the unique ARN value for the resource and provide a string list containing one or more tags to be removed. This action succeeds even if the list includes tags that are not currently assigned to the specified resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
", "UpdateAlias": "Updates properties for an alias. To update properties, specify the alias ID to be updated and provide the information to be changed. To reassign an alias to another fleet, provide an updated routing strategy. If successful, the updated alias record is returned.
", "UpdateBuild": "Updates metadata in a build record, including the build name and version. To update the metadata, specify the build ID to update and provide the new values. If successful, a build object containing the updated metadata is returned.
Learn more
Related operations
", - "UpdateFleetAttributes": "Updates fleet properties, including name and description, for a fleet. To update metadata, specify the fleet ID and the property values that you want to change. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Updates capacity settings for a fleet. Use this action to specify the number of EC2 instances (hosts) that you want this fleet to contain. Before calling this action, you may want to call DescribeEC2InstanceLimits to get the maximum capacity based on the fleet's EC2 instance type.
Specify minimum and maximum number of instances. Amazon GameLift will not change fleet capacity to values that fall outside of this range. This is particularly important when using auto-scaling (see PutScalingPolicy) to allow capacity to adjust based on player demand while imposing limits on automatic adjustments.
To update fleet capacity, specify the fleet ID and the number of instances you want the fleet to host. If successful, Amazon GameLift starts or terminates instances so that the fleet's active instance count matches the desired instance count. You can view a fleet's current capacity information by calling DescribeFleetCapacity. If the desired instance count is higher than the instance type's limit, the \"Limit Exceeded\" exception occurs.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Updates port settings for a fleet. To update settings, specify the fleet ID to be updated and list the permissions you want to update. List the permissions you want to add in InboundPermissionAuthorizations
, and permissions you want to remove in InboundPermissionRevocations
. Permissions to be removed must match existing fleet permissions. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Updates fleet properties, including name and description, for a fleet. To update metadata, specify the fleet ID and the property values that you want to change. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Update fleets:
Manage fleet actions:
Updates capacity settings for a fleet. Use this action to specify the number of EC2 instances (hosts) that you want this fleet to contain. Before calling this action, you may want to call DescribeEC2InstanceLimits to get the maximum capacity based on the fleet's EC2 instance type.
Specify minimum and maximum number of instances. Amazon GameLift will not change fleet capacity to values that fall outside of this range. This is particularly important when using auto-scaling (see PutScalingPolicy) to allow capacity to adjust based on player demand while imposing limits on automatic adjustments.
To update fleet capacity, specify the fleet ID and the number of instances you want the fleet to host. If successful, Amazon GameLift starts or terminates instances so that the fleet's active instance count matches the desired instance count. You can view a fleet's current capacity information by calling DescribeFleetCapacity. If the desired instance count is higher than the instance type's limit, the \"Limit Exceeded\" exception occurs.
Learn more
Related operations
Update fleets:
Manage fleet actions:
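A minimal sketch of the UpdateFleetCapacity call described above (assumed v0.19-style request pattern; the fleet ID and counts are placeholders):

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// updateCapacity requests ten instances while setting min/max bounds that
// both manual changes and auto-scaling must respect.
func updateCapacity(ctx context.Context, fleetID string) error {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		return err
	}
	svc := gamelift.New(cfg)

	_, err = svc.UpdateFleetCapacityRequest(&gamelift.UpdateFleetCapacityInput{
		FleetId:          aws.String(fleetID),
		DesiredInstances: aws.Int64(10), // must stay within [MinSize, MaxSize]
		MinSize:          aws.Int64(1),
		MaxSize:          aws.Int64(20), // also bounded by the instance type's limit
	}).Send(ctx)
	return err
}
```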
Updates port settings for a fleet. To update settings, specify the fleet ID to be updated and list the permissions you want to update. List the permissions you want to add in InboundPermissionAuthorizations
, and permissions you want to remove in InboundPermissionRevocations
. Permissions to be removed must match existing fleet permissions. If successful, the fleet ID for the updated fleet is returned.
Learn more
Related operations
Update fleets:
Manage fleet actions:
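A minimal sketch of the UpdateFleetPortSettings call described above (assumed v0.19-style request pattern; ports and ranges are placeholders):

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// updatePorts opens a UDP range for game traffic and revokes an old TCP
// permission. Revocations must exactly match existing fleet permissions.
func updatePorts(ctx context.Context, fleetID string) error {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		return err
	}
	svc := gamelift.New(cfg)

	_, err = svc.UpdateFleetPortSettingsRequest(&gamelift.UpdateFleetPortSettingsInput{
		FleetId: aws.String(fleetID),
		InboundPermissionAuthorizations: []gamelift.IpPermission{{
			FromPort: aws.Int64(7777),
			ToPort:   aws.Int64(7877),
			IpRange:  aws.String("0.0.0.0/0"),
			Protocol: gamelift.IpProtocolUdp,
		}},
		InboundPermissionRevocations: []gamelift.IpPermission{{
			FromPort: aws.Int64(1935),
			ToPort:   aws.Int64(1935),
			IpRange:  aws.String("0.0.0.0/0"),
			Protocol: gamelift.IpProtocolTcp,
		}},
	}).Send(ctx)
	return err
}
```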
Updates game session properties. This includes the session name, maximum player count, protection policy, which controls whether or not an active game session can be terminated during a scale-down event, and the player session creation policy, which controls whether or not new players can join the session. To update a game session, specify the game session ID and the values you want to change. If successful, an updated GameSession object is returned.
Game session placements
Updates settings for a game session queue, which determines how new game session requests in the queue are processed. To update settings, specify the queue name to be updated and provide the new settings. When updating destinations, provide a complete list of destinations.
", "UpdateMatchmakingConfiguration": "Updates settings for a FlexMatch matchmaking configuration. These changes affect all matches and game sessions that are created after the update. To update settings, specify the configuration name to be updated and provide the new settings.
Learn more
Related operations
Updates the current run-time configuration for the specified fleet, which tells Amazon GameLift how to launch server processes on instances in the fleet. You can update a fleet's run-time configuration at any time after the fleet is created; it does not need to be in an ACTIVE
status.
To update run-time configuration, specify the fleet ID and provide a RuntimeConfiguration
object with an updated set of server process configurations.
Each instance in an Amazon GameLift fleet checks regularly for an updated run-time configuration and changes how it launches server processes to comply with the latest version. Existing server processes are not affected by the update; run-time configuration changes are applied gradually as existing processes shut down and new processes are launched during Amazon GameLift's normal process recycling activity.
Learn more
Related operations
Describe fleets:
Update fleets:
Manage fleet actions:
Updates the current runtime configuration for the specified fleet, which tells Amazon GameLift how to launch server processes on instances in the fleet. You can update a fleet's runtime configuration at any time after the fleet is created; it does not need to be in an ACTIVE
status.
To update runtime configuration, specify the fleet ID and provide a RuntimeConfiguration
object with an updated set of server process configurations.
Each instance in an Amazon GameLift fleet checks regularly for an updated runtime configuration and changes how it launches server processes to comply with the latest version. Existing server processes are not affected by the update; runtime configuration changes are applied gradually as existing processes shut down and new processes are launched during Amazon GameLift's normal process recycling activity.
Learn more
Related operations
Update fleets:
Manage fleet actions:
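A minimal sketch of the UpdateRuntimeConfiguration call described above (assumed v0.19-style request pattern; the launch path and parameters are placeholders):

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// updateRuntime replaces the fleet's server process configuration. Existing
// processes finish out; instances adopt the new configuration as processes
// are recycled, so no fleet downtime is required.
func updateRuntime(ctx context.Context, fleetID string) error {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		return err
	}
	svc := gamelift.New(cfg)

	_, err = svc.UpdateRuntimeConfigurationRequest(&gamelift.UpdateRuntimeConfigurationInput{
		FleetId: aws.String(fleetID),
		RuntimeConfiguration: &gamelift.RuntimeConfiguration{
			ServerProcesses: []gamelift.ServerProcess{{
				LaunchPath:           aws.String("/local/game/MyGameServer"), // placeholder
				Parameters:           aws.String("-port 7777"),
				ConcurrentExecutions: aws.Int64(2),
			}},
		},
	}).Send(ctx)
	return err
}
```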
Updates Realtime script metadata and content.
To update script metadata, specify the script ID and provide updated name and/or version values.
To update script content, provide an updated zip file by pointing to either a local file or an Amazon S3 bucket location. You can use either method regardless of how the original script was uploaded. Use the Version parameter to track updates to the script.
If the call is successful, the updated metadata is stored in the script record and a revised script is uploaded to the Amazon GameLift service. Once the script is updated and acquired by a fleet instance, the new version is used for all new game sessions.
Learn more
Amazon GameLift Realtime Servers
Related operations
", "ValidateMatchmakingRuleSet": "Validates the syntax of a matchmaking rule or rule set. This operation checks that the rule set is using syntactically correct JSON and that it conforms to allowed property expressions. To validate syntax, provide a rule set JSON string.
Learn more
Related operations
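A minimal sketch of a ValidateMatchmakingRuleSet call (assumed v0.19-style request pattern; the rule set body is a trivial illustrative example, not a recommended configuration):

```go
package example

import (
	"context"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// validateRuleSet checks a candidate FlexMatch rule set before saving it
// to a matchmaking configuration.
func validateRuleSet(ctx context.Context) (bool, error) {
	const ruleSetJSON = `{
	  "name": "two-player-match",
	  "ruleLanguageVersion": "1.0",
	  "teams": [{ "name": "players", "minPlayers": 2, "maxPlayers": 2 }]
	}`

	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		return false, err
	}
	svc := gamelift.New(cfg)

	resp, err := svc.ValidateMatchmakingRuleSetRequest(&gamelift.ValidateMatchmakingRuleSetInput{
		RuleSetBody: aws.String(ruleSetJSON),
	}).Send(ctx)
	if err != nil {
		return false, err
	}
	return resp.Valid != nil && *resp.Valid, nil
}
```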
Properties describing a fleet alias.
", + "base": "Properties that describe an alias resource.
", "refs": { "AliasList$member": null, - "CreateAliasOutput$Alias": "Object that describes the newly created alias record.
", - "DescribeAliasOutput$Alias": "Object that contains the requested alias.
", - "UpdateAliasOutput$Alias": "Object that contains the updated alias configuration.
" + "CreateAliasOutput$Alias": "The newly created alias resource.
", + "DescribeAliasOutput$Alias": "The requested alias resource.
", + "UpdateAliasOutput$Alias": "The updated alias resource.
" } }, "AliasId": { "base": null, "refs": { - "Alias$AliasId": "Unique identifier for an alias; alias IDs are unique within a region.
", - "CreateGameSessionInput$AliasId": "Unique identifier for an alias associated with the fleet to create a game session in. Each request must reference either a fleet ID or alias ID, but not both.
", - "DeleteAliasInput$AliasId": "Unique identifier for a fleet alias. Specify the alias you want to delete.
", - "DescribeAliasInput$AliasId": "Unique identifier for a fleet alias. Specify the alias you want to retrieve.
", - "DescribeGameSessionDetailsInput$AliasId": "Unique identifier for an alias associated with the fleet to retrieve all game sessions for.
", - "DescribeGameSessionsInput$AliasId": "Unique identifier for an alias associated with the fleet to retrieve all game sessions for.
", - "ResolveAliasInput$AliasId": "Unique identifier for the alias you want to resolve.
", - "SearchGameSessionsInput$AliasId": "Unique identifier for an alias associated with the fleet to search for active game sessions. Each request must reference either a fleet ID or alias ID, but not both.
", - "UpdateAliasInput$AliasId": "Unique identifier for a fleet alias. Specify the alias you want to update.
" + "Alias$AliasId": "A unique identifier for an alias. Alias IDs are unique within a Region.
", + "CreateGameSessionInput$AliasId": "A unique identifier for an alias associated with the fleet to create a game session in. You can use either the alias ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", + "DeleteAliasInput$AliasId": "A unique identifier of the alias that you want to delete. You can use either the alias ID or ARN value.
", + "DescribeAliasInput$AliasId": "The unique identifier for the fleet alias that you want to retrieve. You can use either the alias ID or ARN value.
", + "DescribeGameSessionDetailsInput$AliasId": "A unique identifier for an alias associated with the fleet to retrieve all game sessions for. You can use either the alias ID or ARN value.
", + "DescribeGameSessionsInput$AliasId": "A unique identifier for an alias associated with the fleet to retrieve all game sessions for. You can use either the alias ID or ARN value.
", + "ResolveAliasInput$AliasId": "The unique identifier of the alias that you want to retrieve a fleet ID for. You can use either the alias ID or ARN value.
", + "SearchGameSessionsInput$AliasId": "A unique identifier for an alias associated with the fleet to search for active game sessions. You can use either the alias ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", + "UpdateAliasInput$AliasId": "A unique identifier for the alias that you want to update. You can use either the alias ID or ARN value.
" } }, "AliasList": { "base": null, "refs": { - "ListAliasesOutput$Aliases": "Collection of alias records that match the list request.
" + "ListAliasesOutput$Aliases": "A collection of alias resources that match the request parameters.
" + } + }, + "AmazonResourceName": { + "base": null, + "refs": { + "ListTagsForResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) that is assigned to and uniquely identifies the GameLift resource that you want to retrieve tags for. GameLift resource ARNs are included in the data object for the resource, which can be retrieved by calling a List or Describe action for the resource type.
", + "TagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) that is assigned to and uniquely identifies the GameLift resource that you want to assign tags to. GameLift resource ARNs are included in the data object for the resource, which can be retrieved by calling a List or Describe action for the resource type.
", + "UntagResourceRequest$ResourceARN": "The Amazon Resource Name (ARN) that is assigned to and uniquely identifies the GameLift resource that you want to remove tags from. GameLift resource ARNs are included in the data object for the resource, which can be retrieved by calling a List or Describe action for the resource type.
" } }, "ArnStringModel": { "base": null, "refs": { - "Alias$AliasArn": "Unique identifier for an alias; alias ARNs are unique across all regions.
", - "CreatePlayerSessionInput$GameSessionId": "Unique identifier for the game session to add a player to.
", - "CreatePlayerSessionsInput$GameSessionId": "Unique identifier for the game session to add players to.
", - "DescribeGameSessionDetailsInput$GameSessionId": "Unique identifier for the game session to retrieve.
", - "DescribeGameSessionsInput$GameSessionId": "Unique identifier for the game session to retrieve. You can use either a GameSessionId
or GameSessionArn
value.
Unique identifier for the game session to retrieve player sessions for.
", - "FleetAttributes$FleetArn": "Identifier for a fleet that is unique across all regions.
", - "GameSessionConnectionInfo$GameSessionArn": "Amazon Resource Name (ARN) that is assigned to a game session and uniquely identifies it.
", - "GameSessionQueue$GameSessionQueueArn": "Amazon Resource Name (ARN) that is assigned to a game session queue and uniquely identifies it. Format is arn:aws:gamelift:<region>:<aws account>:gamesessionqueue/<queue name>
.
Amazon Resource Name (ARN) assigned to fleet or fleet alias. ARNs, which include a fleet ID or alias ID and a region name, provide a unique identifier across all regions.
", - "GetGameSessionLogUrlInput$GameSessionId": "Unique identifier for the game session to get logs for.
", + "Alias$AliasArn": "Amazon Resource Name (ARN) that is assigned to a GameLift alias resource and uniquely identifies it. ARNs are unique across all Regions.. In a GameLift alias ARN, the resource ID matches the alias ID value.
", + "CreatePlayerSessionInput$GameSessionId": "A unique identifier for the game session to add a player to.
", + "CreatePlayerSessionsInput$GameSessionId": "A unique identifier for the game session to add players to.
", + "DescribeGameSessionDetailsInput$GameSessionId": "A unique identifier for the game session to retrieve.
", + "DescribeGameSessionsInput$GameSessionId": "A unique identifier for the game session to retrieve.
", + "DescribePlayerSessionsInput$GameSessionId": "A unique identifier for the game session to retrieve player sessions for.
", + "FleetAttributes$FleetArn": "The Amazon Resource Name (ARN) that is assigned to a GameLift fleet resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift fleet ARN, the resource ID matches the FleetId value.
", + "GameSession$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet that this game session is running on.
", + "GameSessionConnectionInfo$GameSessionArn": "Amazon Resource Name (ARN) that is assigned to a game session and uniquely identifies it.
", + "GameSessionQueue$GameSessionQueueArn": "Amazon Resource Name (ARN) that is assigned to a GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift game session queue ARN, the resource ID matches the Name value.
", + "GameSessionQueueDestination$DestinationArn": "The Amazon Resource Name (ARN) that is assigned to fleet or fleet alias. ARNs, which include a fleet ID or alias ID and a Region name, provide a unique identifier across all Regions.
", + "GetGameSessionLogUrlInput$GameSessionId": "A unique identifier for the game session to get logs for.
", + "PlayerSession$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet that the player's game session is running on.
", "QueueArnsList$member": null, - "StartMatchBackfillInput$GameSessionArn": "Amazon Resource Name (ARN) that is assigned to a game session and uniquely identifies it.
", - "UpdateGameSessionInput$GameSessionId": "Unique identifier for the game session to update.
" + "ResolveAliasOutput$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet resource that this alias points to.
", + "StartMatchBackfillInput$GameSessionArn": "Amazon Resource Name (ARN) that is assigned to a game session and uniquely identifies it. This is the same as the game session ID.
", + "UpdateGameSessionInput$GameSessionId": "A unique identifier for the game session to update.
", + "VpcPeeringConnection$FleetArn": "The Amazon Resource Name (ARN) associated with the GameLift fleet resource for this connection.
" } }, "AttributeValue": { - "base": "Values for use in Player attribute key:value pairs. This object lets you specify an attribute value using any of the valid data types: string, number, string array, or data map. Each AttributeValue
object can use only one of the available properties.
Values for use in Player attribute key-value pairs. This object lets you specify an attribute value using any of the valid data types: string, number, string array, or data map. Each AttributeValue
object can use only one of the available properties.
Method used to backfill game sessions created with this matchmaking configuration. Specify MANUAL when your game manages backfill requests manually or does not use the match backfill feature. Specify AUTOMATIC to have GameLift create a StartMatchBackfill request whenever a game session has one or more open slots. Learn more about manual and automatic backfill in Backfill Existing Games with FlexMatch.
", - "MatchmakingConfiguration$BackfillMode": "Method used to backfill game sessions created with this matchmaking configuration. MANUAL indicates that the game makes backfill requests or does not use the match backfill feature. AUTOMATIC indicates that GameLift creates StartMatchBackfill requests whenever a game session has one or more open slots. Learn more about manual and automatic backfill in Backfill Existing Games with FlexMatch.
", - "UpdateMatchmakingConfigurationInput$BackfillMode": "Method used to backfill game sessions created with this matchmaking configuration. Specify MANUAL when your game manages backfill requests manually or does not use the match backfill feature. Specify AUTOMATIC to have GameLift create a StartMatchBackfill request whenever a game session has one or more open slots. Learn more about manual and automatic backfill in Backfill Existing Games with FlexMatch.
" + "CreateMatchmakingConfigurationInput$BackfillMode": "The method used to backfill game sessions that are created with this matchmaking configuration. Specify MANUAL
when your game manages backfill requests manually or does not use the match backfill feature. Specify AUTOMATIC
to have GameLift create a StartMatchBackfill request whenever a game session has one or more open slots. Learn more about manual and automatic backfill in Backfill Existing Games with FlexMatch.
The method used to backfill game sessions created with this matchmaking configuration. MANUAL indicates that the game makes backfill requests or does not use the match backfill feature. AUTOMATIC indicates that GameLift creates StartMatchBackfill requests whenever a game session has one or more open slots. Learn more about manual and automatic backfill in Backfill Existing Games with FlexMatch.
", + "UpdateMatchmakingConfigurationInput$BackfillMode": "The method that is used to backfill game sessions created with this matchmaking configuration. Specify MANUAL when your game manages backfill requests manually or does not use the match backfill feature. Specify AUTOMATIC to have GameLift create a StartMatchBackfill request whenever a game session has one or more open slots. Learn more about manual and automatic backfill in Backfill Existing Games with FlexMatch.
" } }, "BooleanModel": { "base": null, "refs": { - "CreateMatchmakingConfigurationInput$AcceptanceRequired": "Flag that determines whether a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.
", - "MatchmakingConfiguration$AcceptanceRequired": "Flag that determines whether a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.
", - "UpdateMatchmakingConfigurationInput$AcceptanceRequired": "Flag that determines whether a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.
", - "ValidateMatchmakingRuleSetOutput$Valid": "Response indicating whether the rule set is valid.
" + "CreateMatchmakingConfigurationInput$AcceptanceRequired": "A flag that determines whether a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE
.
A flag that indicates whether a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.
", + "UpdateMatchmakingConfigurationInput$AcceptanceRequired": "A flag that indicates whether a match that was created with this configuration must be accepted by the matched players. To require acceptance, set to TRUE.
", + "ValidateMatchmakingRuleSetOutput$Valid": "A response indicating whether the rule set is valid.
" } }, "Build": { "base": "Properties describing a custom game build.
Related operations
", "refs": { "BuildList$member": null, - "CreateBuildOutput$Build": "The newly created build record, including a unique build ID and status.
", + "CreateBuildOutput$Build": "The newly created build record, including a unique build IDs and status.
", "DescribeBuildOutput$Build": "Set of properties describing the requested build.
", - "UpdateBuildOutput$Build": "Object that contains the updated build record.
" + "UpdateBuildOutput$Build": "The updated build record.
" + } + }, + "BuildArn": { + "base": null, + "refs": { + "Build$BuildArn": "Amazon Resource Name (ARN) that is assigned to a GameLift build resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift build ARN, the resource ID matches the BuildId value.
", + "FleetAttributes$BuildArn": "The Amazon Resource Name (ARN) associated with the GameLift build resource that is deployed on instances in this fleet. In a GameLift build ARN, the resource ID matches the BuildId value.
" } }, "BuildId": { "base": null, "refs": { - "Build$BuildId": "Unique identifier for a build.
", - "CreateFleetInput$BuildId": "Unique identifier for a build to be deployed on the new fleet. The custom game server build must have been successfully uploaded to Amazon GameLift and be in a READY
status. This fleet setting cannot be changed once the fleet is created.
Unique identifier for a build to delete.
", - "DescribeBuildInput$BuildId": "Unique identifier for a build to retrieve properties for.
", - "FleetAttributes$BuildId": "Unique identifier for a build.
", - "ListFleetsInput$BuildId": "Unique identifier for a build to return fleets for. Use this parameter to return only fleets using the specified build. To retrieve all fleets, leave this parameter empty.
", - "RequestUploadCredentialsInput$BuildId": "Unique identifier for a build to get credentials for.
", - "UpdateBuildInput$BuildId": "Unique identifier for a build to update.
" + "Build$BuildId": "A unique identifier for a build.
", + "CreateFleetInput$BuildId": "A unique identifier for a build to be deployed on the new fleet. You can use either the build ID or ARN value. The custom game server build must have been successfully uploaded to Amazon GameLift and be in a READY
status. This fleet setting cannot be changed once the fleet is created.
A unique identifier for a build to delete. You can use either the build ID or ARN value.
", + "DescribeBuildInput$BuildId": "A unique identifier for a build to retrieve properties for. You can use either the build ID or ARN value.
", + "FleetAttributes$BuildId": "A unique identifier for a build.
", + "ListFleetsInput$BuildId": "A unique identifier for a build to return fleets for. Use this parameter to return only fleets using the specified build. Use either the build ID or ARN value.To retrieve all fleets, leave this parameter empty.
", + "RequestUploadCredentialsInput$BuildId": "A unique identifier for a build to get credentials for. You can use either the build ID or ARN value.
", + "UpdateBuildInput$BuildId": "A unique identifier for a build to update. You can use either the build ID or ARN value.
" } }, "BuildList": { "base": null, "refs": { - "ListBuildsOutput$Builds": "Collection of build records that match the request.
" + "ListBuildsOutput$Builds": "A collection of build records that match the request.
" } }, "BuildStatus": { @@ -207,16 +229,16 @@ } }, "CertificateConfiguration": { - "base": null, + "base": "Information about the use of a TLS/SSL certificate for a fleet. TLS certificate generation is enabled at the fleet level, with one certificate generated for the fleet. When this feature is enabled, the certificate can be retrieved using the GameLift Server SDK call GetInstanceCertificate
. All instances in a fleet share the same certificate.
Indicates whether to generate a TLS/SSL certificate for the new fleet. TLS certificates are used for encrypting traffic between game clients and game servers running on GameLift. If this parameter is not specified, the default value, DISABLED, is used. This fleet setting cannot be changed once the fleet is created. Learn more at Securing Client/Server Communication.
Note: This feature requires the AWS Certificate Manager (ACM) service, which is available in the AWS global partition but not in all other partitions. When working in a partition that does not support this feature, a request for a new fleet with certificate generation results in a 4xx unsupported Region error.
Valid values include:
GENERATED - Generate a TLS/SSL certificate for this fleet.
DISABLED - (default) Do not generate a TLS/SSL certificate for this fleet.
Indicates whether a TLS/SSL certificate was generated for the fleet.
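A sketch of the relevant portion of a CreateFleet request that opts in to certificate generation (assumed v0.19-style generated types; the name, build ID, instance type, and launch path are placeholders, and the certificate choice is fixed at fleet creation):

```go
package example

import (
	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// newTLSFleetInput builds a CreateFleet request that generates a TLS/SSL
// certificate for the fleet. The setting cannot be changed after creation.
func newTLSFleetInput() *gamelift.CreateFleetInput {
	return &gamelift.CreateFleetInput{
		Name:            aws.String("tls-enabled-fleet"),                      // placeholder
		BuildId:         aws.String("build-1111aaaa-22bb-33cc-44dd-5555eeee66ff"), // placeholder
		EC2InstanceType: gamelift.EC2InstanceTypeC4Large,
		RuntimeConfiguration: &gamelift.RuntimeConfiguration{
			ServerProcesses: []gamelift.ServerProcess{{
				LaunchPath:           aws.String("/local/game/MyGameServer"),
				ConcurrentExecutions: aws.Int64(1),
			}},
		},
		CertificateConfiguration: &gamelift.CertificateConfiguration{
			CertificateType: gamelift.CertificateTypeGenerated, // default is DISABLED
		},
	}
}
```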
" } }, "CertificateType": { "base": null, "refs": { - "CertificateConfiguration$CertificateType": null + "CertificateConfiguration$CertificateType": "Indicates whether a TLS/SSL certificate was generated for a fleet.
" } }, "ComparisonOperatorType": { @@ -375,7 +397,7 @@ } }, "DeleteGameSessionQueueInput": { - "base": "
Represents the input for a request action.
", + "base": "Represents the input for a request action.
", "refs": { } }, @@ -669,11 +691,11 @@ "DnsName": { "base": null, "refs": { - "GameSession$DnsName": null, - "GameSessionConnectionInfo$DnsName": null, - "GameSessionPlacement$DnsName": null, - "Instance$DnsName": null, - "PlayerSession$DnsName": null + "GameSession$DnsName": "DNS identifier assigned to the instance that is running the game session. Values have the following format:
TLS-enabled fleets: <unique identifier>.<region identifier>.amazongamelift.com
.
Non-TLS-enabled fleets: ec2-<unique identifier>.compute.amazonaws.com
. (See Amazon EC2 Instance IP Addressing.)
When connecting to a game session that is running on a TLS-enabled fleet, you must use the DNS name, not the IP address.
", + "GameSessionConnectionInfo$DnsName": "DNS identifier assigned to the instance that is running the game session. Values have the following format:
TLS-enabled fleets: <unique identifier>.<region identifier>.amazongamelift.com
.
Non-TLS-enabled fleets: ec2-<unique identifier>.compute.amazonaws.com
. (See Amazon EC2 Instance IP Addressing.)
When connecting to a game session that is running on a TLS-enabled fleet, you must use the DNS name, not the IP address.
", + "GameSessionPlacement$DnsName": "DNS identifier assigned to the instance that is running the game session. Values have the following format:
TLS-enabled fleets: <unique identifier>.<region identifier>.amazongamelift.com
.
Non-TLS-enabled fleets: ec2-<unique identifier>.compute.amazonaws.com
. (See Amazon EC2 Instance IP Addressing.)
When connecting to a game session that is running on a TLS-enabled fleet, you must use the DNS name, not the IP address.
", + "Instance$DnsName": "DNS identifier assigned to the instance that is running the game session. Values have the following format:
TLS-enabled fleets: <unique identifier>.<region identifier>.amazongamelift.com
.
Non-TLS-enabled fleets: ec2-<unique identifier>.compute.amazonaws.com
. (See Amazon EC2 Instance IP Addressing.)
When connecting to a game session that is running on a TLS-enabled fleet, you must use the DNS name, not the IP address.
", + "PlayerSession$DnsName": "DNS identifier assigned to the instance that is running the game session. Values have the following format:
TLS-enabled fleets: <unique identifier>.<region identifier>.amazongamelift.com
.
Non-TLS-enabled fleets: ec2-<unique identifier>.compute.amazonaws.com
. (See Amazon EC2 Instance IP Addressing.)
When connecting to a game session that is running on a TLS-enabled fleet, you must use the DNS name, not the IP address.
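A small sketch of how a client might apply this rule when choosing a connection endpoint (field names follow the generated gamelift package; the formatting helper is illustrative only):

```go
package example

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/service/gamelift"
)

// endpointFor picks the address a game client should connect to: the DNS
// name when present (required for TLS-enabled fleets), otherwise the raw
// IP address.
func endpointFor(gs gamelift.GameSession) string {
	host := ""
	if gs.IpAddress != nil {
		host = *gs.IpAddress
	}
	if gs.DnsName != nil && *gs.DnsName != "" {
		host = *gs.DnsName // TLS-enabled fleets must use the DNS name
	}
	port := int64(0)
	if gs.Port != nil {
		port = *gs.Port
	}
	return fmt.Sprintf("%s:%d", host, port)
}
```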
" } }, "Double": { @@ -692,13 +714,13 @@ } }, "EC2InstanceCounts": { - "base": "Current status of fleet capacity. The number of active instances should match or be in the process of matching the number of desired instances. Pending and terminating counts are non-zero only if fleet capacity is adjusting to an UpdateFleetCapacity request, or if access to resources is temporarily affected.
Describe fleets:
Update fleets:
Manage fleet actions:
Current status of fleet capacity. The number of active instances should match or be in the process of matching the number of desired instances. Pending and terminating counts are non-zero only if fleet capacity is adjusting to an UpdateFleetCapacity request, or if access to resources is temporarily affected.
Manage fleet actions:
Current status of fleet capacity.
" } }, "EC2InstanceLimit": { - "base": "Maximum number of instances allowed based on the Amazon Elastic Compute Cloud (Amazon EC2) instance type. Instance limits can be retrieved by calling DescribeEC2InstanceLimits.
", + "base": "The maximum number of instances allowed based on the Amazon Elastic Compute Cloud (Amazon EC2) instance type. Instance limits can be retrieved by calling DescribeEC2InstanceLimits.
", "refs": { "EC2InstanceLimitList$member": null } @@ -706,13 +728,13 @@ "EC2InstanceLimitList": { "base": null, "refs": { - "DescribeEC2InstanceLimitsOutput$EC2InstanceLimits": "Object that contains the maximum number of instances for the specified instance type.
" + "DescribeEC2InstanceLimitsOutput$EC2InstanceLimits": "The maximum number of instances for the specified instance type.
" } }, "EC2InstanceType": { "base": null, "refs": { - "CreateFleetInput$EC2InstanceType": "Name of an EC2 instance type that is supported in Amazon GameLift. A fleet instance type determines the computing resources of each instance in the fleet, including CPU, memory, storage, and networking capacity. Amazon GameLift supports the following EC2 instance types. See Amazon EC2 Instance Types for detailed descriptions.
", + "CreateFleetInput$EC2InstanceType": "The name of an EC2 instance type that is supported in Amazon GameLift. A fleet instance type determines the computing resources of each instance in the fleet, including CPU, memory, storage, and networking capacity. Amazon GameLift supports the following EC2 instance types. See Amazon EC2 Instance Types for detailed descriptions.
", "DescribeEC2InstanceLimitsInput$EC2InstanceType": "Name of an EC2 instance type that is supported in Amazon GameLift. A fleet instance type determines the computing resources of each instance in the fleet, including CPU, memory, storage, and networking capacity. Amazon GameLift supports the following EC2 instance types. See Amazon EC2 Instance Types for detailed descriptions. Leave this parameter blank to retrieve limits for all types.
", "EC2InstanceLimit$EC2InstanceType": "Name of an EC2 instance type that is supported in Amazon GameLift. A fleet instance type determines the computing resources of each instance in the fleet, including CPU, memory, storage, and networking capacity. Amazon GameLift supports the following EC2 instance types. See Amazon EC2 Instance Types for detailed descriptions.
", "FleetAttributes$InstanceType": "EC2 instance type indicating the computing resources of each instance in the fleet, including CPU, memory, storage, and networking capacity. See Amazon EC2 Instance Types for detailed descriptions.
", @@ -729,13 +751,13 @@ "EventCode": { "base": null, "refs": { - "Event$EventCode": "Type of event being logged. The following events are currently in use:
Fleet creation events:
FLEET_CREATED -- A fleet record was successfully created with a status of NEW
. Event messaging includes the fleet ID.
FLEET_STATE_DOWNLOADING -- Fleet status changed from NEW
to DOWNLOADING
. The compressed build has started downloading to a fleet instance for installation.
FLEET_BINARY_DOWNLOAD_FAILED -- The build failed to download to the fleet instance.
FLEET_CREATION_EXTRACTING_BUILD – The game server build was successfully downloaded to an instance, and the build files are now being extracted from the uploaded build and saved to an instance. Failure at this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage display a list of the files that are extracted and saved on the instance. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_RUNNING_INSTALLER – The game server build files were successfully extracted, and Amazon GameLift is now running the build's install script (if one is included). Failure in this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage list the installation steps and whether or not the install completed successfully. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_VALIDATING_RUNTIME_CONFIG -- The build process was successful, and Amazon GameLift is now verifying that the game server launch paths, which are specified in the fleet's run-time configuration, exist. If any listed launch path exists, Amazon GameLift tries to launch a game server process and waits for the process to report ready. Failures in this stage prevent a fleet from moving to ACTIVE
status. Logs for this stage list the launch paths in the run-time configuration and indicate whether each is found. Access the logs by using the URL in PreSignedLogUrl.
FLEET_STATE_VALIDATING -- Fleet status changed from DOWNLOADING
to VALIDATING
.
FLEET_VALIDATION_LAUNCH_PATH_NOT_FOUND -- Validation of the run-time configuration failed because the executable specified in a launch path does not exist on the instance.
FLEET_STATE_BUILDING -- Fleet status changed from VALIDATING
to BUILDING
.
FLEET_VALIDATION_EXECUTABLE_RUNTIME_FAILURE -- Validation of the run-time configuration failed because the executable specified in a launch path failed to run on the fleet instance.
FLEET_STATE_ACTIVATING -- Fleet status changed from BUILDING
to ACTIVATING
.
FLEET_ACTIVATION_FAILED - The fleet failed to successfully complete one of the steps in the fleet activation process. This event code indicates that the game build was successfully downloaded to a fleet instance, built, and validated, but was not able to start a server process. A possible reason for failure is that the game server is not reporting \"process ready\" to the Amazon GameLift service.
FLEET_STATE_ACTIVE -- The fleet's status changed from ACTIVATING
to ACTIVE
. The fleet is now ready to host game sessions.
VPC peering events:
FLEET_VPC_PEERING_SUCCEEDED -- A VPC peering connection has been established between the VPC for an Amazon GameLift fleet and a VPC in your AWS account.
FLEET_VPC_PEERING_FAILED -- A requested VPC peering connection has failed. Event details and status information (see DescribeVpcPeeringConnections) provide additional detail. A common reason for peering failure is that the two VPCs have overlapping CIDR blocks of IPv4 addresses. To resolve this, change the CIDR block for the VPC in your AWS account. For more information on VPC peering failures, see https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html
FLEET_VPC_PEERING_DELETED -- A VPC peering connection has been successfully deleted.
Spot instance events:
INSTANCE_INTERRUPTED -- A spot instance was interrupted by EC2 with a two-minute notification.
Other fleet events:
FLEET_SCALING_EVENT -- A change was made to the fleet's capacity settings (desired instances, minimum/maximum scaling limits). Event messaging includes the new capacity settings.
FLEET_NEW_GAME_SESSION_PROTECTION_POLICY_UPDATED -- A change was made to the fleet's game session protection policy setting. Event messaging includes both the old and new policy setting.
FLEET_DELETED -- A request to delete a fleet was initiated.
GENERIC_EVENT -- An unspecified event has occurred.
The type of event being logged.
Fleet creation events (ordered by fleet creation activity):
FLEET_CREATED -- A fleet record was successfully created with a status of NEW
. Event messaging includes the fleet ID.
FLEET_STATE_DOWNLOADING -- Fleet status changed from NEW
to DOWNLOADING
. The compressed build has started downloading to a fleet instance for installation.
FLEET_BINARY_DOWNLOAD_FAILED -- The build failed to download to the fleet instance.
FLEET_CREATION_EXTRACTING_BUILD – The game server build was successfully downloaded to an instance, and the build files are now being extracted from the uploaded build and saved to an instance. Failure at this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage display a list of the files that are extracted and saved on the instance. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_RUNNING_INSTALLER – The game server build files were successfully extracted, and Amazon GameLift is now running the build's install script (if one is included). Failure in this stage prevents a fleet from moving to ACTIVE
status. Logs for this stage list the installation steps and whether or not the install completed successfully. Access the logs by using the URL in PreSignedLogUrl.
FLEET_CREATION_VALIDATING_RUNTIME_CONFIG -- The build process was successful, and Amazon GameLift is now verifying that the game server launch paths, which are specified in the fleet's runtime configuration, exist. If any listed launch path exists, Amazon GameLift tries to launch a game server process and waits for the process to report ready. Failures in this stage prevent a fleet from moving to ACTIVE
status. Logs for this stage list the launch paths in the runtime configuration and indicate whether each is found. Access the logs by using the URL in PreSignedLogUrl.
FLEET_STATE_VALIDATING -- Fleet status changed from DOWNLOADING
to VALIDATING
.
FLEET_VALIDATION_LAUNCH_PATH_NOT_FOUND -- Validation of the runtime configuration failed because the executable specified in a launch path does not exist on the instance.
FLEET_STATE_BUILDING -- Fleet status changed from VALIDATING
to BUILDING
.
FLEET_VALIDATION_EXECUTABLE_RUNTIME_FAILURE -- Validation of the runtime configuration failed because the executable specified in a launch path failed to run on the fleet instance.
FLEET_STATE_ACTIVATING -- Fleet status changed from BUILDING
to ACTIVATING
.
FLEET_ACTIVATION_FAILED - The fleet failed to successfully complete one of the steps in the fleet activation process. This event code indicates that the game build was successfully downloaded to a fleet instance, built, and validated, but was not able to start a server process. Learn more at Debug Fleet Creation Issues
FLEET_STATE_ACTIVE -- The fleet's status changed from ACTIVATING
to ACTIVE
. The fleet is now ready to host game sessions.
VPC peering events:
FLEET_VPC_PEERING_SUCCEEDED -- A VPC peering connection has been established between the VPC for an Amazon GameLift fleet and a VPC in your AWS account.
FLEET_VPC_PEERING_FAILED -- A requested VPC peering connection has failed. Event details and status information (see DescribeVpcPeeringConnections) provide additional detail. A common reason for peering failure is that the two VPCs have overlapping CIDR blocks of IPv4 addresses. To resolve this, change the CIDR block for the VPC in your AWS account. For more information on VPC peering failures, see https://docs.aws.amazon.com/AmazonVPC/latest/PeeringGuide/invalid-peering-configurations.html
FLEET_VPC_PEERING_DELETED -- A VPC peering connection has been successfully deleted.
Spot instance events:
INSTANCE_INTERRUPTED -- A spot instance was interrupted by EC2 with a two-minute notification.
Other fleet events:
FLEET_SCALING_EVENT -- A change was made to the fleet's capacity settings (desired instances, minimum/maximum scaling limits). Event messaging includes the new capacity settings.
FLEET_NEW_GAME_SESSION_PROTECTION_POLICY_UPDATED -- A change was made to the fleet's game session protection policy setting. Event messaging includes both the old and new policy setting.
FLEET_DELETED -- A request to delete a fleet was initiated.
GENERIC_EVENT -- An unspecified event has occurred.
Collection of objects containing event log entries for the specified fleet.
" + "DescribeFleetEventsOutput$Events": "A collection of objects containing event log entries for the specified fleet.
" } }, "FleetAction": { @@ -753,7 +775,7 @@ } }, "FleetAttributes": { - "base": "General properties describing a fleet.
Describe fleets:
Update fleets:
Manage fleet actions:
General properties describing a fleet.
Manage fleet actions:
Properties for the newly created fleet.
", "FleetAttributesList$member": null @@ -762,11 +784,11 @@ "FleetAttributesList": { "base": null, "refs": { - "DescribeFleetAttributesOutput$FleetAttributes": "Collection of objects containing attribute metadata for each requested fleet ID.
" + "DescribeFleetAttributesOutput$FleetAttributes": "A collection of objects containing attribute metadata for each requested fleet ID.
" } }, "FleetCapacity": { - "base": "Information about the fleet's capacity. Fleet capacity is measured in EC2 instances. By default, new fleets have a capacity of one instance, but can be updated as needed. The maximum number of instances for a fleet is determined by the fleet's instance type.
Describe fleets:
Update fleets:
Manage fleet actions:
Information about the fleet's capacity. Fleet capacity is measured in EC2 instances. By default, new fleets have a capacity of one instance, but can be updated as needed. The maximum number of instances for a fleet is determined by the fleet's instance type.
Manage fleet actions:
Collection of objects containing capacity information for each requested fleet ID. Leave this parameter empty to retrieve capacity information for all fleets.
" + "DescribeFleetCapacityOutput$FleetCapacity": "A collection of objects containing capacity information for each requested fleet ID. Leave this parameter empty to retrieve capacity information for all fleets.
" } }, "FleetId": { "base": null, "refs": { - "CreateGameSessionInput$FleetId": "Unique identifier for a fleet to create a game session in. Each request must reference either a fleet ID or alias ID, but not both.
", - "CreateVpcPeeringConnectionInput$FleetId": "Unique identifier for a fleet. This tells Amazon GameLift which GameLift VPC to peer with.
", - "DeleteFleetInput$FleetId": "Unique identifier for a fleet to be deleted.
", - "DeleteScalingPolicyInput$FleetId": "Unique identifier for a fleet to be deleted.
", - "DeleteVpcPeeringConnectionInput$FleetId": "Unique identifier for a fleet. This value must match the fleet ID referenced in the VPC peering connection record.
", - "DescribeFleetEventsInput$FleetId": "Unique identifier for a fleet to get event logs for.
", - "DescribeFleetPortSettingsInput$FleetId": "Unique identifier for a fleet to retrieve port settings for.
", - "DescribeGameSessionDetailsInput$FleetId": "Unique identifier for a fleet to retrieve all game sessions active on the fleet.
", - "DescribeGameSessionsInput$FleetId": "Unique identifier for a fleet to retrieve all game sessions for.
", - "DescribeInstancesInput$FleetId": "Unique identifier for a fleet to retrieve instance information for.
", - "DescribeRuntimeConfigurationInput$FleetId": "Unique identifier for a fleet to get the run-time configuration for.
", - "DescribeScalingPoliciesInput$FleetId": "Unique identifier for a fleet to retrieve scaling policies for.
", - "DescribeVpcPeeringConnectionsInput$FleetId": "Unique identifier for a fleet.
", - "FleetAttributes$FleetId": "Unique identifier for a fleet.
", - "FleetCapacity$FleetId": "Unique identifier for a fleet.
", + "CreateGameSessionInput$FleetId": "A unique identifier for a fleet to create a game session in. You can use either the fleet ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", + "CreateVpcPeeringConnectionInput$FleetId": "A unique identifier for a fleet. You can use either the fleet ID or ARN value. This tells Amazon GameLift which GameLift VPC to peer with.
", + "DeleteFleetInput$FleetId": "A unique identifier for a fleet to be deleted. You can use either the fleet ID or ARN value.
", + "DeleteScalingPolicyInput$FleetId": "A unique identifier for a fleet to be deleted. You can use either the fleet ID or ARN value.
", + "DeleteVpcPeeringConnectionInput$FleetId": "A unique identifier for a fleet. This fleet specified must match the fleet referenced in the VPC peering connection record. You can use either the fleet ID or ARN value.
", + "DescribeFleetEventsInput$FleetId": "A unique identifier for a fleet to get event logs for. You can use either the fleet ID or ARN value.
", + "DescribeFleetPortSettingsInput$FleetId": "A unique identifier for a fleet to retrieve port settings for. You can use either the fleet ID or ARN value.
", + "DescribeGameSessionDetailsInput$FleetId": "A unique identifier for a fleet to retrieve all game sessions active on the fleet. You can use either the fleet ID or ARN value.
", + "DescribeGameSessionsInput$FleetId": "A unique identifier for a fleet to retrieve all game sessions for. You can use either the fleet ID or ARN value.
", + "DescribeInstancesInput$FleetId": "A unique identifier for a fleet to retrieve instance information for. You can use either the fleet ID or ARN value.
", + "DescribeRuntimeConfigurationInput$FleetId": "A unique identifier for a fleet to get the runtime configuration for. You can use either the fleet ID or ARN value.
", + "DescribeScalingPoliciesInput$FleetId": "A unique identifier for a fleet to retrieve scaling policies for. You can use either the fleet ID or ARN value.
", + "DescribeVpcPeeringConnectionsInput$FleetId": "A unique identifier for a fleet. You can use either the fleet ID or ARN value.
", + "FleetAttributes$FleetId": "A unique identifier for a fleet.
", + "FleetCapacity$FleetId": "A unique identifier for a fleet.
", "FleetIdList$member": null, - "FleetUtilization$FleetId": "Unique identifier for a fleet.
", - "GameSession$FleetId": "Unique identifier for a fleet that the game session is running on.
", - "GetInstanceAccessInput$FleetId": "Unique identifier for a fleet that contains the instance you want access to. The fleet can be in any of the following statuses: ACTIVATING
, ACTIVE
, or ERROR
. Fleets with an ERROR
status may be accessible for a short time before they are deleted.
Unique identifier for a fleet that the instance is in.
", - "InstanceAccess$FleetId": "Unique identifier for a fleet containing the instance being accessed.
", - "PlayerSession$FleetId": "Unique identifier for a fleet that the player's game session is running on.
", - "PutScalingPolicyInput$FleetId": "Unique identifier for a fleet to apply this policy to. The fleet cannot be in any of the following statuses: ERROR or DELETING.
", - "ResolveAliasOutput$FleetId": "Fleet identifier that is associated with the requested alias.
", - "RoutingStrategy$FleetId": "Unique identifier for a fleet that the alias points to.
", - "ScalingPolicy$FleetId": "Unique identifier for a fleet that is associated with this scaling policy.
", - "SearchGameSessionsInput$FleetId": "Unique identifier for a fleet to search for active game sessions. Each request must reference either a fleet ID or alias ID, but not both.
", - "StartFleetActionsInput$FleetId": "Unique identifier for a fleet
", - "StopFleetActionsInput$FleetId": "Unique identifier for a fleet
", - "UpdateFleetAttributesInput$FleetId": "Unique identifier for a fleet to update attribute metadata for.
", - "UpdateFleetAttributesOutput$FleetId": "Unique identifier for a fleet that was updated.
", - "UpdateFleetCapacityInput$FleetId": "Unique identifier for a fleet to update capacity for.
", - "UpdateFleetCapacityOutput$FleetId": "Unique identifier for a fleet that was updated.
", - "UpdateFleetPortSettingsInput$FleetId": "Unique identifier for a fleet to update port settings for.
", - "UpdateFleetPortSettingsOutput$FleetId": "Unique identifier for a fleet that was updated.
", - "UpdateRuntimeConfigurationInput$FleetId": "Unique identifier for a fleet to update run-time configuration for.
", - "VpcPeeringConnection$FleetId": "Unique identifier for a fleet. This ID determines the ID of the Amazon GameLift VPC for your fleet.
" + "FleetUtilization$FleetId": "A unique identifier for a fleet.
", + "GameSession$FleetId": "A unique identifier for a fleet that the game session is running on.
", + "GetInstanceAccessInput$FleetId": "A unique identifier for a fleet that contains the instance you want access to. You can use either the fleet ID or ARN value. The fleet can be in any of the following statuses: ACTIVATING
, ACTIVE
, or ERROR
. Fleets with an ERROR
status may be accessible for a short time before they are deleted.
A unique identifier for a fleet that the instance is in.
", + "InstanceAccess$FleetId": "A unique identifier for a fleet containing the instance being accessed.
", + "PlayerSession$FleetId": "A unique identifier for a fleet that the player's game session is running on.
", + "PutScalingPolicyInput$FleetId": "A unique identifier for a fleet to apply this policy to. You can use either the fleet ID or ARN value. The fleet cannot be in any of the following statuses: ERROR or DELETING.
", + "ResolveAliasOutput$FleetId": "The fleet identifier that the alias is pointing to.
", + "RoutingStrategy$FleetId": "The unique identifier for a fleet that the alias points to. This value is the fleet ID, not the fleet ARN.
", + "ScalingPolicy$FleetId": "A unique identifier for a fleet that is associated with this scaling policy.
", + "SearchGameSessionsInput$FleetId": "A unique identifier for a fleet to search for active game sessions. You can use either the fleet ID or ARN value. Each request must reference either a fleet ID or alias ID, but not both.
", + "StartFleetActionsInput$FleetId": "A unique identifier for a fleet to start actions on. You can use either the fleet ID or ARN value.
", + "StopFleetActionsInput$FleetId": "A unique identifier for a fleet to stop actions on. You can use either the fleet ID or ARN value.
", + "UpdateFleetAttributesInput$FleetId": "A unique identifier for a fleet to update attribute metadata for. You can use either the fleet ID or ARN value.
", + "UpdateFleetAttributesOutput$FleetId": "A unique identifier for a fleet that was updated. Use either the fleet ID or ARN value.
", + "UpdateFleetCapacityInput$FleetId": "A unique identifier for a fleet to update capacity for. You can use either the fleet ID or ARN value.
", + "UpdateFleetCapacityOutput$FleetId": "A unique identifier for a fleet that was updated.
", + "UpdateFleetPortSettingsInput$FleetId": "A unique identifier for a fleet to update port settings for. You can use either the fleet ID or ARN value.
", + "UpdateFleetPortSettingsOutput$FleetId": "A unique identifier for a fleet that was updated.
", + "UpdateRuntimeConfigurationInput$FleetId": "A unique identifier for a fleet to update runtime configuration for. You can use either the fleet ID or ARN value.
", + "VpcPeeringConnection$FleetId": "A unique identifier for a fleet. This ID determines the ID of the Amazon GameLift VPC for your fleet.
" } }, "FleetIdList": { "base": null, "refs": { - "DescribeFleetAttributesInput$FleetIds": "Unique identifier for a fleet(s) to retrieve attributes for. To request attributes for all fleets, leave this parameter empty.
", - "DescribeFleetCapacityInput$FleetIds": "Unique identifier for a fleet(s) to retrieve capacity information for. To request capacity information for all fleets, leave this parameter empty.
", - "DescribeFleetUtilizationInput$FleetIds": "Unique identifier for a fleet(s) to retrieve utilization data for. To request utilization data for all fleets, leave this parameter empty.
", + "DescribeFleetAttributesInput$FleetIds": "A unique identifier for a fleet(s) to retrieve attributes for. You can use either the fleet ID or ARN value.
", + "DescribeFleetCapacityInput$FleetIds": "A unique identifier for a fleet(s) to retrieve capacity information for. You can use either the fleet ID or ARN value.
", + "DescribeFleetUtilizationInput$FleetIds": "A unique identifier for a fleet(s) to retrieve utilization data for. You can use either the fleet ID or ARN value.
", "ListFleetsOutput$FleetIds": "Set of fleet IDs matching the list request. You can retrieve additional information about all returned fleets by passing this result set to a call to DescribeFleetAttributes, DescribeFleetCapacity, or DescribeFleetUtilization.
" } }, @@ -842,12 +864,12 @@ "FleetType": { "base": null, "refs": { - "CreateFleetInput$FleetType": "Indicates whether to use on-demand instances or spot instances for this fleet. If empty, the default is ON_DEMAND. Both categories of instances use identical hardware and configurations based on the instance type selected for this fleet. Learn more about On-Demand versus Spot Instances.
", + "CreateFleetInput$FleetType": "Indicates whether to use On-Demand instances or Spot instances for this fleet. If empty, the default is ON_DEMAND
. Both categories of instances use identical hardware and configurations based on the instance type selected for this fleet. Learn more about On-Demand versus Spot Instances.
Indicates whether the fleet uses on-demand or spot instances. A spot instance in use may be interrupted with a two-minute notification.
" } }, "FleetUtilization": { - "base": "Current status of fleet utilization, including the number of game and player sessions being hosted.
Describe fleets:
Update fleets:
Manage fleet actions:
Current status of fleet utilization, including the number of game and player sessions being hosted.
Manage fleet actions:
Collection of objects containing utilization information for each requested fleet ID.
" + "DescribeFleetUtilizationOutput$FleetUtilization": "A collection of objects containing utilization information for each requested fleet ID.
" } }, "Float": { "base": null, "refs": { - "PlayerLatency$LatencyInMilliseconds": "Amount of time that represents the time lag experienced by the player when connected to the specified region.
" + "PlayerLatency$LatencyInMilliseconds": "Amount of time that represents the time lag experienced by the player when connected to the specified Region.
" } }, "FreeText": { "base": null, "refs": { - "Alias$Description": "Human-readable description of an alias.
", - "Build$Name": "Descriptive label that is associated with a build. Build names do not need to be unique. It can be set using CreateBuild or UpdateBuild.
", - "Build$Version": "Version that is associated with a build or script. Version strings do not need to be unique. This value can be set using CreateBuild or UpdateBuild.
", - "RoutingStrategy$Message": "Message text to be used with a terminal routing strategy.
" + "Alias$Description": "A human-readable description of an alias.
", + "Build$Name": "A descriptive label that is associated with a build. Build names do not need to be unique. It can be set using CreateBuild or UpdateBuild.
", + "Build$Version": "Version information that is associated with a build or script. Version strings do not need to be unique. This value can be set using CreateBuild or UpdateBuild.
", + "RoutingStrategy$Message": "The message text to be used with a terminal routing strategy.
" } }, "GameProperty": { - "base": "Set of key-value pairs that contain information about a game session. When included in a game session request, these properties communicate details to be used when setting up the new game session, such as to specify a game mode, level, or map. Game properties are passed to the game server process when initiating a new game session; the server process uses the properties as appropriate. For more information, see the Amazon GameLift Developer Guide.
", + "base": "Set of key-value pairs that contain information about a game session. When included in a game session request, these properties communicate details to be used when setting up the new game session. For example, a game property might specify a game mode, level, or map. Game properties are passed to the game server process when initiating a new game session. For more information, see the Amazon GameLift Developer Guide.
", "refs": { "GamePropertyList$member": null } @@ -882,25 +904,25 @@ "GamePropertyKey": { "base": null, "refs": { - "GameProperty$Key": "Game property identifier.
" + "GameProperty$Key": "The game property identifier.
" } }, "GamePropertyList": { "base": null, "refs": { "CreateGameSessionInput$GameProperties": "Set of custom properties for a game session, formatted as key:value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session).
", - "CreateMatchmakingConfigurationInput$GameProperties": "Set of custom properties for a game session, formatted as key:value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", + "CreateMatchmakingConfigurationInput$GameProperties": "A set of custom properties for a game session, formatted as key-value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", "GameSession$GameProperties": "Set of custom properties for a game session, formatted as key:value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). You can search for active game sessions based on this custom data with SearchGameSessions.
", "GameSessionPlacement$GameProperties": "Set of custom properties for a game session, formatted as key:value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session).
", - "MatchmakingConfiguration$GameProperties": "Set of custom properties for a game session, formatted as key:value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", + "MatchmakingConfiguration$GameProperties": "A set of custom properties for a game session, formatted as key-value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", "StartGameSessionPlacementInput$GameProperties": "Set of custom properties for a game session, formatted as key:value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session).
", - "UpdateMatchmakingConfigurationInput$GameProperties": "Set of custom properties for a game session, formatted as key:value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
" + "UpdateMatchmakingConfigurationInput$GameProperties": "A set of custom properties for a game session, formatted as key-value pairs. These properties are passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
" } }, "GamePropertyValue": { "base": null, "refs": { - "GameProperty$Value": "Game property value.
" + "GameProperty$Value": "The game property value.
" } }, "GameSession": { @@ -909,13 +931,13 @@ "CreateGameSessionOutput$GameSession": "Object that describes the newly created game session record.
", "GameSessionDetail$GameSession": "Object that describes a game session.
", "GameSessionList$member": null, - "UpdateGameSessionOutput$GameSession": "Object that contains the updated game session metadata.
" + "UpdateGameSessionOutput$GameSession": "The updated game session metadata.
" } }, "GameSessionActivationTimeoutSeconds": { "base": null, "refs": { - "RuntimeConfiguration$GameSessionActivationTimeoutSeconds": "Maximum amount of time (in seconds) that a game session can remain in status ACTIVATING
. If the game session is not active before the timeout, activation is terminated and the game session status is changed to TERMINATED
.
The maximum amount of time (in seconds) that a game session can remain in status ACTIVATING
. If the game session is not active before the timeout, activation is terminated and the game session status is changed to TERMINATED
.
Set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session).
", - "CreateMatchmakingConfigurationInput$GameSessionData": "Set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", + "CreateMatchmakingConfigurationInput$GameSessionData": "A set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", "GameSession$GameSessionData": "Set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session).
", "GameSessionPlacement$GameSessionData": "Set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session).
", - "MatchmakingConfiguration$GameSessionData": "Set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", + "MatchmakingConfiguration$GameSessionData": "A set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
", "StartGameSessionPlacementInput$GameSessionData": "Set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session).
", - "UpdateMatchmakingConfigurationInput$GameSessionData": "Set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
" + "UpdateMatchmakingConfigurationInput$GameSessionData": "A set of custom game session properties, formatted as a single string value. This data is passed to a game server process in the GameSession object with a request to start a new game session (see Start a Game Session). This information is added to the new GameSession object that is created for a successful match.
" } }, "GameSessionDetail": { @@ -945,7 +967,7 @@ "GameSessionDetailList": { "base": null, "refs": { - "DescribeGameSessionDetailsOutput$GameSessionDetails": "Collection of objects containing game session properties and the protection policy currently in force for each session matching the request.
" + "DescribeGameSessionDetailsOutput$GameSessionDetails": "A collection of objects containing game session properties and the protection policy currently in force for each session matching the request.
" } }, "GameSessionFullException": { @@ -956,8 +978,8 @@ "GameSessionList": { "base": null, "refs": { - "DescribeGameSessionsOutput$GameSessions": "Collection of objects containing game session properties for each session matching the request.
", - "SearchGameSessionsOutput$GameSessions": "Collection of objects containing game session properties for each session matching the request.
" + "DescribeGameSessionsOutput$GameSessions": "A collection of objects containing game session properties for each session matching the request.
", + "SearchGameSessionsOutput$GameSessions": "A collection of objects containing game session properties for each session matching the request.
" } }, "GameSessionPlacement": { @@ -971,19 +993,19 @@ "GameSessionPlacementState": { "base": null, "refs": { - "GameSessionPlacement$Status": "Current status of the game session placement request.
PENDING -- The placement request is currently in the queue waiting to be processed.
FULFILLED -- A new game session and player sessions (if requested) have been successfully created. Values for GameSessionArn and GameSessionRegion are available.
CANCELLED -- The placement request was canceled with a call to StopGameSessionPlacement.
TIMED_OUT -- A new game session was not successfully created before the time limit expired. You can resubmit the placement request as needed.
Current status of the game session placement request.
PENDING -- The placement request is currently in the queue waiting to be processed.
FULFILLED -- A new game session and player sessions (if requested) have been successfully created. Values for GameSessionArn and GameSessionRegion are available.
CANCELLED -- The placement request was canceled with a call to StopGameSessionPlacement.
TIMED_OUT -- A new game session was not successfully created before the time limit expired. You can resubmit the placement request as needed.
FAILED -- GameLift is not able to complete the process of placing the game session. Common reasons are the game session terminated before the placement process was completed, or an unexpected internal error.
Configuration of a queue that is used to process game session placement requests. The queue configuration identifies several game features:
The destinations where a new game session can potentially be hosted. Amazon GameLift tries these destinations in an order based on either the queue's default order or player latency information, if provided in a placement request. With latency information, Amazon GameLift can place game sessions where the majority of players are reporting the lowest possible latency.
The length of time that placement requests can wait in the queue before timing out.
A set of optional latency policies that protect individual players from high latencies, preventing game sessions from being placed where any individual player is reporting latency higher than a policy's maximum.
Object that describes the newly created game session queue.
", + "CreateGameSessionQueueOutput$GameSessionQueue": "An object that describes the newly created game session queue.
", "GameSessionQueueList$member": null, - "UpdateGameSessionQueueOutput$GameSessionQueue": "Object that describes the newly updated game session queue.
" + "UpdateGameSessionQueueOutput$GameSessionQueue": "An object that describes the newly updated game session queue.
" } }, "GameSessionQueueDestination": { - "base": "Fleet designated in a game session queue. Requests for new game sessions in the queue are fulfilled by starting a new game session on any destination configured for a queue.
", + "base": "Fleet designated in a game session queue. Requests for new game sessions in the queue are fulfilled by starting a new game session on any destination that is configured for a queue.
", "refs": { "GameSessionQueueDestinationList$member": null } @@ -991,33 +1013,33 @@ "GameSessionQueueDestinationList": { "base": null, "refs": { - "CreateGameSessionQueueInput$Destinations": "List of fleets that can be used to fulfill game session placement requests in the queue. Fleets are identified by either a fleet ARN or a fleet alias ARN. Destinations are listed in default preference order.
", - "GameSessionQueue$Destinations": "List of fleets that can be used to fulfill game session placement requests in the queue. Fleets are identified by either a fleet ARN or a fleet alias ARN. Destinations are listed in default preference order.
", - "UpdateGameSessionQueueInput$Destinations": "List of fleets that can be used to fulfill game session placement requests in the queue. Fleets are identified by either a fleet ARN or a fleet alias ARN. Destinations are listed in default preference order. When updating this list, provide a complete list of destinations.
" + "CreateGameSessionQueueInput$Destinations": "A list of fleets that can be used to fulfill game session placement requests in the queue. Fleets are identified by either a fleet ARN or a fleet alias ARN. Destinations are listed in default preference order.
", + "GameSessionQueue$Destinations": "A list of fleets that can be used to fulfill game session placement requests in the queue. Fleets are identified by either a fleet ARN or a fleet alias ARN. Destinations are listed in default preference order.
", + "UpdateGameSessionQueueInput$Destinations": "A list of fleets that can be used to fulfill game session placement requests in the queue. Fleets are identified by either a fleet ARN or a fleet alias ARN. Destinations are listed in default preference order. When updating this list, provide a complete list of destinations.
" } }, "GameSessionQueueList": { "base": null, "refs": { - "DescribeGameSessionQueuesOutput$GameSessionQueues": "Collection of objects that describes the requested game session queues.
" + "DescribeGameSessionQueuesOutput$GameSessionQueues": "A collection of objects that describe the requested game session queues.
" } }, "GameSessionQueueName": { "base": null, "refs": { - "CreateGameSessionQueueInput$Name": "Descriptive label that is associated with game session queue. Queue names must be unique within each region.
", - "DeleteGameSessionQueueInput$Name": "Descriptive label that is associated with game session queue. Queue names must be unique within each region.
", - "GameSessionPlacement$GameSessionQueueName": "Descriptive label that is associated with game session queue. Queue names must be unique within each region.
", - "GameSessionQueue$Name": "Descriptive label that is associated with game session queue. Queue names must be unique within each region.
", + "CreateGameSessionQueueInput$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.
", + "DeleteGameSessionQueueInput$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region. You can use either the queue ID or ARN value.
", + "GameSessionPlacement$GameSessionQueueName": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.
", + "GameSessionQueue$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region.
", "GameSessionQueueNameList$member": null, - "StartGameSessionPlacementInput$GameSessionQueueName": "Name of the queue to use to place the new game session.
", - "UpdateGameSessionQueueInput$Name": "Descriptive label that is associated with game session queue. Queue names must be unique within each region.
" + "StartGameSessionPlacementInput$GameSessionQueueName": "Name of the queue to use to place the new game session. You can use either the qieue name or ARN value.
", + "UpdateGameSessionQueueInput$Name": "A descriptive label that is associated with game session queue. Queue names must be unique within each Region. You can use either the queue ID or ARN value.
" } }, "GameSessionQueueNameList": { "base": null, "refs": { - "DescribeGameSessionQueuesInput$Names": "List of queue names to retrieve information for. To request settings for all queues, leave this parameter empty.
" + "DescribeGameSessionQueuesInput$Names": "A list of queue names to retrieve information for. You can use either the queue ID or ARN value. To request settings for all queues, leave this parameter empty.
" } }, "GameSessionStatus": { @@ -1057,10 +1079,10 @@ "refs": { "CreateGameSessionInput$GameSessionId": " This parameter is no longer preferred. Please use IdempotencyToken
instead. Custom string that uniquely identifies a request for a new game session. Maximum token length is 48 characters. If provided, this string is included in the new game session's ID. (A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>
.)
Custom string that uniquely identifies a request for a new game session. Maximum token length is 48 characters. If provided, this string is included in the new game session's ID. (A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>
.) Idempotency tokens remain in use for 30 days after a game session has ended; game session objects are retained for this time period and then deleted.
Unique identifier for a game session placement to retrieve.
", - "GameSessionPlacement$PlacementId": "Unique identifier for a game session placement.
", - "StartGameSessionPlacementInput$PlacementId": "Unique identifier to assign to the new game session placement. This value is developer-defined. The value must be unique across all regions and cannot be reused unless you are resubmitting a canceled or timed-out placement request.
", - "StopGameSessionPlacementInput$PlacementId": "Unique identifier for a game session placement to cancel.
" + "DescribeGameSessionPlacementInput$PlacementId": "A unique identifier for a game session placement to retrieve.
", + "GameSessionPlacement$PlacementId": "A unique identifier for a game session placement.
", + "StartGameSessionPlacementInput$PlacementId": "A unique identifier to assign to the new game session placement. This value is developer-defined. The value must be unique across all Regions and cannot be reused unless you are resubmitting a canceled or timed-out placement request.
", + "StopGameSessionPlacementInput$PlacementId": "A unique identifier for a game session placement to cancel.
" } }, "IdempotentParameterMismatchException": { @@ -1077,7 +1099,7 @@ "InstanceAccess": { "base": "Information required to remotely connect to a fleet instance. Access is requested by calling GetInstanceAccess.
", "refs": { - "GetInstanceAccessOutput$InstanceAccess": "Object that contains connection information for a fleet instance, including IP address and access credentials.
" + "GetInstanceAccessOutput$InstanceAccess": "The connection information for a fleet instance, including IP address and access credentials.
" } }, "InstanceCredentials": { @@ -1089,16 +1111,16 @@ "InstanceId": { "base": null, "refs": { - "DescribeInstancesInput$InstanceId": "Unique identifier for an instance to retrieve. Specify an instance ID or leave blank to retrieve all instances in the fleet.
", - "GetInstanceAccessInput$InstanceId": "Unique identifier for an instance you want to get access to. You can access an instance in any status.
", - "Instance$InstanceId": "Unique identifier for an instance.
", - "InstanceAccess$InstanceId": "Unique identifier for an instance being accessed.
" + "DescribeInstancesInput$InstanceId": "A unique identifier for an instance to retrieve. Specify an instance ID or leave blank to retrieve all instances in the fleet.
", + "GetInstanceAccessInput$InstanceId": "A unique identifier for an instance you want to get access to. You can access an instance in any status.
", + "Instance$InstanceId": "A unique identifier for an instance.
", + "InstanceAccess$InstanceId": "A unique identifier for an instance being accessed.
" } }, "InstanceList": { "base": null, "refs": { - "DescribeInstancesOutput$Instances": "Collection of objects containing properties for each instance returned.
" + "DescribeInstancesOutput$Instances": "A collection of objects containing properties for each instance returned.
" } }, "InstanceStatus": { @@ -1137,15 +1159,15 @@ "IpAddress": { "base": null, "refs": { - "GameSession$IpAddress": "IP address of the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number.
", - "GameSessionPlacement$IpAddress": "IP address of the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number. This value is set once the new game session is placed (placement status is FULFILLED
).
IP address assigned to the instance.
", - "InstanceAccess$IpAddress": "IP address assigned to the instance.
", - "PlayerSession$IpAddress": "IP address of the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number.
" + "GameSession$IpAddress": "IP address of the instance that is running the game session. When connecting to a Amazon GameLift game server, a client needs to reference an IP address (or DNS name) and port number.
", + "GameSessionPlacement$IpAddress": "IP address of the instance that is running the game session. When connecting to a Amazon GameLift game server, a client needs to reference an IP address (or DNS name) and port number. This value is set once the new game session is placed (placement status is FULFILLED
).
IP address that is assigned to the instance.
", + "InstanceAccess$IpAddress": "IP address that is assigned to the instance.
", + "PlayerSession$IpAddress": "IP address of the instance that is running the game session. When connecting to a Amazon GameLift game server, a client needs to reference an IP address (or DNS name) and port number.
" } }, "IpPermission": { - "base": "A range of IP addresses and port settings that allow inbound traffic to connect to server processes on an Amazon GameLift. New game sessions that are started on the fleet are assigned an IP address/port number combination, which must fall into the fleet's allowed ranges. For fleets created with a custom game server, the ranges reflect the server's game session assignments. For Realtime Servers fleets, Amazon GameLift automatically opens two port ranges, one for TCP messaging and one for UDP for use by the Realtime servers.
", + "base": "A range of IP addresses and port settings that allow inbound traffic to connect to server processes on an Amazon GameLift hosting resource. New game sessions that are started on the fleet are assigned an IP address/port number combination, which must fall into the fleet's allowed ranges. For fleets created with a custom game server, the ranges reflect the server's game session assignments. For Realtime Servers fleets, Amazon GameLift automatically opens two port ranges, one for TCP messaging and one for UDP for use by the Realtime servers.
", "refs": { "IpPermissionsList$member": null } @@ -1153,22 +1175,22 @@ "IpPermissionsList": { "base": null, "refs": { - "CreateFleetInput$EC2InboundPermissions": "Range of IP addresses and port settings that permit inbound traffic to access game sessions that running on the fleet. For fleets using a custom game build, this parameter is required before game sessions running on the fleet can accept connections. For Realtime Servers fleets, Amazon GameLift automatically sets TCP and UDP ranges for use by the Realtime servers. You can specify multiple permission settings or add more by updating the fleet.
", - "DescribeFleetPortSettingsOutput$InboundPermissions": "Object that contains port settings for the requested fleet ID.
", - "UpdateFleetPortSettingsInput$InboundPermissionAuthorizations": "Collection of port settings to be added to the fleet record.
", - "UpdateFleetPortSettingsInput$InboundPermissionRevocations": "Collection of port settings to be removed from the fleet record.
" + "CreateFleetInput$EC2InboundPermissions": "Range of IP addresses and port settings that permit inbound traffic to access game sessions that are running on the fleet. For fleets using a custom game build, this parameter is required before game sessions running on the fleet can accept connections. For Realtime Servers fleets, Amazon GameLift automatically sets TCP and UDP ranges for use by the Realtime servers. You can specify multiple permission settings or add more by updating the fleet.
", + "DescribeFleetPortSettingsOutput$InboundPermissions": "The port settings for the requested fleet ID.
", + "UpdateFleetPortSettingsInput$InboundPermissionAuthorizations": "A collection of port settings to be added to the fleet record.
", + "UpdateFleetPortSettingsInput$InboundPermissionRevocations": "A collection of port settings to be removed from the fleet record.
" } }, "IpProtocol": { "base": null, "refs": { - "IpPermission$Protocol": "Network communication protocol used by the fleet.
" + "IpPermission$Protocol": "The network communication protocol used by the fleet.
" } }, "LatencyMap": { "base": null, "refs": { - "Player$LatencyInMs": "Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS regions. If this property is present, FlexMatch considers placing the match only in regions for which latency is reported.
If a matchmaker has a rule that evaluates player latency, players must report latency in order to be matched. If no latency is reported in this scenario, FlexMatch assumes that no regions are available to the player and the ticket is not matchable.
" + "Player$LatencyInMs": "Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS Regions. If this property is present, FlexMatch considers placing the match only in Regions for which latency is reported.
If a matchmaker has a rule that evaluates player latency, players must report latency in order to be matched. If no latency is reported in this scenario, FlexMatch assumes that no Regions are available to the player and the ticket is not matchable.
" } }, "LimitExceededException": { @@ -1216,6 +1238,16 @@ "refs": { } }, + "ListTagsForResourceRequest": { + "base": null, + "refs": { + } + }, + "ListTagsForResourceResponse": { + "base": null, + "refs": { + } + }, "MatchedPlayerSession": { "base": "Represents a new player session that is created as a result of a successful FlexMatch match. A successful match automatically creates new player sessions for every player ID in the original matchmaking request.
When players connect to the match's game session, they must include both player ID and player session ID in order to claim their assigned player slot.
", "refs": { @@ -1225,7 +1257,7 @@ "MatchedPlayerSessionList": { "base": null, "refs": { - "GameSessionConnectionInfo$MatchedPlayerSessions": "Collection of player session IDs, one for each player ID that was included in the original matchmaking request.
" + "GameSessionConnectionInfo$MatchedPlayerSessions": "A collection of player session IDs, one for each player ID that was included in the original matchmaking request.
" } }, "MatchmakerData": { @@ -1238,9 +1270,9 @@ "MatchmakingAcceptanceTimeoutInteger": { "base": null, "refs": { - "CreateMatchmakingConfigurationInput$AcceptanceTimeoutSeconds": "Length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.
", - "MatchmakingConfiguration$AcceptanceTimeoutSeconds": "Length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.
", - "UpdateMatchmakingConfigurationInput$AcceptanceTimeoutSeconds": "Length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.
" + "CreateMatchmakingConfigurationInput$AcceptanceTimeoutSeconds": "The length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.
", + "MatchmakingConfiguration$AcceptanceTimeoutSeconds": "The length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.
", + "UpdateMatchmakingConfigurationInput$AcceptanceTimeoutSeconds": "The length of time (in seconds) to wait for players to accept a proposed match. If any player rejects the match or fails to accept before the timeout, the ticket continues to look for an acceptable match.
" } }, "MatchmakingConfiguration": { @@ -1248,13 +1280,36 @@ "refs": { "CreateMatchmakingConfigurationOutput$Configuration": "Object that describes the newly created matchmaking configuration.
", "MatchmakingConfigurationList$member": null, - "UpdateMatchmakingConfigurationOutput$Configuration": "Object that describes the updated matchmaking configuration.
" + "UpdateMatchmakingConfigurationOutput$Configuration": "The updated matchmaking configuration.
" + } + }, + "MatchmakingConfigurationArn": { + "base": "Data type used for Matchmaking Configuration ARN.", + "refs": { + "MatchmakingConfiguration$ConfigurationArn": "Amazon Resource Name (ARN) that is assigned to a GameLift matchmaking configuration resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift configuration ARN, the resource ID matches the Name value.
", + "MatchmakingTicket$ConfigurationArn": "The Amazon Resource Name (ARN) associated with the GameLift matchmaking configuration resource that is used with this ticket.
" } }, "MatchmakingConfigurationList": { "base": null, "refs": { - "DescribeMatchmakingConfigurationsOutput$Configurations": "Collection of requested matchmaking configuration objects.
" + "DescribeMatchmakingConfigurationsOutput$Configurations": "A collection of requested matchmaking configurations.
" + } + }, + "MatchmakingConfigurationName": { + "base": null, + "refs": { + "DeleteMatchmakingConfigurationInput$Name": "A unique identifier for a matchmaking configuration. You can use either the configuration name or ARN value.
", + "MatchmakingConfigurationNameList$member": null, + "StartMatchBackfillInput$ConfigurationName": "Name of the matchmaker to use for this request. You can use either the configuration name or ARN value. The ARN of the matchmaker that was used with the original game session is listed in the GameSession object, MatchmakerData
property.
Name of the matchmaking configuration to use for this request. Matchmaking configurations must exist in the same Region as this request. You can use either the configuration name or ARN value.
", + "UpdateMatchmakingConfigurationInput$Name": "A unique identifier for a matchmaking configuration to update. You can use either the configuration name or ARN value.
" + } + }, + "MatchmakingConfigurationNameList": { + "base": null, + "refs": { + "DescribeMatchmakingConfigurationsInput$Names": "A unique identifier for a matchmaking configuration(s) to retrieve. You can use either the configuration name or ARN value. To request all existing configurations, leave this parameter empty.
" } }, "MatchmakingConfigurationStatus": { @@ -1266,61 +1321,68 @@ "MatchmakingIdList": { "base": null, "refs": { - "DescribeMatchmakingConfigurationsInput$Names": "Unique identifier for a matchmaking configuration(s) to retrieve. To request all existing configurations, leave this parameter empty.
", - "DescribeMatchmakingInput$TicketIds": "Unique identifier for a matchmaking ticket. You can include up to 10 ID values.
" + "DescribeMatchmakingInput$TicketIds": "A unique identifier for a matchmaking ticket. You can include up to 10 ID values.
" } }, "MatchmakingIdStringModel": { "base": null, "refs": { - "AcceptMatchInput$TicketId": "Unique identifier for a matchmaking ticket. The ticket must be in status REQUIRES_ACCEPTANCE
; otherwise this request will fail.
Unique identifier for a matchmaking configuration. This name is used to identify the configuration associated with a matchmaking request or ticket.
", - "CreateMatchmakingConfigurationInput$RuleSetName": "Unique identifier for a matchmaking rule set to use with this configuration. A matchmaking configuration can only use rule sets that are defined in the same region.
", - "CreateMatchmakingRuleSetInput$Name": "Unique identifier for a matchmaking rule set. A matchmaking configuration identifies the rule set it uses by this name value. (Note: The rule set name is different from the optional \"name\" field in the rule set body.)
", - "DeleteMatchmakingConfigurationInput$Name": "Unique identifier for a matchmaking configuration
", - "DeleteMatchmakingRuleSetInput$Name": "Unique identifier for a matchmaking rule set to be deleted. (Note: The rule set name is different from the optional \"name\" field in the rule set body.)
", - "DescribeMatchmakingConfigurationsInput$RuleSetName": "Unique identifier for a matchmaking rule set. Use this parameter to retrieve all matchmaking configurations that use this rule set.
", - "MatchmakingConfiguration$Name": "Unique identifier for a matchmaking configuration. This name is used to identify the configuration associated with a matchmaking request or ticket.
", - "MatchmakingConfiguration$RuleSetName": "Unique identifier for a matchmaking rule set to use with this configuration. A matchmaking configuration can only use rule sets that are defined in the same region.
", + "AcceptMatchInput$TicketId": "A unique identifier for a matchmaking ticket. The ticket must be in status REQUIRES_ACCEPTANCE
; otherwise this request will fail.
A unique identifier for a matchmaking configuration. This name is used to identify the configuration associated with a matchmaking request or ticket.
", + "CreateMatchmakingRuleSetInput$Name": "A unique identifier for a matchmaking rule set. A matchmaking configuration identifies the rule set it uses by this name value. Note that the rule set name is different from the optional name
field in the rule set body.
A unique identifier for a matchmaking configuration. This name is used to identify the configuration associated with a matchmaking request or ticket.
", + "MatchmakingConfiguration$RuleSetName": "A unique identifier for a matchmaking rule set to use with this configuration. A matchmaking configuration can only use rule sets that are defined in the same Region.
", "MatchmakingIdList$member": null, - "MatchmakingRuleSet$RuleSetName": "Unique identifier for a matchmaking rule set
", - "MatchmakingRuleSetNameList$member": null, - "MatchmakingTicket$TicketId": "Unique identifier for a matchmaking ticket.
", + "MatchmakingRuleSet$RuleSetName": "A unique identifier for a matchmaking rule set
", + "MatchmakingTicket$TicketId": "A unique identifier for a matchmaking ticket.
", "MatchmakingTicket$ConfigurationName": "Name of the MatchmakingConfiguration that is used with this ticket. Matchmaking configurations determine how players are grouped into a match and how a new game session is created for the match.
", - "StartMatchBackfillInput$TicketId": "Unique identifier for a matchmaking ticket. If no ticket ID is specified here, Amazon GameLift will generate one in the form of a UUID. Use this identifier to track the match backfill ticket status and retrieve match results.
", - "StartMatchBackfillInput$ConfigurationName": "Name of the matchmaker to use for this request. The name of the matchmaker that was used with the original game session is listed in the GameSession object, MatchmakerData
property. This property contains a matchmaking configuration ARN value, which includes the matchmaker name. (In the ARN value \"arn:aws:gamelift:us-west-2:111122223333:matchmakingconfiguration/MM-4v4\", the matchmaking configuration name is \"MM-4v4\".) Use only the name for this parameter.
Unique identifier for a matchmaking ticket. If no ticket ID is specified here, Amazon GameLift will generate one in the form of a UUID. Use this identifier to track the matchmaking ticket status and retrieve match results.
", - "StartMatchmakingInput$ConfigurationName": "Name of the matchmaking configuration to use for this request. Matchmaking configurations must exist in the same region as this request.
", - "StopMatchmakingInput$TicketId": "Unique identifier for a matchmaking ticket.
", - "UpdateMatchmakingConfigurationInput$Name": "Unique identifier for a matchmaking configuration to update.
", - "UpdateMatchmakingConfigurationInput$RuleSetName": "Unique identifier for a matchmaking rule set to use with this configuration. A matchmaking configuration can only use rule sets that are defined in the same region.
" + "StartMatchBackfillInput$TicketId": "A unique identifier for a matchmaking ticket. If no ticket ID is specified here, Amazon GameLift will generate one in the form of a UUID. Use this identifier to track the match backfill ticket status and retrieve match results.
", + "StartMatchmakingInput$TicketId": "A unique identifier for a matchmaking ticket. If no ticket ID is specified here, Amazon GameLift will generate one in the form of a UUID. Use this identifier to track the matchmaking ticket status and retrieve match results.
", + "StopMatchmakingInput$TicketId": "A unique identifier for a matchmaking ticket.
" } }, "MatchmakingRequestTimeoutInteger": { "base": null, "refs": { - "CreateMatchmakingConfigurationInput$RequestTimeoutSeconds": "Maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that fail due to timing out can be resubmitted as needed.
", - "MatchmakingConfiguration$RequestTimeoutSeconds": "Maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that fail due to timing out can be resubmitted as needed.
", - "UpdateMatchmakingConfigurationInput$RequestTimeoutSeconds": "Maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that fail due to timing out can be resubmitted as needed.
" + "CreateMatchmakingConfigurationInput$RequestTimeoutSeconds": "The maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that fail due to timing out can be resubmitted as needed.
", + "MatchmakingConfiguration$RequestTimeoutSeconds": "The maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that fail due to timing out can be resubmitted as needed.
", + "UpdateMatchmakingConfigurationInput$RequestTimeoutSeconds": "The maximum duration, in seconds, that a matchmaking ticket can remain in process before timing out. Requests that fail due to timing out can be resubmitted as needed.
" } }, "MatchmakingRuleSet": { "base": "Set of rule statements, used with FlexMatch, that determine how to build your player matches. Each rule set describes a type of group to be created and defines the parameters for acceptable player matches. Rule sets are used in MatchmakingConfiguration objects.
A rule set may define the following elements for a match. For detailed information and examples showing how to construct a rule set, see Build a FlexMatch Rule Set.
Teams -- Required. A rule set must define one or multiple teams for the match and set minimum and maximum team sizes. For example, a rule set might describe a 4x4 match that requires all eight slots to be filled.
Player attributes -- Optional. These attributes specify a set of player characteristics to evaluate when looking for a match. Matchmaking requests that use a rule set with player attributes must provide the corresponding attribute values. For example, an attribute might specify a player's skill or level.
Rules -- Optional. Rules define how to evaluate potential players for a match based on player attributes. A rule might specify minimum requirements for individual players, teams, or entire matches. For example, a rule might require each player to meet a certain skill level, each team to have at least one player in a certain role, or the match to have a minimum average skill level. A rule may also describe an entire group--for example, requiring that all teams be evenly matched or that each team have at least one player in a certain role.
Expansions -- Optional. Expansions allow you to relax the rules after a period of time when no acceptable matches are found. This feature lets you balance getting players into games in a reasonable amount of time instead of making them wait indefinitely for the best possible match. For example, you might use an expansion to increase the maximum skill variance between players after 30 seconds.
Object that describes the newly created matchmaking rule set.
", + "CreateMatchmakingRuleSetOutput$RuleSet": "The newly created matchmaking rule set.
", "MatchmakingRuleSetList$member": null } }, + "MatchmakingRuleSetArn": { + "base": "Data type used for Matchmaking RuleSet ARN.", + "refs": { + "MatchmakingConfiguration$RuleSetArn": "The Amazon Resource Name (ARN) associated with the GameLift matchmaking rule set resource that this configuration uses.
", + "MatchmakingRuleSet$RuleSetArn": "Amazon Resource Name (ARN) that is assigned to a GameLift matchmaking rule set resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift rule set ARN, the resource ID matches the RuleSetName value.
" + } + }, "MatchmakingRuleSetList": { "base": null, "refs": { - "DescribeMatchmakingRuleSetsOutput$RuleSets": "Collection of requested matchmaking rule set objects.
" + "DescribeMatchmakingRuleSetsOutput$RuleSets": "A collection of requested matchmaking rule set objects.
" + } + }, + "MatchmakingRuleSetName": { + "base": null, + "refs": { + "CreateMatchmakingConfigurationInput$RuleSetName": "A unique identifier for a matchmaking rule set to use with this configuration. You can use either the rule set name or ARN value. A matchmaking configuration can only use rule sets that are defined in the same Region.
", + "DeleteMatchmakingRuleSetInput$Name": "A unique identifier for a matchmaking rule set to be deleted. (Note: The rule set name is different from the optional \"name\" field in the rule set body.) You can use either the rule set name or ARN value.
", + "DescribeMatchmakingConfigurationsInput$RuleSetName": "A unique identifier for a matchmaking rule set. You can use either the rule set name or ARN value. Use this parameter to retrieve all matchmaking configurations that use this rule set.
", + "MatchmakingRuleSetNameList$member": null, + "UpdateMatchmakingConfigurationInput$RuleSetName": "A unique identifier for a matchmaking rule set to use with this configuration. You can use either the rule set name or ARN value. A matchmaking configuration can only use rule sets that are defined in the same Region.
" } }, "MatchmakingRuleSetNameList": { "base": null, "refs": { - "DescribeMatchmakingRuleSetsInput$Names": "List of one or more matchmaking rule set names to retrieve details for. (Note: The rule set name is different from the optional \"name\" field in the rule set body.)
" + "DescribeMatchmakingRuleSetsInput$Names": "A list of one or more matchmaking rule set names to retrieve details for. (Note: The rule set name is different from the optional \"name\" field in the rule set body.) You can use either the rule set name or ARN value.
" } }, "MatchmakingTicket": { @@ -1334,13 +1396,13 @@ "MatchmakingTicketList": { "base": null, "refs": { - "DescribeMatchmakingOutput$TicketList": "Collection of existing matchmaking ticket objects matching the request.
" + "DescribeMatchmakingOutput$TicketList": "A collection of existing matchmaking ticket objects matching the request.
" } }, "MaxConcurrentGameSessionActivations": { "base": null, "refs": { - "RuntimeConfiguration$MaxConcurrentGameSessionActivations": "Maximum number of game sessions with status ACTIVATING
to allow on an instance simultaneously. This setting limits the amount of instance resources that can be used for new game activations at any one time.
The maximum number of game sessions with status ACTIVATING
to allow on an instance simultaneously. This setting limits the amount of instance resources that can be used for new game activations at any one time.
Name of an Amazon CloudWatch metric group to add this fleet to. A metric group aggregates the metrics for all fleets in the group. Specify an existing metric group name, or provide a new name to create a new metric group. A fleet can only be included in one metric group at a time.
", + "CreateFleetInput$MetricGroups": "The name of an Amazon CloudWatch metric group to add this fleet to. A metric group aggregates the metrics for all fleets in the group. Specify an existing metric group name, or provide a new name to create a new metric group. A fleet can only be included in one metric group at a time.
", "FleetAttributes$MetricGroups": "Names of metric groups that this fleet is included in. In Amazon CloudWatch, you can view metrics for an individual fleet or aggregated metrics for fleets that are in a fleet metric group. A fleet can be included in only one metric group at a time.
", "UpdateFleetAttributesInput$MetricGroups": "Names of metric groups to include this fleet in. Amazon CloudWatch uses a fleet metric group is to aggregate metrics from multiple fleets. Use an existing metric group name to add this fleet to the group. Or use a new name to create a new metric group. A fleet can only be included in one metric group at a time.
" } @@ -1367,15 +1429,15 @@ "NonBlankAndLengthConstraintString": { "base": null, "refs": { - "Alias$Name": "Descriptive label that is associated with an alias. Alias names do not need to be unique.
", - "CreateAliasInput$Name": "Descriptive label that is associated with an alias. Alias names do not need to be unique.
", - "UpdateAliasInput$Name": "Descriptive label that is associated with an alias. Alias names do not need to be unique.
" + "Alias$Name": "A descriptive label that is associated with an alias. Alias names do not need to be unique.
", + "CreateAliasInput$Name": "A descriptive label that is associated with an alias. Alias names do not need to be unique.
", + "UpdateAliasInput$Name": "A descriptive label that is associated with an alias. Alias names do not need to be unique.
" } }, "NonBlankString": { "base": null, "refs": { - "IpPermission$IpRange": "Range of allowed IP addresses. This value must be expressed in CIDR notation. Example: \"000.000.000.000/[subnet mask]
\" or optionally the shortened version \"0.0.0.0/[subnet mask]
\".
A range of allowed IP addresses. This value must be expressed in CIDR notation. Example: \"000.000.000.000/[subnet mask]
\" or optionally the shortened version \"0.0.0.0/[subnet mask]
\".
Temporary secret key allowing access to the Amazon GameLift S3 account.
", "AwsCredentials$SessionToken": "Token used to associate a specific build ID with the files uploaded using these credentials.
", "ConflictException$Message": null, - "CreateFleetInput$InstanceRoleArn": "Unique identifier for an AWS IAM role that manages access to your AWS services. With an instance role ARN set, any application that runs on an instance in this fleet can assume the role, including install scripts, server processes, daemons (background processes). Create a role or look up a role's ARN using the IAM dashboard in the AWS Management Console. Learn more about using on-box credentials for your game servers at Access external resources from a game server.
", + "CreateFleetInput$InstanceRoleArn": "A unique identifier for an AWS IAM role that manages access to your AWS services. With an instance role ARN set, any application that runs on an instance in this fleet can assume the role, including install scripts, server processes, and daemons (background processes). Create a role or look up a role's ARN from the IAM dashboard in the AWS Management Console. Learn more about using on-box credentials for your game servers at Access external resources from a game server.
", "Event$Message": "Additional information related to the event.
", - "FleetAttributes$InstanceRoleArn": "Unique identifier for an AWS IAM role that manages access to your AWS services. With an instance role ARN set, any application that runs on an instance in this fleet can assume the role, including install scripts, server processes, daemons (background processes). Create a role or look up a role's ARN using the IAM dashboard in the AWS Management Console. Learn more about using on-box credentials for your game servers at Access external resources from a game server.
", + "FleetAttributes$InstanceRoleArn": "A unique identifier for an AWS IAM role that manages access to your AWS services. With an instance role ARN set, any application that runs on an instance in this fleet can assume the role, including install scripts, server processes, and daemons (background processes). Create a role or look up a role's ARN from the IAM dashboard in the AWS Management Console. Learn more about using on-box credentials for your game servers at Access external resources from a game server.
", "FleetCapacityExceededException$Message": null, "GameSessionFullException$Message": null, "IdempotentParameterMismatchException$Message": null, @@ -1399,18 +1461,19 @@ "InvalidRequestException$Message": null, "LatencyMap$key": null, "LimitExceededException$Message": null, - "ListAliasesInput$Name": "Descriptive label that is associated with an alias. Alias names do not need to be unique.
", - "ListAliasesInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", - "ListAliasesOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", + "ListAliasesInput$Name": "A descriptive label that is associated with an alias. Alias names do not need to be unique.
", + "ListAliasesInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", + "ListAliasesOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "ListBuildsInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "ListBuildsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "ListScriptsInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", - "ListScriptsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", + "ListScriptsInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", + "ListScriptsOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "NotFoundException$Message": null, - "S3Location$Bucket": "Amazon S3 bucket identifier. This is the name of the S3 bucket.
", - "S3Location$Key": "Name of the zip file containing the build files or script files.
", - "S3Location$RoleArn": "Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access the S3 bucket.
", - "S3Location$ObjectVersion": "Version of the file, if object versioning is turned on for the bucket. Amazon GameLift uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file; if not set, the latest version of the file is retrieved.
", + "S3Location$Bucket": "An Amazon S3 bucket identifier. This is the name of the S3 bucket.
", + "S3Location$Key": "The name of the zip file that contains the build files or script files.
", + "S3Location$RoleArn": "The Amazon Resource Name (ARN) for an IAM role that allows Amazon GameLift to access the S3 bucket.
", + "S3Location$ObjectVersion": "The version of the file, if object versioning is turned on for the bucket. Amazon GameLift uses this information when retrieving files from an S3 bucket that you own. Use this parameter to specify a specific version of the file. If not set, the latest version of the file is retrieved.
", + "TaggingFailedException$Message": null, "TerminalRoutingStrategyException$Message": null, "UnauthorizedException$Message": null, "UnsupportedRegionException$Message": null @@ -1420,29 +1483,29 @@ "base": null, "refs": { "AttributeValue$S": "For single string values. Maximum string length is 100 characters.
", - "CreateAliasInput$Description": "Human-readable description of an alias.
", - "CreateBuildInput$Name": "Descriptive label that is associated with a build. Build names do not need to be unique. You can use UpdateBuild to change this value later.
", - "CreateBuildInput$Version": "Version that is associated with a build or script. Version strings do not need to be unique. You can use UpdateBuild to change this value later.
", - "CreateFleetInput$Name": "Descriptive label that is associated with a fleet. Fleet names do not need to be unique.
", - "CreateFleetInput$Description": "Human-readable description of a fleet.
", - "CreateFleetInput$ServerLaunchPath": "This parameter is no longer used. Instead, specify a server launch path using the RuntimeConfiguration
parameter. (Requests that specify a server launch path and launch parameters instead of a run-time configuration will continue to work.)
This parameter is no longer used. Instead, specify server launch parameters in the RuntimeConfiguration
parameter. (Requests that specify a server launch path and launch parameters instead of a run-time configuration will continue to work.)
Unique identifier for the AWS account with the VPC that you want to peer your Amazon GameLift fleet with. You can find your Account ID in the AWS Management Console under account settings.
", - "CreateFleetInput$PeerVpcId": "Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", - "CreateGameSessionInput$Name": "Descriptive label that is associated with a game session. Session names do not need to be unique.
", - "CreateGameSessionInput$CreatorId": "Unique identifier for a player or entity creating the game session. This ID is used to enforce a resource protection policy (if one exists) that limits the number of concurrent active game sessions one player can have.
", - "CreateMatchmakingConfigurationInput$Description": "Meaningful description of the matchmaking configuration.
", - "CreatePlayerSessionInput$PlayerId": "Unique identifier for a player. Player IDs are developer-defined.
", - "CreateScriptInput$Name": "Descriptive label that is associated with a script. Script names do not need to be unique. You can use UpdateScript to change this value later.
", - "CreateScriptInput$Version": "Version that is associated with a build or script. Version strings do not need to be unique. You can use UpdateScript to change this value later.
", - "CreateVpcPeeringAuthorizationInput$GameLiftAwsAccountId": "Unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.
", - "CreateVpcPeeringAuthorizationInput$PeerVpcId": "Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", - "CreateVpcPeeringConnectionInput$PeerVpcAwsAccountId": "Unique identifier for the AWS account with the VPC that you want to peer your Amazon GameLift fleet with. You can find your Account ID in the AWS Management Console under account settings.
", - "CreateVpcPeeringConnectionInput$PeerVpcId": "Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", - "DeleteScalingPolicyInput$Name": "Descriptive label that is associated with a scaling policy. Policy names do not need to be unique.
", - "DeleteVpcPeeringAuthorizationInput$GameLiftAwsAccountId": "Unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.
", - "DeleteVpcPeeringAuthorizationInput$PeerVpcId": "Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", - "DeleteVpcPeeringConnectionInput$VpcPeeringConnectionId": "Unique identifier for a VPC peering connection. This value is included in the VpcPeeringConnection object, which can be retrieved by calling DescribeVpcPeeringConnections.
", + "CreateAliasInput$Description": "A human-readable description of the alias.
", + "CreateBuildInput$Name": "A descriptive label that is associated with a build. Build names do not need to be unique. You can use UpdateBuild to change this value later.
", + "CreateBuildInput$Version": "Version information that is associated with a build or script. Version strings do not need to be unique. You can use UpdateBuild to change this value later.
", + "CreateFleetInput$Name": "A descriptive label that is associated with a fleet. Fleet names do not need to be unique.
", + "CreateFleetInput$Description": "A human-readable description of a fleet.
", + "CreateFleetInput$ServerLaunchPath": "This parameter is no longer used. Instead, specify a server launch path using the RuntimeConfiguration
parameter. Requests that specify a server launch path and launch parameters instead of a runtime configuration will continue to work.
This parameter is no longer used. Instead, specify server launch parameters in the RuntimeConfiguration
parameter. (Requests that specify a server launch path and launch parameters instead of a runtime configuration will continue to work.)
A unique identifier for the AWS account with the VPC that you want to peer your Amazon GameLift fleet with. You can find your account ID in the AWS Management Console under account settings.
", + "CreateFleetInput$PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region as your fleet. To look up a VPC ID, use the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", + "CreateGameSessionInput$Name": "A descriptive label that is associated with a game session. Session names do not need to be unique.
", + "CreateGameSessionInput$CreatorId": "A unique identifier for a player or entity creating the game session. This ID is used to enforce a resource protection policy (if one exists) that limits the number of concurrent active game sessions one player can have.
", + "CreateMatchmakingConfigurationInput$Description": "A human-readable description of the matchmaking configuration.
", + "CreatePlayerSessionInput$PlayerId": "A unique identifier for a player. Player IDs are developer-defined.
", + "CreateScriptInput$Name": "A descriptive label that is associated with a script. Script names do not need to be unique. You can use UpdateScript to change this value later.
", + "CreateScriptInput$Version": "The version that is associated with a build or script. Version strings do not need to be unique. You can use UpdateScript to change this value later.
", + "CreateVpcPeeringAuthorizationInput$GameLiftAwsAccountId": "A unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.
", + "CreateVpcPeeringAuthorizationInput$PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", + "CreateVpcPeeringConnectionInput$PeerVpcAwsAccountId": "A unique identifier for the AWS account with the VPC that you want to peer your Amazon GameLift fleet with. You can find your Account ID in the AWS Management Console under account settings.
", + "CreateVpcPeeringConnectionInput$PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", + "DeleteScalingPolicyInput$Name": "A descriptive label that is associated with a scaling policy. Policy names do not need to be unique.
", + "DeleteVpcPeeringAuthorizationInput$GameLiftAwsAccountId": "A unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.
", + "DeleteVpcPeeringAuthorizationInput$PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", + "DeleteVpcPeeringConnectionInput$VpcPeeringConnectionId": "A unique identifier for a VPC peering connection. This value is included in the VpcPeeringConnection object, which can be retrieved by calling DescribeVpcPeeringConnections.
", "DescribeFleetAttributesInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.
", "DescribeFleetAttributesOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "DescribeFleetCapacityInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value. This parameter is ignored when the request specifies one or a list of fleet IDs.
", @@ -1454,83 +1517,83 @@ "DescribeGameSessionDetailsInput$StatusFilter": "Game session status to filter results on. Possible game session statuses include ACTIVE
, TERMINATED
, ACTIVATING
and TERMINATING
(the last two are transitory).
Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "DescribeGameSessionDetailsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "DescribeGameSessionQueuesInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", - "DescribeGameSessionQueuesOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", + "DescribeGameSessionQueuesInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", + "DescribeGameSessionQueuesOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "DescribeGameSessionsInput$StatusFilter": "Game session status to filter results on. Possible game session statuses include ACTIVE
, TERMINATED
, ACTIVATING
, and TERMINATING
(the last two are transitory).
Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "DescribeGameSessionsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "DescribeInstancesInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "DescribeInstancesOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "DescribeMatchmakingConfigurationsInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", - "DescribeMatchmakingConfigurationsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "DescribeMatchmakingRuleSetsInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", - "DescribeMatchmakingRuleSetsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "DescribePlayerSessionsInput$PlayerId": "Unique identifier for a player to retrieve player sessions for.
", + "DescribeMatchmakingConfigurationsInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", + "DescribeMatchmakingConfigurationsOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", + "DescribeMatchmakingRuleSetsInput$NextToken": "A token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", + "DescribeMatchmakingRuleSetsOutput$NextToken": "A token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", + "DescribePlayerSessionsInput$PlayerId": "A unique identifier for a player to retrieve player sessions for.
", "DescribePlayerSessionsInput$PlayerSessionStatusFilter": "Player session status to filter results on.
Possible player session statuses include the following:
RESERVED -- The player session request has been received, but the player has not yet connected to the server process and/or been validated.
ACTIVE -- The player has been validated by the server process and is currently connected.
COMPLETED -- The player connection has been dropped.
TIMEDOUT -- A player session request was received, but the player did not connect and/or was not validated within the timeout limit (60 seconds).
Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value. If a player session ID is specified, this parameter is ignored.
", "DescribePlayerSessionsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", "DescribeScalingPoliciesInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "DescribeScalingPoliciesOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "DesiredPlayerSession$PlayerId": "Unique identifier for a player to associate with the player session.
", - "Event$EventId": "Unique identifier for a fleet event.
", - "Event$ResourceId": "Unique identifier for an event resource, such as a fleet ID.
", + "DesiredPlayerSession$PlayerId": "A unique identifier for a player to associate with the player session.
", + "Event$EventId": "A unique identifier for a fleet event.
", + "Event$ResourceId": "A unique identifier for an event resource, such as a fleet ID.
", "Event$PreSignedLogUrl": "Location of stored logs with additional detail that is related to the event. This is useful for debugging issues. The URL is valid for 15 minutes. You can also access fleet creation logs through the Amazon GameLift console.
", "FleetAttributes$Description": "Human-readable description of the fleet.
", - "FleetAttributes$Name": "Descriptive label that is associated with a fleet. Fleet names do not need to be unique.
", + "FleetAttributes$Name": "A descriptive label that is associated with a fleet. Fleet names do not need to be unique.
", "FleetAttributes$ServerLaunchPath": "Path to a game server executable in the fleet's build, specified for fleets created before 2016-08-04 (or AWS SDK v. 0.12.16). Server launch paths for fleets created after this date are specified in the fleet's RuntimeConfiguration.
", "FleetAttributes$ServerLaunchParameters": "Game server launch parameters specified for fleets created before 2016-08-04 (or AWS SDK v. 0.12.16). Server launch parameters for fleets created after this date are specified in the fleet's RuntimeConfiguration.
", - "GameSession$GameSessionId": "Unique identifier for the game session. A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>
.
Descriptive label that is associated with a game session. Session names do not need to be unique.
", - "GameSession$CreatorId": "Unique identifier for a player. This ID is used to enforce a resource protection policy (if one exists), that limits the number of game sessions a player can create.
", - "GameSessionPlacement$GameSessionName": "Descriptive label that is associated with a game session. Session names do not need to be unique.
", - "GameSessionPlacement$GameSessionId": "Unique identifier for the game session. This value is set once the new game session is placed (placement status is FULFILLED
).
Identifier for the game session created by this placement request. This value is set once the new game session is placed (placement status is FULFILLED
). This identifier is unique across all regions. You can use this value as a GameSessionId
value as needed.
Name of the region where the game session created by this placement request is running. This value is set once the new game session is placed (placement status is FULFILLED
).
A unique identifier for the game session. A game session ARN has the following format: arn:aws:gamelift:<region>::gamesession/<fleet ID>/<custom ID string or idempotency token>
.
A descriptive label that is associated with a game session. Session names do not need to be unique.
", + "GameSession$CreatorId": "A unique identifier for a player. This ID is used to enforce a resource protection policy (if one exists), that limits the number of game sessions a player can create.
", + "GameSessionPlacement$GameSessionName": "A descriptive label that is associated with a game session. Session names do not need to be unique.
", + "GameSessionPlacement$GameSessionId": "A unique identifier for the game session. This value is set once the new game session is placed (placement status is FULFILLED
).
Identifier for the game session created by this placement request. This value is set once the new game session is placed (placement status is FULFILLED
). This identifier is unique across all Regions. You can use this value as a GameSessionId
value as needed.
Name of the Region where the game session created by this placement request is running. This value is set once the new game session is placed (placement status is FULFILLED
).
Location of the requested game session logs, available for download. This URL is valid for 15 minutes, after which S3 will reject any download request using this URL. You can request a new URL any time within the 14-day period that the logs are retained.
", "ListFleetsInput$NextToken": "Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "ListFleetsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "MatchedPlayerSession$PlayerId": "Unique identifier for a player
", - "MatchmakingConfiguration$Description": "Descriptive label that is associated with matchmaking configuration.
", - "PlacedPlayerSession$PlayerId": "Unique identifier for a player that is associated with this player session.
", - "Player$PlayerId": "Unique identifier for a player
", + "MatchedPlayerSession$PlayerId": "A unique identifier for a player
", + "MatchmakingConfiguration$Description": "A descriptive label that is associated with matchmaking configuration.
", + "PlacedPlayerSession$PlayerId": "A unique identifier for a player that is associated with this player session.
", + "Player$PlayerId": "A unique identifier for a player
", "Player$Team": "Name of the team that the player is assigned to in a match. Team names are defined in a matchmaking rule set.
", "PlayerAttributeMap$key": null, "PlayerDataMap$key": null, "PlayerIdList$member": null, - "PlayerLatency$PlayerId": "Unique identifier for a player associated with the latency data.
", - "PlayerLatency$RegionIdentifier": "Name of the region that is associated with the latency value.
", - "PlayerSession$PlayerId": "Unique identifier for a player that is associated with this player session.
", - "PlayerSession$GameSessionId": "Unique identifier for the game session that the player session is connected to.
", - "PutScalingPolicyInput$Name": "Descriptive label that is associated with a scaling policy. Policy names do not need to be unique. A fleet can have only one scaling policy with the same name.
", - "PutScalingPolicyOutput$Name": "Descriptive label that is associated with a scaling policy. Policy names do not need to be unique.
", - "ScalingPolicy$Name": "Descriptive label that is associated with a scaling policy. Policy names do not need to be unique.
", - "Script$Name": "Descriptive label that is associated with a script. Script names do not need to be unique.
", - "Script$Version": "Version that is associated with a build or script. Version strings do not need to be unique.
", + "PlayerLatency$PlayerId": "A unique identifier for a player associated with the latency data.
", + "PlayerLatency$RegionIdentifier": "Name of the Region that is associated with the latency value.
", + "PlayerSession$PlayerId": "A unique identifier for a player that is associated with this player session.
", + "PlayerSession$GameSessionId": "A unique identifier for the game session that the player session is connected to.
", + "PutScalingPolicyInput$Name": "A descriptive label that is associated with a scaling policy. Policy names do not need to be unique. A fleet can have only one scaling policy with the same name.
", + "PutScalingPolicyOutput$Name": "A descriptive label that is associated with a scaling policy. Policy names do not need to be unique.
", + "ScalingPolicy$Name": "A descriptive label that is associated with a scaling policy. Policy names do not need to be unique.
", + "Script$Name": "A descriptive label that is associated with a script. Script names do not need to be unique.
", + "Script$Version": "The version that is associated with a build or script. Version strings do not need to be unique.
", "SearchGameSessionsInput$FilterExpression": "String containing the search criteria for the session search. If no filter expression is included, the request returns results for all game sessions in the fleet that are in ACTIVE
status.
A filter expression can contain one or multiple conditions. Each condition consists of the following:
Operand -- Name of a game session attribute. Valid values are gameSessionName
, gameSessionId
, gameSessionProperties
, maximumSessions
, creationTimeMillis
, playerSessionCount
, hasAvailablePlayerSessions
.
Comparator -- Valid comparators are: =
, <>
, <
, >
, <=
, >=
.
Value -- Value to be searched for. Values may be numbers, boolean values (true/false) or strings depending on the operand. String values are case sensitive and must be enclosed in single quotes. Special characters must be escaped. Boolean and string values can only be used with the comparators =
and <>
. For example, the following filter expression searches on gameSessionName
: \"FilterExpression\": \"gameSessionName = 'Matt\\\\'s Awesome Game 1'\"
.
To chain multiple conditions in a single expression, use the logical keywords AND
, OR
, and NOT
and parentheses as needed. For example: x AND y AND NOT z
, NOT (x OR y)
.
Session search evaluates conditions from left to right using the following precedence rules:
=
, <>
, <
, >
, <=
, >=
Parentheses
NOT
AND
OR
For example, this filter expression retrieves game sessions hosting at least ten players that have an open player slot: \"maximumSessions>=10 AND hasAvailablePlayerSessions=true\"
.
Instructions on how to sort the search results. If no sort expression is included, the request returns results in random order. A sort expression consists of the following elements:
Operand -- Name of a game session attribute. Valid values are gameSessionName
, gameSessionId
, gameSessionProperties
, maximumSessions
, creationTimeMillis
, playerSessionCount
, hasAvailablePlayerSessions
.
Order -- Valid sort orders are ASC
(ascending) and DESC
(descending).
For example, this sort expression returns the oldest active sessions first: \"SortExpression\": \"creationTimeMillis ASC\"
. Results with a null value for the sort operand are returned at the end of the list.
Token that indicates the start of the next sequential page of results. Use the token that is returned with a previous call to this action. To start at the beginning of the result set, do not specify a value.
", "SearchGameSessionsOutput$NextToken": "Token that indicates where to resume retrieving results on the next call to this action. If no token is returned, these results represent the end of the list.
", - "ServerProcess$LaunchPath": "Location of the server executable in a custom game build or the name of the Realtime script file that contains the Init()
function. Game builds and Realtime scripts are installed on instances at the root:
Windows (for custom game builds only): C:\\game
. Example: \"C:\\game\\MyGame\\server.exe
\"
Linux: /local/game
. Examples: \"/local/game/MyGame/server.exe
\" or \"/local/game/MyRealtimeScript.js
\"
Optional list of parameters to pass to the server executable or Realtime script on launch.
", - "StartGameSessionPlacementInput$GameSessionName": "Descriptive label that is associated with a game session. Session names do not need to be unique.
", + "ServerProcess$LaunchPath": "The location of the server executable in a custom game build or the name of the Realtime script file that contains the Init()
function. Game builds and Realtime scripts are installed on instances at the root:
Windows (for custom game builds only): C:\\game
. Example: \"C:\\game\\MyGame\\server.exe
\"
Linux: /local/game
. Examples: \"/local/game/MyGame/server.exe
\" or \"/local/game/MyRealtimeScript.js
\"
An optional list of parameters to pass to the server executable or Realtime script on launch.
", + "StartGameSessionPlacementInput$GameSessionName": "A descriptive label that is associated with a game session. Session names do not need to be unique.
", "StringDoubleMap$key": null, "StringList$member": null, - "UpdateAliasInput$Description": "Human-readable description of an alias.
", - "UpdateBuildInput$Name": "Descriptive label that is associated with a build. Build names do not need to be unique.
", - "UpdateBuildInput$Version": "Version that is associated with a build or script. Version strings do not need to be unique.
", - "UpdateFleetAttributesInput$Name": "Descriptive label that is associated with a fleet. Fleet names do not need to be unique.
", + "UpdateAliasInput$Description": "A human-readable description of the alias.
", + "UpdateBuildInput$Name": "A descriptive label that is associated with a build. Build names do not need to be unique.
", + "UpdateBuildInput$Version": "Version information that is associated with a build or script. Version strings do not need to be unique.
", + "UpdateFleetAttributesInput$Name": "A descriptive label that is associated with a fleet. Fleet names do not need to be unique.
", "UpdateFleetAttributesInput$Description": "Human-readable description of a fleet.
", - "UpdateGameSessionInput$Name": "Descriptive label that is associated with a game session. Session names do not need to be unique.
", - "UpdateMatchmakingConfigurationInput$Description": "Descriptive label that is associated with matchmaking configuration.
", - "UpdateScriptInput$Name": "Descriptive label that is associated with a script. Script names do not need to be unique.
", - "UpdateScriptInput$Version": "Version that is associated with a build or script. Version strings do not need to be unique.
", - "VpcPeeringAuthorization$GameLiftAwsAccountId": "Unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.
", + "UpdateGameSessionInput$Name": "A descriptive label that is associated with a game session. Session names do not need to be unique.
", + "UpdateMatchmakingConfigurationInput$Description": "A descriptive label that is associated with matchmaking configuration.
", + "UpdateScriptInput$Name": "A descriptive label that is associated with a script. Script names do not need to be unique.
", + "UpdateScriptInput$Version": "The version that is associated with a build or script. Version strings do not need to be unique.
", + "VpcPeeringAuthorization$GameLiftAwsAccountId": "A unique identifier for the AWS account that you use to manage your Amazon GameLift fleet. You can find your Account ID in the AWS Management Console under account settings.
", "VpcPeeringAuthorization$PeerVpcAwsAccountId": "", - "VpcPeeringAuthorization$PeerVpcId": "Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", + "VpcPeeringAuthorization$PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", "VpcPeeringConnection$IpV4CidrBlock": "CIDR block of IPv4 addresses assigned to the VPC peering connection for the GameLift VPC. The peered VPC also has an IPv4 CIDR block associated with it; these blocks cannot overlap or the peering connection cannot be created.
", - "VpcPeeringConnection$VpcPeeringConnectionId": "Unique identifier that is automatically assigned to the connection record. This ID is referenced in VPC peering connection events, and is used when deleting a connection with DeleteVpcPeeringConnection.
", - "VpcPeeringConnection$PeerVpcId": "Unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", - "VpcPeeringConnection$GameLiftVpcId": "Unique identifier for the VPC that contains the Amazon GameLift fleet for this connection. This VPC is managed by Amazon GameLift and does not appear in your AWS account.
", + "VpcPeeringConnection$VpcPeeringConnectionId": "A unique identifier that is automatically assigned to the connection record. This ID is referenced in VPC peering connection events, and is used when deleting a connection with DeleteVpcPeeringConnection.
", + "VpcPeeringConnection$PeerVpcId": "A unique identifier for a VPC with resources to be accessed by your Amazon GameLift fleet. The VPC must be in the same Region where your fleet is deployed. Look up a VPC ID using the VPC Dashboard in the AWS Management Console. Learn more about VPC peering in VPC Peering with Amazon GameLift Fleets.
", + "VpcPeeringConnection$GameLiftVpcId": "A unique identifier for the VPC that contains the Amazon GameLift fleet for this connection. This VPC is managed by Amazon GameLift and does not appear in your AWS account.
", "VpcPeeringConnectionStatus$Code": "Code indicating the status of a VPC peering connection.
", "VpcPeeringConnectionStatus$Message": "Additional messaging associated with the connection status.
" } @@ -1544,7 +1607,7 @@ "base": null, "refs": { "Build$OperatingSystem": "Operating system that the game server binaries are built to run on. This value determines the type of fleet resources that you can use for this build.
", - "CreateBuildInput$OperatingSystem": "Operating system that the game server binaries are built to run on. This value determines the type of fleet resources that you can use for this build. If your game build contains multiple executables, they all must run on the same operating system. If an operating system is not specified when creating a build, Amazon GameLift uses the default value (WINDOWS_2012). This value cannot be changed later.
", + "CreateBuildInput$OperatingSystem": "The operating system that the game server binaries are built to run on. This value determines the type of fleet resources that you can use for this build. If your game build contains multiple executables, they all must run on the same operating system. If an operating system is not specified when creating a build, Amazon GameLift uses the default value (WINDOWS_2012). This value cannot be changed later.
", "FleetAttributes$OperatingSystem": "Operating system of the fleet's computing resources. A fleet's operating system depends on the OS specified for the build that is deployed on this fleet.
", "Instance$OperatingSystem": "Operating system that is running on this instance.
", "InstanceAccess$OperatingSystem": "Operating system that is running on the instance.
" @@ -1559,7 +1622,7 @@ "PlacedPlayerSessionList": { "base": null, "refs": { - "GameSessionPlacement$PlacedPlayerSessions": "Collection of information on player sessions created in response to the game session placement request. These player sessions are created only once a new game session is successfully placed (placement status is FULFILLED
). This information includes the player ID (as provided in the placement request) and the corresponding player session ID. Retrieve full player sessions by calling DescribePlayerSessions with the player session ID.
A collection of information on player sessions created in response to the game session placement request. These player sessions are created only once a new game session is successfully placed (placement status is FULFILLED
). This information includes the player ID (as provided in the placement request) and the corresponding player session ID. Retrieve full player sessions by calling DescribePlayerSessions with the player session ID.
Collection of key:value pairs containing player information for use in matchmaking. Player attribute keys must match the playerAttributes used in a matchmaking rule set. Example: \"PlayerAttributes\": {\"skill\": {\"N\": \"23\"}, \"gameMode\": {\"S\": \"deathmatch\"}}
.
A collection of key:value pairs containing player information for use in matchmaking. Player attribute keys must match the playerAttributes used in a matchmaking rule set. Example: \"PlayerAttributes\": {\"skill\": {\"N\": \"23\"}, \"gameMode\": {\"S\": \"deathmatch\"}}
.
Regional latency information for a player, used when requesting a new game session with StartGameSessionPlacement. This value indicates the amount of time lag that exists when the player is connected to a fleet in the specified region. The relative difference between a player's latency values for multiple regions are used to determine which fleets are best suited to place a new game session for the player.
", + "base": "Regional latency information for a player, used when requesting a new game session with StartGameSessionPlacement. This value indicates the amount of time lag that exists when the player is connected to a fleet in the specified Region. The relative difference between a player's latency values for multiple Regions are used to determine which fleets are best suited to place a new game session for the player.
", "refs": { "PlayerLatencyList$member": null } @@ -1604,12 +1667,12 @@ "PlayerLatencyList": { "base": null, "refs": { - "GameSessionPlacement$PlayerLatencies": "Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS regions.
", - "StartGameSessionPlacementInput$PlayerLatencies": "Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS regions. This information is used to try to place the new game session where it can offer the best possible gameplay experience for the players.
" + "GameSessionPlacement$PlayerLatencies": "Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS Regions.
", + "StartGameSessionPlacementInput$PlayerLatencies": "Set of values, expressed in milliseconds, indicating the amount of latency that a player experiences when connected to AWS Regions. This information is used to try to place the new game session where it can offer the best possible gameplay experience for the players.
" } }, "PlayerLatencyPolicy": { - "base": "Queue setting that determines the highest latency allowed for individual players when placing a game session. When a latency policy is in force, a game session cannot be placed at any destination in a region where a player is reporting latency higher than the cap. Latency policies are only enforced when the placement request contains player latency information.
", + "base": "Queue setting that determines the highest latency allowed for individual players when placing a game session. When a latency policy is in force, a game session cannot be placed with any fleet in a Region where a player reports latency higher than the cap. Latency policies are only enforced when the placement request contains player latency information.
", "refs": { "PlayerLatencyPolicyList$member": null } @@ -1617,16 +1680,16 @@ "PlayerLatencyPolicyList": { "base": null, "refs": { - "CreateGameSessionQueueInput$PlayerLatencyPolicies": "Collection of latency policies to apply when processing game sessions placement requests with player latency information. Multiple policies are evaluated in order of the maximum latency value, starting with the lowest latency values. With just one policy, it is enforced at the start of the game session placement for the duration period. With multiple policies, each policy is enforced consecutively for its duration period. For example, a queue might enforce a 60-second policy followed by a 120-second policy, and then no policy for the remainder of the placement. A player latency policy must set a value for MaximumIndividualPlayerLatencyMilliseconds; if none is set, this API requests will fail.
", - "GameSessionQueue$PlayerLatencyPolicies": "Collection of latency policies to apply when processing game sessions placement requests with player latency information. Multiple policies are evaluated in order of the maximum latency value, starting with the lowest latency values. With just one policy, it is enforced at the start of the game session placement for the duration period. With multiple policies, each policy is enforced consecutively for its duration period. For example, a queue might enforce a 60-second policy followed by a 120-second policy, and then no policy for the remainder of the placement.
", - "UpdateGameSessionQueueInput$PlayerLatencyPolicies": "Collection of latency policies to apply when processing game sessions placement requests with player latency information. Multiple policies are evaluated in order of the maximum latency value, starting with the lowest latency values. With just one policy, it is enforced at the start of the game session placement for the duration period. With multiple policies, each policy is enforced consecutively for its duration period. For example, a queue might enforce a 60-second policy followed by a 120-second policy, and then no policy for the remainder of the placement. When updating policies, provide a complete collection of policies.
" + "CreateGameSessionQueueInput$PlayerLatencyPolicies": "A collection of latency policies to apply when processing game sessions placement requests with player latency information. Multiple policies are evaluated in order of the maximum latency value, starting with the lowest latency values. With just one policy, the policy is enforced at the start of the game session placement for the duration period. With multiple policies, each policy is enforced consecutively for its duration period. For example, a queue might enforce a 60-second policy followed by a 120-second policy, and then no policy for the remainder of the placement. A player latency policy must set a value for MaximumIndividualPlayerLatencyMilliseconds
. If none is set, this API request fails.
A collection of latency policies to apply when processing game sessions placement requests with player latency information. Multiple policies are evaluated in order of the maximum latency value, starting with the lowest latency values. With just one policy, the policy is enforced at the start of the game session placement for the duration period. With multiple policies, each policy is enforced consecutively for its duration period. For example, a queue might enforce a 60-second policy followed by a 120-second policy, and then no policy for the remainder of the placement.
", + "UpdateGameSessionQueueInput$PlayerLatencyPolicies": "A collection of latency policies to apply when processing game sessions placement requests with player latency information. Multiple policies are evaluated in order of the maximum latency value, starting with the lowest latency values. With just one policy, the policy is enforced at the start of the game session placement for the duration period. With multiple policies, each policy is enforced consecutively for its duration period. For example, a queue might enforce a 60-second policy followed by a 120-second policy, and then no policy for the remainder of the placement. When updating policies, provide a complete collection of policies.
" } }, "PlayerList": { "base": null, "refs": { "MatchmakingTicket$Players": "A set of Player
objects, each representing a player to find matches for. Players are identified by a unique player ID and may include latency data for use during matchmaking. If the ticket is in status COMPLETED
, the Player
objects include the team the players were assigned to in the resulting match.
Match information on all players that are currently assigned to the game session. This information is used by the matchmaker to find new players and add them to the existing game.
PlayerID, PlayerAttributes, Team -\\\\- This information is maintained in the GameSession object, MatchmakerData
property, for all players who are currently assigned to the game session. The matchmaker data is in JSON syntax, formatted as a string. For more details, see Match Data.
LatencyInMs -\\\\- If the matchmaker uses player latency, include a latency value, in milliseconds, for the region that the game session is currently in. Do not include latency values for any other region.
Match information on all players that are currently assigned to the game session. This information is used by the matchmaker to find new players and add them to the existing game.
PlayerID, PlayerAttributes, Team -\\\\- This information is maintained in the GameSession object, MatchmakerData
property, for all players who are currently assigned to the game session. The matchmaker data is in JSON syntax, formatted as a string. For more details, see Match Data.
LatencyInMs -\\\\- If the matchmaker uses player latency, include a latency value, in milliseconds, for the Region that the game session is currently in. Do not include latency values for any other Region.
Information on each player to be matched. This information must include a player ID, and may contain player attributes and latency data to be used in the matchmaking process. After a successful match, Player
objects contain the name of the team the player is assigned to.
Unique identifier for a player session to retrieve.
", - "MatchedPlayerSession$PlayerSessionId": "Unique identifier for a player session
", - "PlacedPlayerSession$PlayerSessionId": "Unique identifier for a player session.
", - "PlayerSession$PlayerSessionId": "Unique identifier for a player session.
" + "DescribePlayerSessionsInput$PlayerSessionId": "A unique identifier for a player session to retrieve.
", + "MatchedPlayerSession$PlayerSessionId": "A unique identifier for a player session
", + "PlacedPlayerSession$PlayerSessionId": "A unique identifier for a player session.
", + "PlayerSession$PlayerSessionId": "A unique identifier for a player session.
" } }, "PlayerSessionList": { "base": null, "refs": { - "CreatePlayerSessionsOutput$PlayerSessions": "Collection of player session objects created for the added players.
", - "DescribePlayerSessionsOutput$PlayerSessions": "Collection of objects containing properties for each player session that matches the request.
" + "CreatePlayerSessionsOutput$PlayerSessions": "A collection of player session objects created for the added players.
", + "DescribePlayerSessionsOutput$PlayerSessions": "A collection of objects containing properties for each player session that matches the request.
" } }, "PlayerSessionStatus": { @@ -1669,8 +1732,8 @@ "PolicyType": { "base": null, "refs": { - "PutScalingPolicyInput$PolicyType": "Type of scaling policy to create. For a target-based policy, set the parameter MetricName to 'PercentAvailableGameSessions' and specify a TargetConfiguration. For a rule-based policy set the following parameters: MetricName, ComparisonOperator, Threshold, EvaluationPeriods, ScalingAdjustmentType, and ScalingAdjustment.
", - "ScalingPolicy$PolicyType": "Type of scaling policy to create. For a target-based policy, set the parameter MetricName to 'PercentAvailableGameSessions' and specify a TargetConfiguration. For a rule-based policy set the following parameters: MetricName, ComparisonOperator, Threshold, EvaluationPeriods, ScalingAdjustmentType, and ScalingAdjustment.
" + "PutScalingPolicyInput$PolicyType": "The type of scaling policy to create. For a target-based policy, set the parameter MetricName to 'PercentAvailableGameSessions' and specify a TargetConfiguration. For a rule-based policy set the following parameters: MetricName, ComparisonOperator, Threshold, EvaluationPeriods, ScalingAdjustmentType, and ScalingAdjustment.
", + "ScalingPolicy$PolicyType": "The type of scaling policy to create. For a target-based policy, set the parameter MetricName to 'PercentAvailableGameSessions' and specify a TargetConfiguration. For a rule-based policy set the following parameters: MetricName, ComparisonOperator, Threshold, EvaluationPeriods, ScalingAdjustmentType, and ScalingAdjustment.
" } }, "PortNumber": { @@ -1678,49 +1741,49 @@ "refs": { "GameSession$Port": "Port number for the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number.
", "GameSessionPlacement$Port": "Port number for the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number. This value is set once the new game session is placed (placement status is FULFILLED
).
Starting value for a range of allowed port numbers.
", - "IpPermission$ToPort": "Ending value for a range of allowed port numbers. Port numbers are end-inclusive. This value must be higher than FromPort
.
A starting value for a range of allowed port numbers.
", + "IpPermission$ToPort": "An ending value for a range of allowed port numbers. Port numbers are end-inclusive. This value must be higher than FromPort
.
Port number for the game session. To connect to a Amazon GameLift server process, an app needs both the IP address and port number.
" } }, "PositiveInteger": { "base": null, "refs": { - "DescribeFleetAttributesInput$Limit": "Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is limited to 10.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. If a player session ID is specified, this parameter is ignored.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is ignored when the request specifies one or a list of fleet IDs.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. This parameter is limited to 10.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. If a player session ID is specified, this parameter is ignored.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Port number for the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number.
", "LatencyMap$value": null, - "ListAliasesInput$Limit": "Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
Length of time (in minutes) the metric must be at or beyond the threshold before a scaling event is triggered.
", "ScalingPolicy$EvaluationPeriods": "Length of time (in minutes) the metric must be at or beyond the threshold before a scaling event is triggered.
", - "SearchGameSessionsInput$Limit": "Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. The maximum number of results returned is 20, even if this value is not set or is set higher than 20.
Number of server processes using this configuration to run concurrently on an instance.
" + "SearchGameSessionsInput$Limit": "The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages. The maximum number of results returned is 20, even if this value is not set or is set higher than 20.
The number of server processes that use this configuration to run concurrently on an instance.
" } }, "PositiveLong": { "base": null, "refs": { "Build$SizeOnDisk": "File size of the uploaded game build, expressed in bytes. When the build status is INITIALIZED
, this value is 0.
File size of the uploaded Realtime script, expressed in bytes. When files are uploaded from an S3 location, this value remains at \"0\".
" + "Script$SizeOnDisk": "The file size of the uploaded Realtime script, expressed in bytes. When files are uploaded from an S3 location, this value remains at \"0\".
" } }, "ProtectionPolicy": { "base": null, "refs": { - "CreateFleetInput$NewGameSessionProtectionPolicy": "Game session protection policy to apply to all instances in this fleet. If this parameter is not set, instances in this fleet default to no protection. You can change a fleet's protection policy using UpdateFleetAttributes, but this change will only affect sessions created after the policy change. You can also set protection for individual instances using UpdateGameSession.
NoProtection -- The game session can be terminated during a scale-down event.
FullProtection -- If the game session is in an ACTIVE
status, it cannot be terminated during a scale-down event.
Type of game session protection to set for all new instances started in the fleet.
NoProtection -- The game session can be terminated during a scale-down event.
FullProtection -- If the game session is in an ACTIVE
status, it cannot be terminated during a scale-down event.
A game session protection policy to apply to all instances in this fleet. If this parameter is not set, instances in this fleet default to no protection. You can change a fleet's protection policy using UpdateFleetAttributes, but this change will only affect sessions created after the policy change. You can also set protection for individual instances using UpdateGameSession.
NoProtection - The game session can be terminated during a scale-down event.
FullProtection - If the game session is in an ACTIVE
status, it cannot be terminated during a scale-down event.
The type of game session protection to set for all new instances started in the fleet.
NoProtection -- The game session can be terminated during a scale-down event.
FullProtection -- If the game session is in an ACTIVE
status, it cannot be terminated during a scale-down event.
Current status of protection for the game session.
NoProtection -- The game session can be terminated during a scale-down event.
FullProtection -- If the game session is in an ACTIVE
status, it cannot be terminated during a scale-down event.
Game session protection policy to apply to all new instances created in this fleet. Instances that already exist are not affected. You can set protection for individual instances using UpdateGameSession.
NoProtection -- The game session can be terminated during a scale-down event.
FullProtection -- If the game session is in an ACTIVE
status, it cannot be terminated during a scale-down event.
Game session protection policy to apply to this game session only.
NoProtection -- The game session can be terminated during a scale-down event.
FullProtection -- If the game session is in an ACTIVE
status, it cannot be terminated during a scale-down event.
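For illustration, a hedged sketch of applying `FullProtection` to a single game session. The enum constant name is an assumption based on the `NoProtection`/`FullProtection` values listed above, the session ID is a placeholder, and imports and client setup are as in the pagination sketch earlier:

```go
// Protect one game session from termination during scale-down while it is ACTIVE.
func protectSession(ctx context.Context, svc *gamelift.Client, sessionID string) error {
	req := svc.UpdateGameSessionRequest(&gamelift.UpdateGameSessionInput{
		GameSessionId:    aws.String(sessionID),
		ProtectionPolicy: gamelift.ProtectionPolicyFullProtection, // assumed constant name
	})
	_, err := req.Send(ctx)
	return err
}
```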
Amazon Resource Name (ARN) that is assigned to a game session queue and uniquely identifies it. Format is arn:aws:gamelift:<region>:<aws account>:gamesessionqueue/<queue name>
. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any region.
Amazon Resource Name (ARN) that is assigned to a game session queue and uniquely identifies it. Format is arn:aws:gamelift:<region>:<aws account>:gamesessionqueue/<queue name>
. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any region.
Amazon Resource Name (ARN) that is assigned to a game session queue and uniquely identifies it. Format is arn:aws:gamelift:<region>:<aws account>:gamesessionqueue/<queue name>
. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any region.
Amazon Resource Name (ARN) that is assigned to a GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any Region.
", + "MatchmakingConfiguration$GameSessionQueueArns": "Amazon Resource Name (ARN) that is assigned to a GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. GameLift uses the listed queues when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any Region.
", + "UpdateMatchmakingConfigurationInput$GameSessionQueueArns": "Amazon Resource Name (ARN) that is assigned to a GameLift game session queue resource and uniquely identifies it. ARNs are unique across all Regions. These queues are used when placing game sessions for matches that are created with this matchmaking configuration. Queues can be located in any Region.
" } }, "RequestUploadCredentialsInput": { @@ -1765,67 +1828,67 @@ } }, "ResourceCreationLimitPolicy": { - "base": "Policy that limits the number of game sessions a player can create on the same fleet. This optional policy gives game owners control over how players can consume available game server resources. A resource creation policy makes the following statement: \"An individual player can create a maximum number of new game sessions within a specified time period\".
The policy is evaluated when a player tries to create a new game session. For example, with a policy of 10 new game sessions and a time period of 60 minutes, on receiving a CreateGameSession
request, Amazon GameLift checks that the player (identified by CreatorId
) has created fewer than 10 game sessions in the past 60 minutes.
A policy that limits the number of game sessions a player can create on the same fleet. This optional policy gives game owners control over how players can consume available game server resources. A resource creation policy makes the following statement: \"An individual player can create a maximum number of new game sessions within a specified time period\".
The policy is evaluated when a player tries to create a new game session. For example: Assume you have a policy of 10 new game sessions and a time period of 60 minutes. On receiving a CreateGameSession
request, Amazon GameLift checks that the player (identified by CreatorId
) has created fewer than 10 game sessions in the past 60 minutes.
Policy that limits the number of game sessions an individual player can create over a span of time for this fleet.
", + "CreateFleetInput$ResourceCreationLimitPolicy": "A policy that limits the number of game sessions an individual player can create over a span of time for this fleet.
", "FleetAttributes$ResourceCreationLimitPolicy": "Fleet policy to limit the number of game sessions an individual player can create over a span of time.
", "UpdateFleetAttributesInput$ResourceCreationLimitPolicy": "Policy that limits the number of game sessions an individual player can create over a span of time.
" } }, "RoutingStrategy": { - "base": "Routing configuration for a fleet alias.
", + "base": "The routing configuration for a fleet alias.
", "refs": { - "Alias$RoutingStrategy": "Alias configuration for the alias, including routing type and settings.
", - "CreateAliasInput$RoutingStrategy": "Object that specifies the fleet and routing type to use for the alias.
", - "UpdateAliasInput$RoutingStrategy": "Object that specifies the fleet and routing type to use for the alias.
" + "Alias$RoutingStrategy": "The routing configuration, including routing type and fleet target, for the alias.
", + "CreateAliasInput$RoutingStrategy": "The routing configuration, including routing type and fleet target, for the alias.
", + "UpdateAliasInput$RoutingStrategy": "The routing configuration, including routing type and fleet target, for the alias.
" } }, "RoutingStrategyType": { "base": null, "refs": { - "ListAliasesInput$RoutingStrategyType": "Type of routing to filter results on. Use this parameter to retrieve only aliases of a certain type. To retrieve all aliases, leave this parameter empty.
Possible routing types include the following:
SIMPLE -- The alias resolves to one specific fleet. Use this type when routing to active fleets.
TERMINAL -- The alias does not resolve to a fleet but instead can be used to display a message to the user. A terminal alias throws a TerminalRoutingStrategyException with the RoutingStrategy message embedded.
Type of routing strategy.
Possible routing types include the following:
SIMPLE -- The alias resolves to one specific fleet. Use this type when routing to active fleets.
TERMINAL -- The alias does not resolve to a fleet but instead can be used to display a message to the user. A terminal alias throws a TerminalRoutingStrategyException with the RoutingStrategy message embedded.
The routing type to filter results on. Use this parameter to retrieve only aliases with a certain routing type. To retrieve all aliases, leave this parameter empty.
Possible routing types include the following:
SIMPLE -- The alias resolves to one specific fleet. Use this type when routing to active fleets.
TERMINAL -- The alias does not resolve to a fleet but instead can be used to display a message to the user. A terminal alias throws a TerminalRoutingStrategyException with the RoutingStrategy message embedded.
The type of routing strategy for the alias.
Possible routing types include the following:
SIMPLE - The alias resolves to one specific fleet. Use this type when routing to active fleets.
TERMINAL - The alias does not resolve to a fleet but instead can be used to display a message to the user. A terminal alias throws a TerminalRoutingStrategyException with the RoutingStrategy message embedded.
Collection of matchmaking rules, formatted as a JSON string. Comments are not allowed in JSON, but most elements support a description field.
", - "MatchmakingRuleSet$RuleSetBody": "Collection of matchmaking rules, formatted as a JSON string. Comments are not allowed in JSON, but most elements support a description field.
", - "ValidateMatchmakingRuleSetInput$RuleSetBody": "Collection of matchmaking rules to validate, formatted as a JSON string.
" + "CreateMatchmakingRuleSetInput$RuleSetBody": "A collection of matchmaking rules, formatted as a JSON string. Comments are not allowed in JSON, but most elements support a description field.
", + "MatchmakingRuleSet$RuleSetBody": "A collection of matchmaking rules, formatted as a JSON string. Comments are not allowed in JSON, but most elements support a description field.
", + "ValidateMatchmakingRuleSetInput$RuleSetBody": "A collection of matchmaking rules to validate, formatted as a JSON string.
" } }, "RuleSetLimit": { "base": null, "refs": { - "DescribeMatchmakingRuleSetsInput$Limit": "Maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
The maximum number of results to return. Use this parameter with NextToken
to get results as a set of sequential pages.
A collection of server process configurations that describe what processes to run on each instance in a fleet. Server processes run either a custom game build executable or a Realtime Servers script. Each instance in the fleet starts the specified server processes and continues to start new processes as existing processes end. An instance regularly checks for an updated run-time configuration.
The run-time configuration enables the instances in a fleet to run multiple processes simultaneously. Learn more about Running Multiple Processes on a Fleet .
A Amazon GameLift instance is limited to 50 processes running simultaneously. To calculate the total number of processes in a run-time configuration, add the values of the ConcurrentExecutions
parameter for each ServerProcess object.
Describe fleets:
Update fleets:
Manage fleet actions:
A collection of server process configurations that describe what processes to run on each instance in a fleet. Server processes run either a custom game build executable or a Realtime Servers script. Each instance in the fleet starts the specified server processes and continues to start new processes as existing processes end. Each instance regularly checks for an updated runtime configuration.
The runtime configuration enables the instances in a fleet to run multiple processes simultaneously. Learn more about Running Multiple Processes on a Fleet .
An Amazon GameLift instance is limited to 50 processes running simultaneously. To calculate the total number of processes in a runtime configuration, add the values of the ConcurrentExecutions
parameter for each ServerProcess object.
Manage fleet actions:
Instructions for launching server processes on each instance in the fleet. Server processes run either a custom game build executable or a Realtime Servers script. The run-time configuration lists the types of server processes to run on an instance and includes the following configuration settings: the server executable or launch script file, launch parameters, and the number of processes to run concurrently on each instance. A CreateFleet request must include a run-time configuration with at least one server process configuration.
", + "CreateFleetInput$RuntimeConfiguration": "Instructions for launching server processes on each instance in the fleet. Server processes run either a custom game build executable or a Realtime script. The runtime configuration defines the server executables or launch script file, launch parameters, and the number of processes to run concurrently on each instance. When creating a fleet, the runtime configuration must have at least one server process configuration; otherwise the request fails with an invalid request exception. (This parameter replaces the parameters ServerLaunchPath
and ServerLaunchParameters
, although requests that contain values for these parameters instead of a runtime configuration will continue to work.) This parameter is required unless the parameters ServerLaunchPath
and ServerLaunchParameters
are defined. Runtime configuration replaced these parameters, but fleets that use them will continue to work.
Instructions describing how server processes should be launched and maintained on each instance in the fleet.
", - "UpdateRuntimeConfigurationInput$RuntimeConfiguration": "Instructions for launching server processes on each instance in the fleet. Server processes run either a custom game build executable or a Realtime Servers script. The run-time configuration lists the types of server processes to run on an instance and includes the following configuration settings: the server executable or launch script file, launch parameters, and the number of processes to run concurrently on each instance. A CreateFleet request must include a run-time configuration with at least one server process configuration.
", - "UpdateRuntimeConfigurationOutput$RuntimeConfiguration": "The run-time configuration currently in force. If the update was successful, this object matches the one in the request.
" + "UpdateRuntimeConfigurationInput$RuntimeConfiguration": "Instructions for launching server processes on each instance in the fleet. Server processes run either a custom game build executable or a Realtime Servers script. The runtime configuration lists the types of server processes to run on an instance and includes the following configuration settings: the server executable or launch script file, launch parameters, and the number of processes to run concurrently on each instance. A CreateFleet request must include a runtime configuration with at least one server process configuration.
", + "UpdateRuntimeConfigurationOutput$RuntimeConfiguration": "The runtime configuration currently in force. If the update was successful, this object matches the one in the request.
" } }, "S3Location": { - "base": "Location in Amazon Simple Storage Service (Amazon S3) where build or script files are stored for access by Amazon GameLift. This location is specified in CreateBuild, CreateScript, and UpdateScript requests.
", + "base": "The location in Amazon S3 where build or script files are stored for access by Amazon GameLift. This location is specified in CreateBuild, CreateScript, and UpdateScript requests.
", "refs": { - "CreateBuildInput$StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key, as well as a the ARN for a role that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket must be in the same region that you want to create a new build in.
", + "CreateBuildInput$StorageLocation": "Information indicating where your game build files are stored. Use this parameter only when creating a build with files stored in an Amazon S3 bucket that you own. The storage location must specify an Amazon S3 bucket name and key. The location must also specify a role ARN that you set up to allow Amazon GameLift to access your Amazon S3 bucket. The S3 bucket and your new build must be in the same Region.
", "CreateBuildOutput$StorageLocation": "Amazon S3 location for your game build file, including bucket name and key.
", - "CreateScriptInput$StorageLocation": "Location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the ObjectVersion
parameter to specify an earlier version.
The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the ObjectVersion
parameter to specify an earlier version.
Amazon S3 path and key, identifying where the game build files are stored.
", "Script$StorageLocation": null, - "UpdateScriptInput$StorageLocation": "Location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the ObjectVersion
parameter to specify an earlier version.
The location of the Amazon S3 bucket where a zipped file containing your Realtime scripts is stored. The storage location must specify the Amazon S3 bucket name, the zip file name (the \"key\"), and a role ARN that allows Amazon GameLift to access the Amazon S3 storage location. The S3 bucket must be in the same Region where you want to create a new script. By default, Amazon GameLift uploads the latest version of the zip file; if you have S3 object versioning turned on, you can use the ObjectVersion
parameter to specify an earlier version.
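A sketch of `CreateScript` with an S3 `StorageLocation`, using the fields described above. Bucket, key, and role ARN are placeholders, and the bucket must be in the same Region as the new script; client setup as before:

```go
func createScriptFromS3(ctx context.Context, svc *gamelift.Client) error {
	req := svc.CreateScriptRequest(&gamelift.CreateScriptInput{
		Name:    aws.String("my-realtime-script"), // placeholder
		Version: aws.String("1.0"),
		StorageLocation: &gamelift.S3Location{
			Bucket:  aws.String("my-script-bucket"), // same Region as the script
			Key:     aws.String("scripts/realtime.zip"),
			RoleArn: aws.String("arn:aws:iam::123456789012:role/gamelift-s3-access"),
			// ObjectVersion: aws.String("..."), // optional, for versioned buckets
		},
	})
	_, err := req.Send(ctx)
	return err
}
```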
Type of adjustment to make to a fleet's instance count (see FleetCapacity):
ChangeInCapacity -- add (or subtract) the scaling adjustment value from the current instance count. Positive values scale up while negative values scale down.
ExactCapacity -- set the instance count to the scaling adjustment value.
PercentChangeInCapacity -- increase or reduce the current instance count by the scaling adjustment, read as a percentage. Positive values scale up while negative values scale down; for example, a value of \"-10\" scales the fleet down by 10%.
Type of adjustment to make to a fleet's instance count (see FleetCapacity):
ChangeInCapacity -- add (or subtract) the scaling adjustment value from the current instance count. Positive values scale up while negative values scale down.
ExactCapacity -- set the instance count to the scaling adjustment value.
PercentChangeInCapacity -- increase or reduce the current instance count by the scaling adjustment, read as a percentage. Positive values scale up while negative values scale down.
The type of adjustment to make to a fleet's instance count (see FleetCapacity):
ChangeInCapacity -- add (or subtract) the scaling adjustment value from the current instance count. Positive values scale up while negative values scale down.
ExactCapacity -- set the instance count to the scaling adjustment value.
PercentChangeInCapacity -- increase or reduce the current instance count by the scaling adjustment, read as a percentage. Positive values scale up while negative values scale down; for example, a value of \"-10\" scales the fleet down by 10%.
The type of adjustment to make to a fleet's instance count (see FleetCapacity):
ChangeInCapacity -- add (or subtract) the scaling adjustment value from the current instance count. Positive values scale up while negative values scale down.
ExactCapacity -- set the instance count to the scaling adjustment value.
PercentChangeInCapacity -- increase or reduce the current instance count by the scaling adjustment, read as a percentage. Positive values scale up while negative values scale down.
Collection of objects containing the scaling policies matching the request.
" + "DescribeScalingPoliciesOutput$ScalingPolicies": "A collection of objects containing the scaling policies matching the request.
" } }, "ScalingStatusType": { @@ -1850,28 +1913,35 @@ "Script": { "base": "Properties describing a Realtime script.
Related operations
", "refs": { - "CreateScriptOutput$Script": "The newly created script record with a unique script ID. The new script's storage location reflects an Amazon S3 location: (1) If the script was uploaded from an S3 bucket under your account, the storage location reflects the information that was provided in the CreateScript request; (2) If the script file was uploaded from a local zip file, the storage location reflects an S3 location controls by the Amazon GameLift service.
", - "DescribeScriptOutput$Script": "Set of properties describing the requested script.
", + "CreateScriptOutput$Script": "The newly created script record with a unique script ID and ARN. The new script's storage location reflects an Amazon S3 location: (1) If the script was uploaded from an S3 bucket under your account, the storage location reflects the information that was provided in the CreateScript request; (2) If the script file was uploaded from a local zip file, the storage location reflects an S3 location controls by the Amazon GameLift service.
", + "DescribeScriptOutput$Script": "A set of properties describing the requested script.
", "ScriptList$member": null, "UpdateScriptOutput$Script": "The newly created script record with a unique script ID. The new script's storage location reflects an Amazon S3 location: (1) If the script was uploaded from an S3 bucket under your account, the storage location reflects the information that was provided in the CreateScript request; (2) If the script file was uploaded from a local zip file, the storage location reflects an S3 location controls by the Amazon GameLift service.
" } }, + "ScriptArn": { + "base": null, + "refs": { + "FleetAttributes$ScriptArn": "The Amazon Resource Name (ARN) associated with the GameLift script resource that is deployed on instances in this fleet. In a GameLift script ARN, the resource ID matches the ScriptId value.
", + "Script$ScriptArn": "Amazon Resource Name (ARN) that is assigned to a GameLift script resource and uniquely identifies it. ARNs are unique across all Regions. In a GameLift script ARN, the resource ID matches the ScriptId value.
" + } + }, "ScriptId": { "base": null, "refs": { - "CreateFleetInput$ScriptId": "Unique identifier for a Realtime script to be deployed on the new fleet. The Realtime script must have been successfully uploaded to Amazon GameLift. This fleet setting cannot be changed once the fleet is created.
", - "DeleteScriptInput$ScriptId": "Unique identifier for a Realtime script to delete.
", - "DescribeScriptInput$ScriptId": "Unique identifier for a Realtime script to retrieve properties for.
", - "FleetAttributes$ScriptId": "Unique identifier for a Realtime script.
", - "ListFleetsInput$ScriptId": "Unique identifier for a Realtime script to return fleets for. Use this parameter to return only fleets using the specified script. To retrieve all fleets, leave this parameter empty.
", - "Script$ScriptId": "Unique identifier for a Realtime script
", - "UpdateScriptInput$ScriptId": "Unique identifier for a Realtime script to update.
" + "CreateFleetInput$ScriptId": "A unique identifier for a Realtime script to be deployed on the new fleet. You can use either the script ID or ARN value. The Realtime script must have been successfully uploaded to Amazon GameLift. This fleet setting cannot be changed once the fleet is created.
", + "DeleteScriptInput$ScriptId": "A unique identifier for a Realtime script to delete. You can use either the script ID or ARN value.
", + "DescribeScriptInput$ScriptId": "A unique identifier for a Realtime script to retrieve properties for. You can use either the script ID or ARN value.
", + "FleetAttributes$ScriptId": "A unique identifier for a Realtime script.
", + "ListFleetsInput$ScriptId": "A unique identifier for a Realtime script to return fleets for. Use this parameter to return only fleets using the specified script. Use either the script ID or ARN value.To retrieve all fleets, leave this parameter empty.
", + "Script$ScriptId": "A unique identifier for a Realtime script
", + "UpdateScriptInput$ScriptId": "A unique identifier for a Realtime script to update. You can use either the script ID or ARN value.
" } }, "ScriptList": { "base": null, "refs": { - "ListScriptsOutput$Scripts": "Set of properties describing the requested script.
" + "ListScriptsOutput$Scripts": "A set of properties describing the requested script.
" } }, "SearchGameSessionsInput": { @@ -1893,15 +1963,15 @@ "ServerProcessList": { "base": null, "refs": { - "RuntimeConfiguration$ServerProcesses": "Collection of server process configurations that describe which server processes to run on each instance in a fleet.
" + "RuntimeConfiguration$ServerProcesses": "A collection of server process configurations that describe which server processes to run on each instance in a fleet.
" } }, "SnsArnStringModel": { "base": null, "refs": { - "CreateMatchmakingConfigurationInput$NotificationTarget": "SNS topic ARN that is set up to receive matchmaking notifications.
", - "MatchmakingConfiguration$NotificationTarget": "SNS topic ARN that is set up to receive matchmaking notifications.
", - "UpdateMatchmakingConfigurationInput$NotificationTarget": "SNS topic ARN that is set up to receive matchmaking notifications. See Setting up Notifications for Matchmaking for more information.
" + "CreateMatchmakingConfigurationInput$NotificationTarget": "An SNS topic ARN that is set up to receive matchmaking notifications.
", + "MatchmakingConfiguration$NotificationTarget": "An SNS topic ARN that is set up to receive matchmaking notifications.
", + "UpdateMatchmakingConfigurationInput$NotificationTarget": "An SNS topic ARN that is set up to receive matchmaking notifications. See Setting up Notifications for Matchmaking for more information.
" } }, "StartFleetActionsInput": { @@ -1983,7 +2053,7 @@ "StringList": { "base": null, "refs": { - "AcceptMatchInput$PlayerIds": "Unique identifier for a player delivering the response. This parameter can include one or multiple player IDs.
", + "AcceptMatchInput$PlayerIds": "A unique identifier for a player delivering the response. This parameter can include one or multiple player IDs.
", "AttributeValue$SL": "For a list of up to 10 strings. Maximum length for each string is 100 characters. Duplicate values are not recognized; all occurrences of the repeated value after the first of a repeated value are ignored.
", "CreateFleetInput$LogPaths": "This parameter is no longer used. Instead, to specify where Amazon GameLift should store log files once a server process shuts down, use the Amazon GameLift server API ProcessReady()
and specify one or more directory paths in logParameters
. See more information in the Server API Reference.
Location of default log files. When a server process is shut down, Amazon GameLift captures and stores any log files in this location. These logs are in addition to game session logs; see more on game session logs in the Amazon GameLift Developer Guide. If no default log path for a fleet is specified, Amazon GameLift automatically uploads logs that are stored on each instance at C:\\game\\logs
(for Windows) or /local/game/logs
(for Linux). Use the Amazon GameLift console to access stored logs.
IP address of the game session. To connect to a Amazon GameLift game server, an app needs both the IP address and port number.
", + "GameSessionConnectionInfo$IpAddress": "IP address of the instance that is running the game session. When connecting to a Amazon GameLift game server, a client needs to reference an IP address (or DNS name) and port number.
", "MatchmakingTicket$StatusReason": "Code to explain the current status. For example, a status reason may indicate when a ticket has returned to SEARCHING
status after a proposed match fails to receive player acceptances.
Additional information about the current status.
" } }, + "Tag": { + "base": "A label that can be assigned to a GameLift resource.
Learn more
Tagging AWS Resources in the AWS General Reference
Related operations
", + "refs": { + "TagList$member": null + } + }, + "TagKey": { + "base": null, + "refs": { + "Tag$Key": "The key for a developer-defined key:value pair for tagging an AWS resource.
", + "TagKeyList$member": null + } + }, + "TagKeyList": { + "base": null, + "refs": { + "UntagResourceRequest$TagKeys": "A list of one or more tags to remove from the specified GameLift resource. Tags are developer-defined and structured as key-value pairs.
" + } + }, + "TagList": { + "base": null, + "refs": { + "CreateAliasInput$Tags": "A list of labels to assign to the new alias resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "CreateBuildInput$Tags": "A list of labels to assign to the new build resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "CreateFleetInput$Tags": "A list of labels to assign to the new fleet resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "CreateGameSessionQueueInput$Tags": "A list of labels to assign to the new game session queue resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "CreateMatchmakingConfigurationInput$Tags": "A list of labels to assign to the new matchmaking configuration resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "CreateMatchmakingRuleSetInput$Tags": "A list of labels to assign to the new matchmaking rule set resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "CreateScriptInput$Tags": "A list of labels to assign to the new script resource. Tags are developer-defined key-value pairs. Tagging AWS resources are useful for resource management, access management and cost allocation. For more information, see Tagging AWS Resources in the AWS General Reference. Once the resource is created, you can use TagResource, UntagResource, and ListTagsForResource to add, remove, and view tags. The maximum tag limit may be lower than stated. See the AWS General Reference for actual tagging limits.
", + "ListTagsForResourceResponse$Tags": "The collection of tags that have been assigned to the specified resource.
", + "TagResourceRequest$Tags": "A list of one or more tags to assign to the specified GameLift resource. Tags are developer-defined and structured as key-value pairs. The maximum tag limit may be lower than stated. See Tagging AWS Resources for actual tagging limits.
" + } + }, + "TagResourceRequest": { + "base": null, + "refs": { + } + }, + "TagResourceResponse": { + "base": null, + "refs": { + } + }, + "TagValue": { + "base": null, + "refs": { + "Tag$Value": "The value for a developer-defined key:value pair for tagging an AWS resource.
" + } + }, + "TaggingFailedException": { + "base": "The requested tagging operation did not succeed. This may be due to invalid tag format or the maximum tag limit may have been exceeded. Resolve the issue before retrying.
", + "refs": { + } + }, "TargetConfiguration": { "base": "Settings for a target-based scaling policy (see ScalingPolicy. A target-based policy tracks a particular fleet metric specifies a target value for the metric. As player usage changes, the policy triggers Amazon GameLift to adjust capacity so that the metric returns to the target value. The target configuration specifies settings as needed for the target based policy, including the target value.
Manage scaling policies:
PutScalingPolicy (auto-scaling)
DescribeScalingPolicies (auto-scaling)
DeleteScalingPolicy (auto-scaling)
Manage fleet actions:
Object that contains settings for a target-based scaling policy.
", - "ScalingPolicy$TargetConfiguration": "Object that contains settings for a target-based scaling policy.
" + "PutScalingPolicyInput$TargetConfiguration": "The settings for a target-based scaling policy.
", + "ScalingPolicy$TargetConfiguration": "The settings for a target-based scaling policy.
" } }, "TerminalRoutingStrategyException": { @@ -2012,8 +2136,8 @@ "Timestamp": { "base": null, "refs": { - "Alias$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", - "Alias$LastUpdatedTime": "Time stamp indicating when this data object was last modified. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "Alias$CreationTime": "A time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "Alias$LastUpdatedTime": "The time that this data object was last modified. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "Build$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "DescribeFleetEventsInput$StartTime": "Earliest date to retrieve event logs for. If no start time is specified, this call returns entries starting from when the fleet was created to the specified end time. Format is a number expressed in Unix time as milliseconds (ex: \"1469498468.057\").
", "DescribeFleetEventsInput$EndTime": "Most recent date to retrieve event logs for. If no end time is specified, this call returns entries from the specified start time up to the present. Format is a number expressed in Unix time as milliseconds (ex: \"1469498468.057\").
", @@ -2025,13 +2149,13 @@ "GameSessionPlacement$StartTime": "Time stamp indicating when this request was placed in the queue. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "GameSessionPlacement$EndTime": "Time stamp indicating when this request was completed, canceled, or timed out.
", "Instance$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", - "MatchmakingConfiguration$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", - "MatchmakingRuleSet$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "MatchmakingConfiguration$CreationTime": "The time stamp indicating when this data object was created. The format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "MatchmakingRuleSet$CreationTime": "The time stamp indicating when this data object was created. The format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "MatchmakingTicket$StartTime": "Time stamp indicating when this matchmaking request was received. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "MatchmakingTicket$EndTime": "Time stamp indicating when this matchmaking request stopped being processed due to success, failure, or cancellation. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "PlayerSession$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "PlayerSession$TerminationTime": "Time stamp indicating when this data object was terminated. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", - "Script$CreationTime": "Time stamp indicating when this data object was created. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", + "Script$CreationTime": "A time stamp indicating when this data object was created. The format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "VpcPeeringAuthorization$CreationTime": "Time stamp indicating when this authorization was issued. Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
", "VpcPeeringAuthorization$ExpirationTime": "Time stamp indicating when this authorization expires (24 hours after issuance). Format is a number expressed in Unix time as milliseconds (for example \"1469498468.057\").
" } @@ -2042,7 +2166,17 @@ } }, "UnsupportedRegionException": { - "base": "The requested operation is not supported in the region specified.
", + "base": "The requested operation is not supported in the Region specified.
", + "refs": { + } + }, + "UntagResourceRequest": { + "base": null, + "refs": { + } + }, + "UntagResourceResponse": { + "base": null, "refs": { } }, @@ -2166,7 +2300,7 @@ "VpcPeeringAuthorizationList": { "base": null, "refs": { - "DescribeVpcPeeringAuthorizationsOutput$VpcPeeringAuthorizations": "Collection of objects that describe all valid VPC peering operations for the current AWS account.
" + "DescribeVpcPeeringAuthorizationsOutput$VpcPeeringAuthorizations": "A collection of objects that describe all valid VPC peering operations for the current AWS account.
" } }, "VpcPeeringConnection": { @@ -2178,24 +2312,24 @@ "VpcPeeringConnectionList": { "base": null, "refs": { - "DescribeVpcPeeringConnectionsOutput$VpcPeeringConnections": "Collection of VPC peering connection records that match the request.
" + "DescribeVpcPeeringConnectionsOutput$VpcPeeringConnections": "A collection of VPC peering connection records that match the request.
" } }, "VpcPeeringConnectionStatus": { "base": "Represents status information for a VPC peering connection. Status is associated with a VpcPeeringConnection object. Status codes and messages are provided from EC2 (see VpcPeeringConnectionStateReason). Connection status information is also communicated as a fleet Event.
", "refs": { - "VpcPeeringConnection$Status": "Object that contains status information about the connection. Status indicates if a connection is pending, successful, or failed.
" + "VpcPeeringConnection$Status": "The status information about the connection. Status indicates if a connection is pending, successful, or failed.
" } }, "WholeNumber": { "base": null, "refs": { - "CreateGameSessionInput$MaximumPlayerSessionCount": "Maximum number of players that can be connected simultaneously to the game session.
", - "CreateGameSessionQueueInput$TimeoutInSeconds": "Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT
status.
Number of player slots in a match to keep open for future players. For example, if the configuration's rule set specifies a match for a single 12-person team, and the additional player count is set to 2, only 10 players are selected for the match.
", + "CreateGameSessionInput$MaximumPlayerSessionCount": "The maximum number of players that can be connected simultaneously to the game session.
", + "CreateGameSessionQueueInput$TimeoutInSeconds": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT
status.
The number of player slots in a match to keep open for future players. For example, assume that the configuration's rule set specifies a match for a single 12-person team. If the additional player count is set to 2, only 10 players are initially selected for the match.
", "EC2InstanceCounts$DESIRED": "Ideal number of active instances in the fleet.
", - "EC2InstanceCounts$MINIMUM": "Minimum value allowed for the fleet's instance count.
", - "EC2InstanceCounts$MAXIMUM": "Maximum value allowed for the fleet's instance count.
", + "EC2InstanceCounts$MINIMUM": "The minimum value allowed for the fleet's instance count.
", + "EC2InstanceCounts$MAXIMUM": "The maximum value allowed for the fleet's instance count.
", "EC2InstanceCounts$PENDING": "Number of instances in the fleet that are starting but not yet active.
", "EC2InstanceCounts$ACTIVE": "Actual number of active instances in the fleet.
", "EC2InstanceCounts$IDLE": "Number of active instances in the fleet that are not currently hosting a game session.
", @@ -2205,31 +2339,31 @@ "FleetUtilization$ActiveServerProcessCount": "Number of server processes in an ACTIVE
status currently running across all instances in the fleet
Number of active game sessions currently being hosted on all instances in the fleet.
", "FleetUtilization$CurrentPlayerSessionCount": "Number of active player sessions currently being hosted on all instances in the fleet.
", - "FleetUtilization$MaximumPlayerSessionCount": "Maximum players allowed across all game sessions currently being hosted on all instances in the fleet.
", + "FleetUtilization$MaximumPlayerSessionCount": "The maximum number of players allowed across all game sessions currently being hosted on all instances in the fleet.
", "GameSession$CurrentPlayerSessionCount": "Number of players currently in the game session.
", - "GameSession$MaximumPlayerSessionCount": "Maximum number of players that can be connected simultaneously to the game session.
", - "GameSessionPlacement$MaximumPlayerSessionCount": "Maximum number of players that can be connected simultaneously to the game session.
", - "GameSessionQueue$TimeoutInSeconds": "Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT
status.
Number of player slots in a match to keep open for future players. For example, if the configuration's rule set specifies a match for a single 12-person team, and the additional player count is set to 2, only 10 players are selected for the match.
", + "GameSession$MaximumPlayerSessionCount": "The maximum number of players that can be connected simultaneously to the game session.
", + "GameSessionPlacement$MaximumPlayerSessionCount": "The maximum number of players that can be connected simultaneously to the game session.
", + "GameSessionQueue$TimeoutInSeconds": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT
status.
The number of player slots in a match to keep open for future players. For example, assume that the configuration's rule set specifies a match for a single 12-person team. If the additional player count is set to 2, only 10 players are initially selected for the match.
", "MatchmakingTicket$EstimatedWaitTime": "Average amount of time (in seconds) that players are currently waiting for a match. If there is not enough recent data, this property may be empty.
", "PlayerLatencyPolicy$MaximumIndividualPlayerLatencyMilliseconds": "The maximum latency value that is allowed for any player, in milliseconds. All policies must have a value set for this property.
", "PlayerLatencyPolicy$PolicyDurationSeconds": "The length of time, in seconds, that the policy is enforced while placing a new game session. A null value for this property means that the policy is enforced until the queue times out.
", - "ResourceCreationLimitPolicy$NewGameSessionsPerCreator": "Maximum number of game sessions that an individual can create during the policy period.
", - "ResourceCreationLimitPolicy$PolicyPeriodInMinutes": "Time span used in evaluating the resource creation limit policy.
", - "StartGameSessionPlacementInput$MaximumPlayerSessionCount": "Maximum number of players that can be connected simultaneously to the game session.
", + "ResourceCreationLimitPolicy$NewGameSessionsPerCreator": "The maximum number of game sessions that an individual can create during the policy period.
", + "ResourceCreationLimitPolicy$PolicyPeriodInMinutes": "The time span used in evaluating the resource creation limit policy.
", + "StartGameSessionPlacementInput$MaximumPlayerSessionCount": "The maximum number of players that can be connected simultaneously to the game session.
", "UpdateFleetCapacityInput$DesiredInstances": "Number of EC2 instances you want this fleet to host.
", - "UpdateFleetCapacityInput$MinSize": "Minimum value allowed for the fleet's instance count. Default if not set is 0.
", - "UpdateFleetCapacityInput$MaxSize": "Maximum value allowed for the fleet's instance count. Default if not set is 1.
", - "UpdateGameSessionInput$MaximumPlayerSessionCount": "Maximum number of players that can be connected simultaneously to the game session.
", - "UpdateGameSessionQueueInput$TimeoutInSeconds": "Maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT
status.
Number of player slots in a match to keep open for future players. For example, if the configuration's rule set specifies a match for a single 12-person team, and the additional player count is set to 2, only 10 players are selected for the match.
" + "UpdateFleetCapacityInput$MinSize": "The minimum value allowed for the fleet's instance count. Default if not set is 0.
", + "UpdateFleetCapacityInput$MaxSize": "The maximum value allowed for the fleet's instance count. Default if not set is 1.
", + "UpdateGameSessionInput$MaximumPlayerSessionCount": "The maximum number of players that can be connected simultaneously to the game session.
", + "UpdateGameSessionQueueInput$TimeoutInSeconds": "The maximum time, in seconds, that a new game session placement request remains in the queue. When a request exceeds this time, the game session placement changes to a TIMED_OUT
status.
The number of player slots in a match to keep open for future players. For example, assume that the configuration's rule set specifies a match for a single 12-person team. If the additional player count is set to 2, only 10 players are initially selected for the match.
" } }, "ZipBlob": { "base": null, "refs": { - "CreateScriptInput$ZipFile": "Data object containing your Realtime scripts and dependencies as a zip file. The zip file can have one or multiple files. Maximum size of a zip file is 5 MB.
When using the AWS CLI tool to create a script, this parameter is set to the zip file name. It must be prepended with the string \"fileb://\" to indicate that the file data is a binary object. For example: --zip-file fileb://myRealtimeScript.zip
.
Data object containing your Realtime scripts and dependencies as a zip file. The zip file can have one or multiple files. Maximum size of a zip file is 5 MB.
When using the AWS CLI tool to create a script, this parameter is set to the zip file name. It must be prepended with the string \"fileb://\" to indicate that the file data is a binary object. For example: --zip-file fileb://myRealtimeScript.zip
.
A data object containing your Realtime scripts and dependencies as a zip file. The zip file can have one or multiple files. Maximum size of a zip file is 5 MB.
When using the AWS CLI tool to create a script, this parameter is set to the zip file name. It must be prepended with the string \"fileb://\" to indicate that the file data is a binary object. For example: --zip-file fileb://myRealtimeScript.zip
.
A data object containing your Realtime scripts and dependencies as a zip file. The zip file can have one or multiple files. Maximum size of a zip file is 5 MB.
When using the AWS CLI tool to create a script, this parameter is set to the zip file name. It must be prepended with the string \"fileb://\" to indicate that the file data is a binary object. For example: --zip-file fileb://myRealtimeScript.zip
.
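Closing out the GameLift entries, a sketch of the SDK equivalent of the CLI's `--zip-file fileb://myRealtimeScript.zip`: read the zip into memory and pass the bytes as `ZipFile`. Add `io/ioutil` to the imports from the earlier sketch; names are placeholders:

```go
func createScriptFromZip(ctx context.Context, svc *gamelift.Client) error {
	zip, err := ioutil.ReadFile("myRealtimeScript.zip") // must be 5 MB or less
	if err != nil {
		return err
	}
	req := svc.CreateScriptRequest(&gamelift.CreateScriptInput{
		Name:    aws.String("my-realtime-script"),
		ZipFile: zip, // raw bytes; the CLI's fileb:// prefix does the same thing
	})
	_, err = req.Send(ctx)
	return err
}
```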
The AWS Health API provides programmatic access to the AWS Health information that is presented in the AWS Personal Health Dashboard. You can get information about events that affect your AWS resources:
DescribeEvents: Summary information about events.
DescribeEventDetails: Detailed information about one or more events.
DescribeAffectedEntities: Information about AWS resources that are affected by one or more events.
In addition, these operations provide information about event types and summary counts of events or affected entities:
DescribeEventTypes: Information about the kinds of events that AWS Health tracks.
DescribeEventAggregates: A count of the number of events that meet specified criteria.
DescribeEntityAggregates: A count of the number of affected entities that meet specified criteria.
The Health API requires a Business or Enterprise support plan from AWS Support. Calling the Health API from an account that does not have a Business or Enterprise support plan causes a SubscriptionRequiredException
.
For authentication of requests, AWS Health uses the Signature Version 4 Signing Process.
See the AWS Health User Guide for information about how to use the API.
Service Endpoint
The HTTP endpoint for the AWS Health API is:
https://health.us-east-1.amazonaws.com
The AWS Health API provides programmatic access to the AWS Health information that is presented in the AWS Personal Health Dashboard. You can get information about events that affect your AWS resources:
DescribeEvents: Summary information about events.
DescribeEventDetails: Detailed information about one or more events.
DescribeAffectedEntities: Information about AWS resources that are affected by one or more events.
In addition, these operations provide information about event types and summary counts of events or affected entities:
DescribeEventTypes: Information about the kinds of events that AWS Health tracks.
DescribeEventAggregates: A count of the number of events that meet specified criteria.
DescribeEntityAggregates: A count of the number of affected entities that meet specified criteria.
AWS Health integrates with AWS Organizations to provide a centralized view of AWS Health events across all accounts in your organization.
DescribeEventsForOrganization: Summary information about events across the organization.
DescribeAffectedAccountsForOrganization: List of accounts in your organization impacted by an event.
DescribeEventDetailsForOrganization: Detailed information about events in your organization.
DescribeAffectedEntitiesForOrganization: Information about AWS resources in your organization that are affected by events.
You can use the following operations to enable or disable AWS Health from working with AWS Organizations.
EnableHealthServiceAccessForOrganization: Enables AWS Health to work with AWS Organizations.
DisableHealthServiceAccessForOrganization: Disables AWS Health from working with AWS Organizations.
DescribeHealthServiceStatusForOrganization: Status information about enabling or disabling AWS Health from working with AWS Organizations.
The Health API requires a Business or Enterprise support plan from AWS Support. Calling the Health API from an account that does not have a Business or Enterprise support plan causes a SubscriptionRequiredException
.
For authentication of requests, AWS Health uses the Signature Version 4 Signing Process.
See the AWS Health User Guide for information about how to use the API.
Service Endpoint
The HTTP endpoint for the AWS Health API is:
https://health.us-east-1.amazonaws.com
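A minimal sketch of calling the Health API under the same assumed v0.x client conventions. Note the single us-east-1 endpoint mentioned above, and that accounts without a Business or Enterprise support plan get a SubscriptionRequiredException:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/health"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	cfg.Region = "us-east-1" // Health is served from a single endpoint

	svc := health.New(cfg)
	req := svc.DescribeEventsRequest(&health.DescribeEventsInput{
		MaxResults: aws.Int64(10),
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err) // e.g. SubscriptionRequiredException without a support plan
	}
	for _, ev := range resp.Events {
		fmt.Println(aws.StringValue(ev.Arn))
	}
}
```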
Returns a list of accounts in the organization from AWS Organizations that are affected by the provided event.
Before you can call this operation, you must first enable AWS Health to work with AWS Organizations. To do this, call the EnableHealthServiceAccessForOrganization operation from your organization's master account.
", "DescribeAffectedEntities": "Returns a list of entities that have been affected by the specified events, based on the specified filter criteria. Entities can refer to individual customer resources, groups of customer resources, or any other construct, depending on the AWS service. Events that have impact beyond that of the affected entities, or where the extent of impact is unknown, include at least one entity indicating this.
At least one event ARN is required. Results are sorted by the lastUpdatedTime
of the entity, starting with the most recent.
Returns a list of entities that have been affected by one or more events for one or more accounts in your organization in AWS Organizations, based on the filter criteria. Entities can refer to individual customer resources, groups of customer resources, or any other construct, depending on the AWS service.
At least one event ARN and account ID are required. Results are sorted by the lastUpdatedTime
of the entity, starting with the most recent.
Before you can call this operation, you must first enable AWS Health to work with AWS Organizations. To do this, call the EnableHealthServiceAccessForOrganization operation from your organization's master account.
", "DescribeEntityAggregates": "Returns the number of entities that are affected by each of the specified events. If no events are specified, the counts of all affected entities are returned.
", "DescribeEventAggregates": "Returns the number of events of each event type (issue, scheduled change, and account notification). If no filter is specified, the counts of all events in each category are returned.
", - "DescribeEventDetails": "Returns detailed information about one or more specified events. Information includes standard event data (region, service, etc., as returned by DescribeEvents), a detailed event description, and possible additional metadata that depends upon the nature of the event. Affected entities are not included; to retrieve those, use the DescribeAffectedEntities operation.
If a specified event cannot be retrieved, an error message is returned for that event.
", + "DescribeEventDetails": "Returns detailed information about one or more specified events. Information includes standard event data (region, service, and so on, as returned by DescribeEvents), a detailed event description, and possible additional metadata that depends upon the nature of the event. Affected entities are not included; to retrieve those, use the DescribeAffectedEntities operation.
If a specified event cannot be retrieved, an error message is returned for that event.
", + "DescribeEventDetailsForOrganization": "Returns detailed information about one or more specified events for one or more accounts in your organization. Information includes standard event data (Region, service, and so on, as returned by DescribeEventsForOrganization, a detailed event description, and possible additional metadata that depends upon the nature of the event. Affected entities are not included; to retrieve those, use the DescribeAffectedEntitiesForOrganization operation.
Before you can call this operation, you must first enable AWS Health to work with AWS Organizations. To do this, call the EnableHealthServiceAccessForOrganization operation from your organization's master account.
", "DescribeEventTypes": "Returns the event types that meet the specified filter criteria. If no filter criteria are specified, all event types are returned, in no particular order.
", - "DescribeEvents": "Returns information about events that meet the specified filter criteria. Events are returned in a summary form and do not include the detailed description, any additional metadata that depends on the event type, or any affected resources. To retrieve that information, use the DescribeEventDetails and DescribeAffectedEntities operations.
If no filter criteria are specified, all events are returned. Results are sorted by lastModifiedTime
, starting with the most recent.
Returns information about events that meet the specified filter criteria. Events are returned in a summary form and do not include the detailed description, any additional metadata that depends on the event type, or any affected resources. To retrieve that information, use the DescribeEventDetails and DescribeAffectedEntities operations.
If no filter criteria are specified, all events are returned. Results are sorted by lastModifiedTime
, starting with the most recent.
Returns information about events across your organization in AWS Organizations, meeting the specified filter criteria. Events are returned in a summary form and do not include the accounts impacted, detailed description, any additional metadata that depends on the event type, or any affected resources. To retrieve that information, use the DescribeAffectedAccountsForOrganization, DescribeEventDetailsForOrganization, and DescribeAffectedEntitiesForOrganization operations.
If no filter criteria are specified, all events across your organization are returned. Results are sorted by lastModifiedTime
, starting with the most recent.
Before you can call this operation, you must first enable AWS Health to work with AWS Organizations. To do this, call the EnableHealthServiceAccessForOrganization operation from your organization's master account.
", + "DescribeHealthServiceStatusForOrganization": "This operation provides status information on enabling or disabling AWS Health to work with your organization. To call this operation, you must sign in as an IAM user, assume an IAM role, or sign in as the root user (not recommended) in the organization's master account.
", + "DisableHealthServiceAccessForOrganization": "Calling this operation disables Health from working with AWS Organizations. This does not remove the Service Linked Role (SLR) from the the master account in your organization. Use the IAM console, API, or AWS CLI to remove the SLR if desired. To call this operation, you must sign in as an IAM user, assume an IAM role, or sign in as the root user (not recommended) in the organization's master account.
", + "EnableHealthServiceAccessForOrganization": "Calling this operation enables AWS Health to work with AWS Organizations. This applies a Service Linked Role (SLR) to the master account in the organization. To learn more about the steps in this process, visit enabling service access for AWS Health in AWS Organizations. To call this operation, you must sign in as an IAM user, assume an IAM role, or sign in as the root user (not recommended) in the organization's master account.
" }, "shapes": { "AffectedEntity": { @@ -16,12 +23,46 @@ "EntityList$member": null } }, + "ConcurrentModificationException": { + "base": "EnableHealthServiceAccessForOrganization is already in progress. Wait for the action to complete before trying again. To get the current status, use the DescribeHealthServiceStatusForOrganization operation.
", + "refs": { + } + }, "DateTimeRange": { "base": "A range of dates and times that is used by the EventFilter and EntityFilter objects. If from
is set and to
is set: match items where the timestamp (startTime
, endTime
, or lastUpdatedTime
) is between from
and to
inclusive. If from
is set and to
is not set: match items where the timestamp value is equal to or after from
. If from
is not set and to
is set: match items where the timestamp value is equal to or before to
.
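To make the from/to matching rules concrete, a sketch of a filter that matches events whose startTime lies in a closed range; the Go field names (`From`, `To`, `StartTimes`) mirror the shape members above but are assumptions about the generated types:

```go
// Both ends set: matches events with from <= startTime <= to, inclusive.
from := time.Date(2020, 1, 1, 0, 0, 0, 0, time.UTC)
to := time.Date(2020, 1, 31, 23, 59, 59, 0, time.UTC)

filter := &health.EventFilter{
	StartTimes: []health.DateTimeRange{{From: &from, To: &to}},
}
// Leaving To nil instead would match any startTime equal to or after from.
req := svc.DescribeEventsRequest(&health.DescribeEventsInput{Filter: filter})
```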
A JSON set of elements of the failed response, including the awsAccountId
, errorMessage
, errorName
, and eventArn
.
Error messages for any events that could not be retrieved.
" } }, + "DescribeEventDetailsForOrganizationFailedSet": { + "base": null, + "refs": { + "DescribeEventDetailsForOrganizationResponse$failedSet": "Error messages for any events that could not be retrieved.
" + } + }, + "DescribeEventDetailsForOrganizationRequest": { + "base": null, + "refs": { + } + }, + "DescribeEventDetailsForOrganizationResponse": { + "base": null, + "refs": { + } + }, + "DescribeEventDetailsForOrganizationSuccessfulSet": { + "base": null, + "refs": { + "DescribeEventDetailsForOrganizationResponse$successfulSet": "Information about the events that could be retrieved.
" + } + }, "DescribeEventDetailsRequest": { "base": null, "refs": { @@ -84,6 +147,16 @@ "refs": { } }, + "DescribeEventsForOrganizationRequest": { + "base": null, + "refs": { + } + }, + "DescribeEventsForOrganizationResponse": { + "base": null, + "refs": { + } + }, "DescribeEventsRequest": { "base": null, "refs": { @@ -94,6 +167,11 @@ "refs": { } }, + "DescribeHealthServiceStatusForOrganizationResponse": { + "base": null, + "refs": { + } + }, "EntityAggregate": { "base": "The number of entities that are affected by one or more events. Returned by the DescribeEntityAggregates operation.
", "refs": { @@ -109,20 +187,29 @@ "EntityFilter": { "base": "The values to use to filter results from the DescribeAffectedEntities operation.
", "refs": { - "DescribeAffectedEntitiesRequest$filter": "Values to narrow the results returned. At least one event ARN is required.
" + "DescribeAffectedEntitiesRequest$filter": "Values to narrow the results returned. At least one event ARN is required.
" } }, "EntityList": { "base": null, "refs": { + "DescribeAffectedEntitiesForOrganizationResponse$entities": "A JSON set of elements including the awsAccountId
and its entityArn
, entityValue
and its entityArn
, lastUpdatedTime
, statusCode
, and tags
.
The entities that match the filter criteria.
" } }, "Event": { - "base": "Summary information about an event, returned by the DescribeEvents operation. The DescribeEventDetails operation also returns this information, as well as the EventDescription and additional event metadata.
", + "base": "Summary information about an AWS Health event.
", "refs": { "EventDetails$event": "Summary information about the event.
", - "EventList$member": null + "EventList$member": null, + "OrganizationEventDetails$event": null + } + }, + "EventAccountFilter": { + "base": "The values used to filter results from the DescribeEventDetailsForOrganization and DescribeAffectedEntitiesForOrganization operations.
", + "refs": { + "OrganizationEntityFiltersList$member": null, + "OrganizationEventDetailFiltersList$member": null } }, "EventAggregate": { @@ -146,7 +233,8 @@ "EventDescription": { "base": "The detailed description of the event. Included in the information returned by the DescribeEventDetails operation.
", "refs": { - "EventDetails$eventDescription": "The most recent description of the event.
" + "EventDetails$eventDescription": "The most recent description of the event.
", + "OrganizationEventDetails$eventDescription": null } }, "EventDetails": { @@ -209,6 +297,54 @@ "refs": { } }, + "OrganizationAffectedEntitiesErrorItem": { + "base": "Error information returned when a DescribeAffectedEntitiesForOrganization operation cannot find or process a specific entity.
", + "refs": { + "DescribeAffectedEntitiesForOrganizationFailedSet$member": null + } + }, + "OrganizationEntityFiltersList": { + "base": null, + "refs": { + "DescribeAffectedEntitiesForOrganizationRequest$organizationEntityFilters": "A JSON set of elements including the awsAccountId
and the eventArn
.
Summary information about an event, returned by the DescribeEventsForOrganization operation.
", + "refs": { + "OrganizationEventList$member": null + } + }, + "OrganizationEventDetailFiltersList": { + "base": null, + "refs": { + "DescribeEventDetailsForOrganizationRequest$organizationEventDetailFilters": "A set of JSON elements that includes the awsAccountId
and the eventArn
.
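A sketch of the filter just described: each EventAccountFilter entry pairs one account ID with one event ARN. Both values below are placeholders, and the Go field names are assumptions:

```go
req := svc.DescribeEventDetailsForOrganizationRequest(&health.DescribeEventDetailsForOrganizationInput{
	OrganizationEventDetailFilters: []health.EventAccountFilter{{
		AwsAccountId: aws.String("123456789012"),
		EventArn:     aws.String("arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456"),
	}},
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
// Per-event lookup failures land in FailedSet instead of failing the call.
fmt.Println(len(resp.SuccessfulSet), "retrieved,", len(resp.FailedSet), "failed")
```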
Detailed information about an event. A combination of an Event object, an EventDescription object, and additional metadata about the event. Returned by the DescribeEventDetailsForOrganization operation.
", + "refs": { + "DescribeEventDetailsForOrganizationSuccessfulSet$member": null + } + }, + "OrganizationEventDetailsErrorItem": { + "base": "Error information returned when a DescribeEventDetailsForOrganization operation cannot find a specified event.
", + "refs": { + "DescribeEventDetailsForOrganizationFailedSet$member": null + } + }, + "OrganizationEventFilter": { + "base": "The values to filter results from the DescribeEventsForOrganization operation.
", + "refs": { + "DescribeEventsForOrganizationRequest$filter": "Values to narrow the results returned.
" + } + }, + "OrganizationEventList": { + "base": null, + "refs": { + "DescribeEventsForOrganizationResponse$events": "The events that match the specified filter criteria.
" + } + }, "UnsupportedLocale": { "base": "The specified locale is not supported.
", "refs": { @@ -217,7 +353,19 @@ "accountId": { "base": null, "refs": { - "AffectedEntity$awsAccountId": "The 12-digit AWS account number that contains the affected entity.
" + "AffectedEntity$awsAccountId": "The 12-digit AWS account number that contains the affected entity.
", + "EventAccountFilter$awsAccountId": "The 12-digit AWS account numbers that contains the affected entities.
", + "OrganizationAffectedEntitiesErrorItem$awsAccountId": "The 12-digit AWS account numbers that contains the affected entities.
", + "OrganizationEventDetails$awsAccountId": "The 12-digit AWS account numbers that contains the affected entities.
", + "OrganizationEventDetailsErrorItem$awsAccountId": "Error information returned when a DescribeEventDetailsForOrganization operation cannot find a specified event.
", + "affectedAccountsList$member": null, + "awsAccountIdsList$member": null + } + }, + "affectedAccountsList": { + "base": null, + "refs": { + "DescribeAffectedAccountsForOrganizationResponse$affectedAccounts": "A JSON set of elements of the affected accounts.
" } }, "aggregateValue": { @@ -239,6 +387,12 @@ "EventFilter$availabilityZones": "A list of AWS availability zones.
" } }, + "awsAccountIdsList": { + "base": null, + "refs": { + "OrganizationEventFilter$awsAccountIds": "A list of 12-digit AWS account numbers that contains the affected entities.
" + } + }, "count": { "base": null, "refs": { @@ -266,7 +420,8 @@ "base": null, "refs": { "EntityFilter$entityArns": "A list of entity ARNs (unique identifiers).
", - "EventFilter$entityArns": "A list of entity ARNs (unique identifiers).
" + "EventFilter$entityArns": "A list of entity ARNs (unique identifiers).
", + "OrganizationEventFilter$entityArns": "REPLACEME
" } }, "entityStatusCode": { @@ -285,7 +440,7 @@ "entityUrl": { "base": null, "refs": { - "AffectedEntity$entityUrl": null + "AffectedEntity$entityUrl": "The URL of the affected entity.
" } }, "entityValue": { @@ -299,7 +454,8 @@ "base": null, "refs": { "EntityFilter$entityValues": "A list of IDs for affected entities.
", - "EventFilter$entityValues": "A list of entity identifiers, such as EC2 instance IDs (i-34ab692e
) or EBS volumes (vol-426ab23e
).
A list of entity identifiers, such as EC2 instance IDs (i-34ab692e
) or EBS volumes (vol-426ab23e
).
A list of entity identifiers, such as EC2 instance IDs (i-34ab692e) or EBS volumes (vol-426ab23e).
" } }, "eventAggregateField": { @@ -312,10 +468,15 @@ "base": null, "refs": { "AffectedEntity$eventArn": "The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
The unique identifier for the event. Format: arn:aws:health:event-region::event/SERVICE/EVENT_TYPE_CODE/EVENT_TYPE_PLUS_ID
. Example: arn:aws:health:us-east-1::event/EC2/EC2_INSTANCE_RETIREMENT_SCHEDULED/EC2_INSTANCE_RETIREMENT_SCHEDULED_ABC123-DEF456
Additional metadata about the event.
" + "EventDetails$eventMetadata": "Additional metadata about the event.
", + "OrganizationEventDetails$eventMetadata": "Additional metadata about the event.
" } }, "eventStatusCode": { "base": null, "refs": { "Event$statusCode": "The most recent status of the event. Possible values are open
, closed
, and upcoming
.
The most recent status of the event. Possible values are open
, closed
, and upcoming
.
A list of event status codes.
" + "EventFilter$eventStatusCodes": "A list of event status codes.
", + "OrganizationEventFilter$eventStatusCodes": "A list of event status codes.
" } }, "eventType": { @@ -364,13 +528,15 @@ "Event$eventTypeCategory": "The category of the event. Possible values are issue
, scheduledChange
, and accountNotification
.
A list of event type category codes (issue
, scheduledChange
, or accountNotification
).
The category of the event type.
", "eventTypeCategoryList$member": null } }, "eventTypeCategoryList": { "base": null, "refs": { - "EventFilter$eventTypeCategories": "A list of event type category codes (issue
, scheduledChange
, or accountNotification
).
A list of event type category codes (issue
, scheduledChange
, or accountNotification
).
A list of event type category codes (issue, scheduledChange, or accountNotification).
" } }, "eventTypeCode": { @@ -378,30 +544,44 @@ "refs": { "Event$eventTypeCode": "The unique identifier for the event type. The format is AWS_SERVICE_DESCRIPTION
; for example, AWS_EC2_SYSTEM_MAINTENANCE_EVENT
.
The unique identifier for the event type. The format is AWS_SERVICE_DESCRIPTION
; for example, AWS_EC2_SYSTEM_MAINTENANCE_EVENT
.
The unique identifier for the event type. The format is AWS_SERVICE_DESCRIPTION
. For example, AWS_EC2_SYSTEM_MAINTENANCE_EVENT
.
A list of unique identifiers for event types. For example, \"AWS_EC2_SYSTEM_MAINTENANCE_EVENT\",\"AWS_RDS_MAINTENANCE_SCHEDULED\"
A list of unique identifiers for event types. For example, \"AWS_EC2_SYSTEM_MAINTENANCE_EVENT\",\"AWS_RDS_MAINTENANCE_SCHEDULED\".
A list of unique identifiers for event types. For example, \"AWS_EC2_SYSTEM_MAINTENANCE_EVENT\",\"AWS_RDS_MAINTENANCE_SCHEDULED\".
Information about the status of enabling or disabling AWS Health Organizational View in your organization.
Valid values are ENABLED | DISABLED | PENDING
.
The locale (language) to return information in. English (en) is the default and the only supported value at this time.
", "DescribeAffectedEntitiesRequest$locale": "The locale (language) to return information in. English (en) is the default and the only supported value at this time.
", + "DescribeEventDetailsForOrganizationRequest$locale": "The locale (language) to return information in. English (en) is the default and the only supported value at this time.
", "DescribeEventDetailsRequest$locale": "The locale (language) to return information in. English (en) is the default and the only supported value at this time.
", "DescribeEventTypesRequest$locale": "The locale (language) to return information in. English (en) is the default and the only supported value at this time.
", + "DescribeEventsForOrganizationRequest$locale": "The locale (language) to return information in. English (en) is the default and the only supported value at this time.
", "DescribeEventsRequest$locale": "The locale (language) to return information in. English (en) is the default and the only supported value at this time.
" } }, "maxResults": { "base": null, "refs": { + "DescribeAffectedAccountsForOrganizationRequest$maxResults": "The maximum number of items to return in one batch, between 10 and 100, inclusive.
", + "DescribeAffectedEntitiesForOrganizationRequest$maxResults": "The maximum number of items to return in one batch, between 10 and 100, inclusive.
", "DescribeAffectedEntitiesRequest$maxResults": "The maximum number of items to return in one batch, between 10 and 100, inclusive.
", "DescribeEventAggregatesRequest$maxResults": "The maximum number of items to return in one batch, between 10 and 100, inclusive.
", "DescribeEventTypesRequest$maxResults": "The maximum number of items to return in one batch, between 10 and 100, inclusive.
", + "DescribeEventsForOrganizationRequest$maxResults": "The maximum number of items to return in one batch, between 10 and 100, inclusive.
", "DescribeEventsRequest$maxResults": "The maximum number of items to return in one batch, between 10 and 100, inclusive.
" } }, @@ -420,12 +600,18 @@ "nextToken": { "base": null, "refs": { + "DescribeAffectedAccountsForOrganizationRequest$nextToken": "If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
If the results of a search are large, only a portion of the results are returned, and a nextToken
pagination token is returned in the response. To retrieve the next batch of results, reissue the search request and include the returned token. When all results have been returned, the response does not contain a pagination token value.
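The nextToken contract described above translates into a simple fetch loop, shown here for DescribeEventsForOrganization as a sketch; the generated field names (`NextToken`, `MaxResults`, `Events`, `Arn`) are assumptions from the model:

```go
var token *string
for {
	req := svc.DescribeEventsForOrganizationRequest(&health.DescribeEventsForOrganizationInput{
		MaxResults: aws.Int64(100), // between 10 and 100, inclusive
		NextToken:  token,
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, ev := range resp.Events {
		fmt.Println(*ev.Arn)
	}
	// No token in the response means all results have been returned.
	if resp.NextToken == nil {
		break
	}
	token = resp.NextToken
}
```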
The AWS region name of the event.
", + "OrganizationEvent$region": "The AWS Region name of the event.
", "regionList$member": null } }, "regionList": { "base": null, "refs": { - "EventFilter$regions": "A list of AWS regions.
" + "EventFilter$regions": "A list of AWS regions.
", + "OrganizationEventFilter$regions": "A list of AWS Regions.
" } }, "service": { @@ -448,6 +636,7 @@ "refs": { "Event$service": "The AWS service that is affected by the event. For example, EC2
, RDS
.
The AWS service that is affected by the event. For example, EC2
, RDS
.
The AWS service that is affected by the event. For example, EC2, RDS.
", "serviceList$member": null } }, @@ -455,15 +644,21 @@ "base": null, "refs": { "EventFilter$services": "The AWS services associated with the event. For example, EC2
, RDS
.
The AWS services associated with the event. For example, EC2
, RDS
.
The AWS services associated with the event. For example, EC2
, RDS
.
The AWS services associated with the event. For example, EC2
, RDS
.
The name of the error.
", "EventDetailsErrorItem$errorMessage": "A message that describes the error.
", "InvalidPaginationToken$message": null, + "OrganizationAffectedEntitiesErrorItem$errorName": "The name of the error.
", + "OrganizationAffectedEntitiesErrorItem$errorMessage": "The unique identifier for the event type. The format is AWS_SERVICE_DESCRIPTION
. For example, AWS_EC2_SYSTEM_MAINTENANCE_EVENT
.
The name of the error.
", + "OrganizationEventDetailsErrorItem$errorMessage": "A message that describes the error.
", "UnsupportedLocale$message": null } }, @@ -501,7 +696,10 @@ "DateTimeRange$to": "The ending date and time of a time range.
", "Event$startTime": "The date and time that the event began.
", "Event$endTime": "The date and time that the event ended.
", - "Event$lastUpdatedTime": "The most recent date and time that the event was updated.
" + "Event$lastUpdatedTime": "The most recent date and time that the event was updated.
", + "OrganizationEvent$startTime": "The date and time that the event began.
", + "OrganizationEvent$endTime": "The date and time that the event ended.
", + "OrganizationEvent$lastUpdatedTime": "The most recent date and time that the event was updated.
" } } } diff --git a/models/apis/health/2016-08-04/paginators-1.json b/models/apis/health/2016-08-04/paginators-1.json index 3a30b016714..e7e6c2ad880 100644 --- a/models/apis/health/2016-08-04/paginators-1.json +++ b/models/apis/health/2016-08-04/paginators-1.json @@ -1,11 +1,26 @@ { "pagination": { + "DescribeAffectedAccountsForOrganization": { + "input_token": "nextToken", + "limit_key": "maxResults", + "output_token": "nextToken", + "result_key": "affectedAccounts" + }, "DescribeAffectedEntities": { "input_token": "nextToken", "limit_key": "maxResults", "output_token": "nextToken", "result_key": "entities" }, + "DescribeAffectedEntitiesForOrganization": { + "input_token": "nextToken", + "limit_key": "maxResults", + "non_aggregate_keys": [ + "failedSet" + ], + "output_token": "nextToken", + "result_key": "entities" + }, "DescribeEntityAggregates": { "result_key": "entityAggregates" }, @@ -26,6 +41,12 @@ "limit_key": "maxResults", "output_token": "nextToken", "result_key": "events" + }, + "DescribeEventsForOrganization": { + "input_token": "nextToken", + "limit_key": "maxResults", + "output_token": "nextToken", + "result_key": "events" } } } \ No newline at end of file diff --git a/models/apis/iam/2010-05-08/api-2.json b/models/apis/iam/2010-05-08/api-2.json index 05eeccc5444..7f5f98decd6 100644 --- a/models/apis/iam/2010-05-08/api-2.json +++ b/models/apis/iam/2010-05-08/api-2.json @@ -3028,6 +3028,7 @@ "MatchedStatements":{"shape":"StatementListType"}, "MissingContextValues":{"shape":"ContextKeyNamesResultListType"}, "OrganizationsDecisionDetail":{"shape":"OrganizationsDecisionDetail"}, + "PermissionsBoundaryDecisionDetail":{"shape":"PermissionsBoundaryDecisionDetail"}, "EvalDecisionDetails":{"shape":"EvalDecisionDetailsType"}, "ResourceSpecificResults":{"shape":"ResourceSpecificResultListType"} } @@ -4242,6 +4243,12 @@ "type":"string", "enum":["PermissionsBoundaryPolicy"] }, + "PermissionsBoundaryDecisionDetail":{ + "type":"structure", + "members":{ + "AllowedByPermissionsBoundary":{"shape":"booleanType"} + } + }, "Policy":{ "type":"structure", "members":{ @@ -4547,7 +4554,8 @@ "EvalResourceDecision":{"shape":"PolicyEvaluationDecisionType"}, "MatchedStatements":{"shape":"StatementListType"}, "MissingContextValues":{"shape":"ContextKeyNamesResultListType"}, - "EvalDecisionDetails":{"shape":"EvalDecisionDetailsType"} + "EvalDecisionDetails":{"shape":"EvalDecisionDetailsType"}, + "PermissionsBoundaryDecisionDetail":{"shape":"PermissionsBoundaryDecisionDetail"} } }, "ResourceSpecificResultListType":{ @@ -4843,6 +4851,7 @@ ], "members":{ "PolicyInputList":{"shape":"SimulationPolicyListType"}, + "PermissionsBoundaryPolicyInputList":{"shape":"SimulationPolicyListType"}, "ActionNames":{"shape":"ActionNameListType"}, "ResourceArns":{"shape":"ResourceNameListType"}, "ResourcePolicy":{"shape":"policyDocumentType"}, @@ -4871,6 +4880,7 @@ "members":{ "PolicySourceArn":{"shape":"arnType"}, "PolicyInputList":{"shape":"SimulationPolicyListType"}, + "PermissionsBoundaryPolicyInputList":{"shape":"SimulationPolicyListType"}, "ActionNames":{"shape":"ActionNameListType"}, "ResourceArns":{"shape":"ResourceNameListType"}, "ResourcePolicy":{"shape":"policyDocumentType"}, diff --git a/models/apis/iam/2010-05-08/docs-2.json b/models/apis/iam/2010-05-08/docs-2.json index 6bbc6cad38c..c616de66da2 100644 --- a/models/apis/iam/2010-05-08/docs-2.json +++ b/models/apis/iam/2010-05-08/docs-2.json @@ -14,7 +14,7 @@ "CreateGroup": "Creates a new group.
For information about the number of groups you can create, see Limitations on IAM Entities in the IAM User Guide.
", "CreateInstanceProfile": "Creates a new instance profile. For information about instance profiles, go to About Instance Profiles.
For information about the number of instance profiles you can create, see Limitations on IAM Entities in the IAM User Guide.
", "CreateLoginProfile": "Creates a password for the specified user, giving the user the ability to access AWS services through the AWS Management Console. For more information about managing passwords, see Managing Passwords in the IAM User Guide.
", - "CreateOpenIDConnectProvider": "Creates an IAM entity to describe an identity provider (IdP) that supports OpenID Connect (OIDC).
The OIDC provider that you create with this operation can be used as a principal in a role's trust policy. Such a policy establishes a trust relationship between AWS and the OIDC provider.
When you create the IAM OIDC provider, you specify the following:
The URL of the OIDC identity provider (IdP) to trust
A list of client IDs (also known as audiences) that identify the application or applications that are allowed to authenticate using the OIDC provider
A list of thumbprints of the server certificate(s) that the IdP uses
You get all of this information from the OIDC IdP that you want to use to access AWS.
The trust for the OIDC provider is derived from the IAM provider that this operation creates. Therefore, it is best to limit access to the CreateOpenIDConnectProvider operation to highly privileged users.
Creates an IAM entity to describe an identity provider (IdP) that supports OpenID Connect (OIDC).
The OIDC provider that you create with this operation can be used as a principal in a role's trust policy. Such a policy establishes a trust relationship between AWS and the OIDC provider.
When you create the IAM OIDC provider, you specify the following:
The URL of the OIDC identity provider (IdP) to trust
A list of client IDs (also known as audiences) that identify the application or applications that are allowed to authenticate using the OIDC provider
A list of thumbprints of one or more server certificates that the IdP uses
You get all of this information from the OIDC IdP that you want to use to access AWS.
The trust for the OIDC provider is derived from the IAM provider that this operation creates. Therefore, it is best to limit access to the CreateOpenIDConnectProvider operation to highly privileged users.
Creates a new managed policy for your AWS account.
This operation creates a policy version with a version identifier of v1
and sets v1 as the policy's default version. For more information about policy versions, see Versioning for Managed Policies in the IAM User Guide.
For more information about managed policies in general, see Managed Policies and Inline Policies in the IAM User Guide.
", "CreatePolicyVersion": "Creates a new version of the specified managed policy. To update a managed policy, you create a new policy version. A managed policy can have up to five versions. If the policy has five versions, you must delete an existing version using DeletePolicyVersion before you create a new version.
Optionally, you can set the new version as the policy's default version. The default version is the version that is in effect for the IAM users, groups, and roles to which the policy is attached.
For more information about managed policy versions, see Versioning for Managed Policies in the IAM User Guide.
", "CreateRole": "Creates a new role for your AWS account. For more information about roles, go to IAM Roles. For information about limitations on role names and the number of roles you can create, go to Limitations on IAM Entities in the IAM User Guide.
", @@ -119,7 +119,7 @@ "ResyncMFADevice": "Synchronizes the specified MFA device with its IAM resource object on the AWS servers.
For more information about creating and working with virtual MFA devices, go to Using a Virtual MFA Device in the IAM User Guide.
", "SetDefaultPolicyVersion": "Sets the specified version of the specified policy as the policy's default (operative) version.
This operation affects all users, groups, and roles that the policy is attached to. To list the users, groups, and roles that the policy is attached to, use the ListEntitiesForPolicy API.
For information about managed policies, see Managed Policies and Inline Policies in the IAM User Guide.
", "SetSecurityTokenServicePreferences": "Sets the specified version of the global endpoint token as the token version used for the AWS account.
By default, AWS Security Token Service (STS) is available as a global service, and all STS requests go to a single endpoint at https://sts.amazonaws.com
. AWS recommends using Regional STS endpoints to reduce latency, build in redundancy, and increase session token availability. For information about Regional endpoints for STS, see AWS Regions and Endpoints in the AWS General Reference.
If you make an STS call to the global endpoint, the resulting session tokens might be valid in some Regions but not others. It depends on the version that is set in this operation. Version 1 tokens are valid only in AWS Regions that are available by default. These tokens do not work in manually enabled Regions, such as Asia Pacific (Hong Kong). Version 2 tokens are valid in all Regions. However, version 2 tokens are longer and might affect systems where you temporarily store tokens. For information, see Activating and Deactivating STS in an AWS Region in the IAM User Guide.
To view the current session token version, see the GlobalEndpointTokenVersion
entry in the response of the GetAccountSummary operation.
Simulate how a set of IAM policies and optionally a resource-based policy works with a list of API operations and AWS resources to determine the policies' effective permissions. The policies are provided as strings.
The simulation does not perform the API operations; it only checks the authorization to determine if the simulated policies allow or deny the operations.
If you want to simulate existing policies attached to an IAM user, group, or role, use SimulatePrincipalPolicy instead.
Context keys are variables maintained by AWS and its services that provide details about the context of an API query request. You can use the Condition
element of an IAM policy to evaluate context keys. To get the list of context keys that the policies require for correct simulation, use GetContextKeysForCustomPolicy.
If the output is long, you can use MaxItems
and Marker
parameters to paginate the results.
Simulate how a set of IAM policies and optionally a resource-based policy works with a list of API operations and AWS resources to determine the policies' effective permissions. The policies are provided as strings.
The simulation does not perform the API operations; it only checks the authorization to determine if the simulated policies allow or deny the operations.
If you want to simulate existing policies that are attached to an IAM user, group, or role, use SimulatePrincipalPolicy instead.
Context keys are variables that are maintained by AWS and its services and which provide details about the context of an API query request. You can use the Condition
element of an IAM policy to evaluate context keys. To get the list of context keys that the policies require for correct simulation, use GetContextKeysForCustomPolicy.
If the output is long, you can use MaxItems
and Marker
parameters to paginate the results.
Simulate how a set of IAM policies attached to an IAM entity works with a list of API operations and AWS resources to determine the policies' effective permissions. The entity can be an IAM user, group, or role. If you specify a user, then the simulation also includes all of the policies that are attached to groups that the user belongs to.
You can optionally include a list of one or more additional policies specified as strings to include in the simulation. If you want to simulate only policies specified as strings, use SimulateCustomPolicy instead.
You can also optionally include one resource-based policy to be evaluated with each of the resources included in the simulation.
The simulation does not perform the API operations; it only checks the authorization to determine if the simulated policies allow or deny the operations.
Note: This API discloses information about the permissions granted to other users. If you do not want users to see other user's permissions, then consider allowing them to use SimulateCustomPolicy instead.
Context keys are variables maintained by AWS and its services that provide details about the context of an API query request. You can use the Condition
element of an IAM policy to evaluate context keys. To get the list of context keys that the policies require for correct simulation, use GetContextKeysForPrincipalPolicy.
If the output is long, you can use the MaxItems
and Marker
parameters to paginate the results.
Adds one or more tags to an IAM role. The role can be a regular role or a service-linked role. If a tag with the same key name already exists, then that tag is overwritten with the new value.
A tag consists of a key name and an associated value. By assigning tags to your resources, you can do the following:
Administrative grouping and discovery - Attach tags to resources to aid in organization and search. For example, you could search for all resources with the key name Project and the value MyImportantProject. Or search for all resources with the key name Cost Center and the value 41200.
Access control - Reference tags in IAM user-based and resource-based policies. You can use tags to restrict access to only an IAM user or role that has a specified tag attached. You can also restrict access to only those resources that have a certain tag attached. For examples of policies that show how to use tags to control access, see Control Access Using IAM Tags in the IAM User Guide.
Cost allocation - Use tags to help track which individuals and teams are using which AWS resources.
Make sure that you have no invalid tags and that you do not exceed the allowed number of tags per role. In either case, the entire request fails and no tags are added to the role.
AWS always interprets the tag Value
as a single string. If you need to store an array, you can store comma-separated values in the string. However, you must interpret the value in your code.
For more information about tagging, see Tagging IAM Identities in the IAM User Guide.
", "TagUser": "Adds one or more tags to an IAM user. If a tag with the same key name already exists, then that tag is overwritten with the new value.
A tag consists of a key name and an associated value. By assigning tags to your resources, you can do the following:
Administrative grouping and discovery - Attach tags to resources to aid in organization and search. For example, you could search for all resources with the key name Project and the value MyImportantProject. Or search for all resources with the key name Cost Center and the value 41200.
Access control - Reference tags in IAM user-based and resource-based policies. You can use tags to restrict access to only an IAM requesting user or to a role that has a specified tag attached. You can also restrict access to only those resources that have a certain tag attached. For examples of policies that show how to use tags to control access, see Control Access Using IAM Tags in the IAM User Guide.
Cost allocation - Use tags to help track which individuals and teams are using which AWS resources.
Make sure that you have no invalid tags and that you do not exceed the allowed number of tags per role. In either case, the entire request fails and no tags are added to the role.
AWS always interprets the tag Value
as a single string. If you need to store an array, you can store comma-separated values in the string. However, you must interpret the value in your code.
For more information about tagging, see Tagging IAM Identities in the IAM User Guide.
", @@ -678,8 +678,8 @@ "EvalDecisionDetailsType": { "base": null, "refs": { - "EvaluationResult$EvalDecisionDetails": "Additional details about the results of the evaluation decision. When there are both IAM policies and resource policies, this parameter explains how each set of policies contributes to the final evaluation decision. When simulating cross-account access to a resource, both the resource-based policy and the caller's IAM policy must grant access. See How IAM Roles Differ from Resource-based Policies
", - "ResourceSpecificResult$EvalDecisionDetails": "Additional details about the results of the evaluation decision. When there are both IAM policies and resource policies, this parameter explains how each set of policies contributes to the final evaluation decision. When simulating cross-account access to a resource, both the resource-based policy and the caller's IAM policy must grant access.
" + "EvaluationResult$EvalDecisionDetails": "Additional details about the results of the cross-account evaluation decision. This parameter is populated for only cross-account simulations. It contains a brief summary of how each policy type contributes to the final evaluation decision.
If the simulation evaluates policies within the same account and includes a resource ARN, then the parameter is present but the response is empty. If the simulation evaluates policies within the same account and specifies all resources (*
), then the parameter is not returned.
When you make a cross-account request, AWS evaluates the request in the trusting account and the trusted account. The request is allowed only if both evaluations return true
. For more information about how policies are evaluated, see Evaluating Policies Within a Single Account.
If an AWS Organizations SCP included in the evaluation denies access, the simulation ends. In this case, policy evaluation does not proceed any further and this parameter is not returned.
", + "ResourceSpecificResult$EvalDecisionDetails": "Additional details about the results of the evaluation decision on a single resource. This parameter is returned only for cross-account simulations. This parameter explains how each policy type contributes to the resource-specific evaluation decision.
" } }, "EvalDecisionSourceType": { @@ -1386,6 +1386,13 @@ "AttachedPermissionsBoundary$PermissionsBoundaryType": " The permissions boundary usage type that indicates what type of IAM resource is used as the permissions boundary for an entity. This data type can only have a value of Policy
.
Contains information about the effect that a permissions boundary has on a policy simulation when the boundary is applied to an IAM entity.
", + "refs": { + "EvaluationResult$PermissionsBoundaryDecisionDetail": "Contains information about the effect that a permissions boundary has on a policy simulation when the boundary is applied to an IAM entity.
", + "ResourceSpecificResult$PermissionsBoundaryDecisionDetail": "Contains information about the effect that a permissions boundary has on a policy simulation when that boundary is applied to an IAM entity.
" + } + }, "Policy": { "base": "Contains information about a managed policy.
This data type is used as a response element in the CreatePolicy, GetPolicy, and ListPolicies operations.
For more information about managed policies, refer to Managed Policies and Inline Policies in the IAM User Guide.
", "refs": { @@ -1798,10 +1805,12 @@ "SimulationPolicyListType": { "base": null, "refs": { - "GetContextKeysForCustomPolicyRequest$PolicyInputList": "A list of policies for which you want the list of context keys referenced in those policies. Each document is specified as a string containing the complete, valid JSON text of an IAM policy.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
An optional list of additional policies for which you want the list of context keys that are referenced.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
A list of policy documents to include in the simulation. Each document is specified as a string containing the complete, valid JSON text of an IAM policy. Do not include any resource-based policies in this parameter. Any resource-based policy must be submitted with the ResourcePolicy
parameter. The policies cannot be \"scope-down\" policies, such as you could include in a call to GetFederationToken or one of the AssumeRole API operations. In other words, do not use policies designed to restrict what a user can do while using the temporary credentials.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
An optional list of additional policy documents to include in the simulation. Each document is specified as a string containing the complete, valid JSON text of an IAM policy.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
A list of policies for which you want the list of context keys referenced in those policies. Each document is specified as a string containing the complete, valid JSON text of an IAM policy.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
An optional list of additional policies for which you want the list of context keys that are referenced.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
A list of policy documents to include in the simulation. Each document is specified as a string containing the complete, valid JSON text of an IAM policy. Do not include any resource-based policies in this parameter. Any resource-based policy must be submitted with the ResourcePolicy
parameter. The policies cannot be \"scope-down\" policies, such as you could include in a call to GetFederationToken or one of the AssumeRole API operations. In other words, do not use policies designed to restrict what a user can do while using the temporary credentials.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
The IAM permissions boundary policy to simulate. The permissions boundary sets the maximum permissions that an IAM entity can have. You can input only one permissions boundary when you pass a policy to this operation. For more information about permissions boundaries, see Permissions Boundaries for IAM Entities in the IAM User Guide. The policy input is specified as a string that contains the complete, valid JSON text of a permissions boundary policy.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
An optional list of additional policy documents to include in the simulation. Each document is specified as a string containing the complete, valid JSON text of an IAM policy.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
The IAM permissions boundary policy to simulate. The permissions boundary sets the maximum permissions that the entity can have. You can input only one permissions boundary when you pass a policy to this operation. An IAM entity can only have one permissions boundary in effect at a time. For example, if a permissions boundary is attached to an entity and you pass in a different permissions boundary policy using this parameter, then the new permission boundary policy is used for the simulation. For more information about permissions boundaries, see Permissions Boundaries for IAM Entities in the IAM User Guide. The policy input is specified as a string containing the complete, valid JSON text of a permissions boundary policy.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
Specifies whether to require lowercase characters for IAM user passwords.
", "PasswordPolicy$AllowUsersToChangePassword": "Specifies whether IAM users are allowed to change their own password.
", "PasswordPolicy$ExpirePasswords": "Indicates whether passwords in the account expire. Returns true if MaxPasswordAge
contains a value greater than 0. Returns false if MaxPasswordAge is 0 or not present.
Specifies whether an action is allowed by a permissions boundary that is applied to an IAM entity (user or role). A value of true
means that the permissions boundary does not deny the action. This means that the policy includes an Allow
statement that matches the request. In this case, if an identity-based policy also allows the action, the request is allowed. A value of false
means that either the requested action is not allowed (implicitly denied) or that the action is explicitly denied by the permissions boundary. In both of these cases, the action is not allowed, regardless of the identity-based policy.
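In Go terms, reading that flag off an evaluation result might look like the following sketch; `r` is one iam.EvaluationResult from the SimulateCustomPolicy example earlier, and the field names are assumptions:

```go
if d := r.PermissionsBoundaryDecisionDetail; d != nil && d.AllowedByPermissionsBoundary != nil {
	if *d.AllowedByPermissionsBoundary {
		// The boundary contains a matching Allow statement; the
		// identity-based policy decides the final outcome.
		fmt.Println("not blocked by the permissions boundary")
	} else {
		// Implicitly or explicitly denied by the boundary: the action
		// is disallowed regardless of identity-based policies.
		fmt.Println("denied by the permissions boundary")
	}
}
```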
Specifies whether the policy can be attached to an IAM user, group, or role.
", "PolicyVersion$IsDefaultVersion": "Specifies whether the policy version is set as the policy's default version.
", "SimulatePolicyResponse$IsTruncated": "A flag that indicates whether there are more items to return. If your results were truncated, you can make a subsequent pagination request using the Marker
request parameter to retrieve more items. Note that IAM might return fewer than the MaxItems
number of results even when there are more results available. We recommend that you check IsTruncated
after every call to ensure that you receive all your results.
The contents of the public key certificate.
", "SigningCertificate$CertificateBody": "The contents of the signing certificate.
", - "UploadServerCertificateRequest$CertificateBody": "The contents of the public key certificate in PEM-encoded format.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The contents of the signing certificate.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The contents of the public key certificate in PEM-encoded format.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
The contents of the signing certificate.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020
) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF
)
The special characters tab (\\u0009
), line feed (\\u000A
), and carriage return (\\u000D
)
The contents of the public key certificate chain.
", - "UploadServerCertificateRequest$CertificateChain": "The contents of the certificate chain. This is typically a concatenation of the PEM-encoded public key certificates of the chain.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The contents of the certificate chain. This is typically a concatenation of the PEM-encoded public key certificates of the chain.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The IAM user's current password.
", - "ChangePasswordRequest$NewPassword": "The new password. The new password must conform to the AWS account's password policy, if one exists.
The regex pattern that is used to validate this parameter is a string of characters. That string can include almost any printable ASCII character from the space (\\u0020) through the end of the ASCII character range (\\u00FF). You can also include the tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D) characters. Any of these characters are valid in a password. However, many tools, such as the AWS Management Console, might restrict the ability to type certain characters because they have special meaning within that tool.
", - "CreateLoginProfileRequest$Password": "The new password for the user.
The regex pattern that is used to validate this parameter is a string of characters. That string can include almost any printable ASCII character from the space (\\u0020) through the end of the ASCII character range (\\u00FF). You can also include the tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D) characters. Any of these characters are valid in a password. However, many tools, such as the AWS Management Console, might restrict the ability to type certain characters because they have special meaning within that tool.
", - "UpdateLoginProfileRequest$Password": "The new password for the specified IAM user.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
However, the format can be further restricted by the account administrator by setting a password policy on the AWS account. For more information, see UpdateAccountPasswordPolicy.
" + "ChangePasswordRequest$NewPassword": "The new password. The new password must conform to the AWS account's password policy, if one exists.
The regex pattern that is used to validate this parameter is a string of characters. That string can include almost any printable ASCII character from the space (\\u0020) through the end of the ASCII character range (\\u00FF). You can also include the tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D) characters. Any of these characters are valid in a password. However, many tools, such as the AWS Management Console, might restrict the ability to type certain characters because they have special meaning within that tool.
The new password for the user.
The regex pattern that is used to validate this parameter is a string of characters. That string can include almost any printable ASCII character from the space (\\u0020) through the end of the ASCII character range (\\u00FF). You can also include the tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D) characters. Any of these characters are valid in a password. However, many tools, such as the AWS Management Console, might restrict the ability to type certain characters because they have special meaning within that tool.
The new password for the specified IAM user.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
However, the format can be further restricted by the account administrator by setting a password policy on the AWS account. For more information, see UpdateAccountPasswordPolicy.
" } }, "pathPrefixType": { "base": null, "refs": { - "ListGroupsRequest$PathPrefix": " The path prefix for filtering the results. For example, the prefix /division_abc/subdivision_xyz/
gets all groups whose path starts with /division_abc/subdivision_xyz/
.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all groups. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListInstanceProfilesRequest$PathPrefix": " The path prefix for filtering the results. For example, the prefix /application_abc/component_xyz/
gets all instance profiles whose path starts with /application_abc/component_xyz/
.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all instance profiles. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListRolesRequest$PathPrefix": " The path prefix for filtering the results. For example, the prefix /application_abc/component_xyz/
gets all roles whose path starts with /application_abc/component_xyz/
.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all roles. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListServerCertificatesRequest$PathPrefix": " The path prefix for filtering the results. For example: /company/servercerts
would get all server certificates for which the path starts with /company/servercerts
.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all server certificates. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListUsersRequest$PathPrefix": " The path prefix for filtering the results. For example: /division_abc/subdivision_xyz/
, which would get all user names whose path starts with /division_abc/subdivision_xyz/
.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all user names. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
" + "ListGroupsRequest$PathPrefix": " The path prefix for filtering the results. For example, the prefix /division_abc/subdivision_xyz/
gets all groups whose path starts with /division_abc/subdivision_xyz/
.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all groups. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021
) through the DEL character (\\u007F
), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. For example, the prefix /application_abc/component_xyz/ gets all instance profiles whose path starts with /application_abc/component_xyz/.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all instance profiles. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. For example, the prefix /application_abc/component_xyz/ gets all roles whose path starts with /application_abc/component_xyz/.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all roles. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. For example: /company/servercerts would get all server certificates for which the path starts with /company/servercerts.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all server certificates. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. For example: /division_abc/subdivision_xyz/, which would get all user names whose path starts with /division_abc/subdivision_xyz/.
This parameter is optional. If it is not included, it defaults to a slash (/), listing all user names. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
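As an illustration of the path-prefix filtering described above, a minimal Go sketch using ListGroups (same assumed v0.x request-builder pattern; the prefix value is the documentation's own example, and the same idea applies to ListRoles, ListUsers, and the other List* operations):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/iam"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := iam.New(cfg)

	// Only groups whose path begins with this prefix are returned; leaving
	// PathPrefix unset is equivalent to the default "/" (all groups).
	resp, err := svc.ListGroupsRequest(&iam.ListGroupsInput{
		PathPrefix: aws.String("/division_abc/subdivision_xyz/"),
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	for _, g := range resp.Groups {
		fmt.Println(*g.GroupName) // response fields are pointers in this SDK era
	}
}
```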
The path to the group. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "CreateInstanceProfileRequest$Path": "The path to the instance profile. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "CreateRoleRequest$Path": "The path to the role. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "CreateUserRequest$Path": "The path for the user name. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "CreateVirtualMFADeviceRequest$Path": "The path for the virtual MFA device. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", + "CreateGroupRequest$Path": "The path to the group. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path to the instance profile. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path to the role. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path for the user name. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path for the virtual MFA device. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path to the entity (user or role). For more information about paths, see IAM Identifiers in the IAM User Guide.
", "Group$Path": "The path to the group. For more information about paths, see IAM Identifiers in the IAM User Guide.
", "GroupDetail$Path": "The path to the group. For more information about paths, see IAM Identifiers in the IAM User Guide.
", "InstanceProfile$Path": "The path to the instance profile. For more information about paths, see IAM Identifiers in the IAM User Guide.
", - "ListEntitiesForPolicyRequest$PathPrefix": "The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all entities.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", + "ListEntitiesForPolicyRequest$PathPrefix": "The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all entities.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path to the role. For more information about paths, see IAM Identifiers in the IAM User Guide.
", "RoleDetail$Path": "The path to the role. For more information about paths, see IAM Identifiers in the IAM User Guide.
", "ServerCertificateMetadata$Path": "The path to the server certificate. For more information about paths, see IAM Identifiers in the IAM User Guide.
", - "UpdateGroupRequest$NewPath": "New path for the IAM group. Only include this if changing the group's path.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "UpdateServerCertificateRequest$NewPath": "The new path for the server certificate. Include this only if you are updating the server certificate's path.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "UpdateUserRequest$NewPath": "New path for the IAM user. Include this parameter only if you're changing the user's path.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "UploadServerCertificateRequest$Path": "The path for the server certificate. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/). This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
If you are uploading a server certificate specifically for use with Amazon CloudFront distributions, you must specify a path using the path parameter. The path must begin with /cloudfront and must include a trailing slash (for example, /cloudfront/test/).
New path for the IAM group. Only include this if changing the group's path.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The new path for the server certificate. Include this only if you are updating the server certificate's path.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
New path for the IAM user. Include this parameter only if you're changing the user's path.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path for the server certificate. For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/). This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
If you are uploading a server certificate specifically for use with Amazon CloudFront distributions, you must specify a path using the path parameter. The path must begin with /cloudfront and must include a trailing slash (for example, /cloudfront/test/).
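A hedged sketch of the CloudFront-specific upload just described; the certificate name is hypothetical, the PEM variables are placeholders, and the request-builder call follows the assumed v0.x pattern:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/iam"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := iam.New(cfg)

	// Placeholders standing in for real PEM-encoded material.
	certPEM, keyPEM, chainPEM := "...", "...", "..."

	_, err = svc.UploadServerCertificateRequest(&iam.UploadServerCertificateInput{
		ServerCertificateName: aws.String("cf-example-cert"),  // hypothetical name
		Path:                  aws.String("/cloudfront/test/"), // must begin with /cloudfront and end with a slash
		CertificateBody:       aws.String(certPEM),
		PrivateKey:            aws.String(keyPEM),
		CertificateChain:      aws.String(chainPEM), // optional concatenated chain
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
}
```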
The path to the user. For more information about paths, see IAM Identifiers in the IAM User Guide.
", "UserDetail$Path": "The path to the user. For more information about paths, see IAM Identifiers in the IAM User Guide.
" } @@ -2745,23 +2755,23 @@ "policyDocumentType": { "base": null, "refs": { - "CreatePolicyRequest$PolicyDocument": "The JSON policy document that you want to use as the content for the new policy.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The JSON policy document that you want to use as the content for this new version of the policy.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The trust relationship policy document that grants an entity permission to assume the role.
In IAM, you must provide a JSON policy that has been converted to a string. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
Upon success, the response includes the same trust policy in JSON format.
", + "CreatePolicyRequest$PolicyDocument": "The JSON policy document that you want to use as the content for the new policy.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The JSON policy document that you want to use as the content for this new version of the policy.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The trust relationship policy document that grants an entity permission to assume the role.
In IAM, you must provide a JSON policy that has been converted to a string. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
Upon success, the response includes the same trust policy in JSON format.
", "GetGroupPolicyResponse$PolicyDocument": "The policy document.
IAM stores policies in JSON format. However, resources that were created using AWS CloudFormation templates can be formatted in YAML. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
", "GetRolePolicyResponse$PolicyDocument": "The policy document.
IAM stores policies in JSON format. However, resources that were created using AWS CloudFormation templates can be formatted in YAML. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
", "GetUserPolicyResponse$PolicyDocument": "The policy document.
IAM stores policies in JSON format. However, resources that were created using AWS CloudFormation templates can be formatted in YAML. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
", "PolicyDetail$PolicyDocument": "The policy document.
", "PolicyVersion$Document": "The policy document.
The policy document is returned in the response to the GetPolicyVersion and GetAccountAuthorizationDetails operations. It is not returned in the response to the CreatePolicyVersion or ListPolicyVersions operations.
The policy document returned in this structure is URL-encoded compliant with RFC 3986. You can use a URL decoding method to convert the policy back to plain JSON text. For example, if you use Java, you can use the decode
method of the java.net.URLDecoder
utility class in the Java SDK. Other languages and SDKs provide similar functionality.
The policy document.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy document.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy document.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy document.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy document.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy document.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy that grants an entity permission to assume the role.
", "RoleDetail$AssumeRolePolicyDocument": "The trust policy that grants permission to assume the role.
", - "SimulateCustomPolicyRequest$ResourcePolicy": "A resource-based policy to include in the simulation provided as a string. Each resource in the simulation is treated as if it had this policy attached. You can include only one resource-based policy in a simulation.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
A resource-based policy to include in the simulation provided as a string. Each resource in the simulation is treated as if it had this policy attached. You can include only one resource-based policy in a simulation.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
A resource-based policy to include in the simulation provided as a string. Each resource in the simulation is treated as if it had this policy attached. You can include only one resource-based policy in a simulation.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
A resource-based policy to include in the simulation provided as a string. Each resource in the simulation is treated as if it had this policy attached. You can include only one resource-based policy in a simulation.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy that grants an entity permission to assume the role.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The policy that grants an entity permission to assume the role.
You must provide policies in JSON format in IAM. However, for AWS CloudFormation templates formatted in YAML, you can provide the policy in JSON or YAML format. AWS CloudFormation always converts a YAML policy to JSON format before submitting it to IAM.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The path for the policy.
For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListAttachedGroupPoliciesRequest$PathPrefix": "The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListAttachedRolePoliciesRequest$PathPrefix": "The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListAttachedUserPoliciesRequest$PathPrefix": "The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", - "ListPoliciesRequest$PathPrefix": "The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
", + "CreatePolicyRequest$Path": "The path for the policy.
For more information about paths, see IAM Identifiers in the IAM User Guide.
This parameter is optional. If it is not included, it defaults to a slash (/).
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies.
This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path prefix for filtering the results. This parameter is optional. If it is not included, it defaults to a slash (/), listing all policies. This parameter allows (through its regex pattern) a string of characters consisting of either a forward slash (/) by itself or a string that must begin and end with forward slashes. In addition, it can contain any ASCII character from the ! (\\u0021) through the DEL character (\\u007F), including most punctuation characters, digits, and upper and lowercased letters.
The path to the policy.
For more information about paths, see IAM Identifiers in the IAM User Guide.
", "Policy$Path": "The path to the policy.
For more information about paths, see IAM Identifiers in the IAM User Guide.
" } @@ -2872,7 +2882,7 @@ "privateKeyType": { "base": null, "refs": { - "UploadServerCertificateRequest$PrivateKey": "The contents of the private key in PEM-encoded format.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The contents of the private key in PEM-encoded format.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The SSH public key.
", - "UploadSSHPublicKeyRequest$SSHPublicKeyBody": "The SSH public key. The public key must be encoded in ssh-rsa format or PEM format. The minimum bit-length of the public key is 2048 bits. For example, you can generate a 2048-bit key, and the resulting PEM file is 1679 bytes long.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The SSH public key. The public key must be encoded in ssh-rsa format or PEM format. The minimum bit-length of the public key is 2048 bits. For example, you can generate a 2048-bit key, and the resulting PEM file is 1679 bytes long.
The regex pattern used to validate this parameter is a string of characters consisting of the following:
Any printable ASCII character ranging from the space character (\\u0020) through the end of the ASCII character range
The printable characters in the Basic Latin and Latin-1 Supplement character set (through \\u00FF)
The special characters tab (\\u0009), line feed (\\u000A), and carriage return (\\u000D)
The ID of the AWS IoT SiteWise asset. You must specify either a propertyAlias or both an analiasId and a propertyId. Accepts substitution templates.
The ID of the AWS IoT SiteWise asset. You must specify either a propertyAlias or both an aliasId and a propertyId. Accepts substitution templates.
The ID of the asset's property. You must specify either a propertyAlias or both an analiasId and a propertyId. Accepts substitution templates.
The ID of the asset's property. You must specify either a propertyAlias or both an aliasId and a propertyId. Accepts substitution templates.
Configuration for the rollout of OTA updates.
" } }, + "AwsJobPresignedUrlConfig": { + "base": "Configuration information for pre-signed URLs. Valid when protocols
contains HTTP.
Configuration information for pre-signed URLs.
", + "OTAUpdateInfo$awsJobPresignedUrlConfig": "Configuration information for pre-signed URLs. Valid when protocols
contains HTTP.
A Device Defender security profile behavior.
", "refs": { @@ -2683,6 +2690,12 @@ "PresignedUrlConfig$expiresInSec": "How long (in seconds) pre-signed URLs are valid. Valid values are 60 - 3600, the default value is 3600 seconds. Pre-signed URLs are generated when Jobs receives an MQTT request for the job document.
" } }, + "ExpiresInSeconds": { + "base": null, + "refs": { + "AwsJobPresignedUrlConfig$expiresInSec": "How long (in seconds) pre-signed URLs are valid. Valid values are 60 - 3600, the default value is 1800 seconds. Pre-signed URLs are generated when a request for the job document is received.
" + } + }, "ExplicitDeny": { "base": "Information that explicitly denies authorization.
", "refs": { @@ -4209,7 +4222,7 @@ "MqttContext": { "base": "Specifies the MQTT context to use for the test authorizer request
", "refs": { - "TestInvokeAuthorizerRequest$mqttContext": "Specifies a test MQTT authorization request.>
" + "TestInvokeAuthorizerRequest$mqttContext": "Specifies a test MQTT authorization request.
" } }, "MqttPassword": { @@ -4732,6 +4745,19 @@ "JobProcessDetails$processingTargets": "The target devices to which the job execution is being rolled out. This value will be null after the job execution has finished rolling out to all the target devices.
" } }, + "Protocol": { + "base": null, + "refs": { + "Protocols$member": null + } + }, + "Protocols": { + "base": null, + "refs": { + "CreateOTAUpdateRequest$protocols": "The protocol used to transfer the OTA update image. Valid values are [HTTP], [MQTT], [HTTP, MQTT]. When both HTTP and MQTT are specified, the target device can choose the protocol.
", + "OTAUpdateInfo$protocols": "The protocol used to transfer the OTA update image. Valid values are [HTTP], [MQTT], [HTTP, MQTT]. When both HTTP and MQTT are specified, the target device can choose the protocol.
" + } + }, "ProvisioningTemplateListing": { "base": null, "refs": { diff --git a/models/apis/iotevents/2018-07-27/docs-2.json b/models/apis/iotevents/2018-07-27/docs-2.json index ec62ac6c171..d365d16ac42 100644 --- a/models/apis/iotevents/2018-07-27/docs-2.json +++ b/models/apis/iotevents/2018-07-27/docs-2.json @@ -39,7 +39,7 @@ "CreateDetectorModelRequest$roleArn": "The ARN of the role that grants permission to AWS IoT Events to perform its operations.
", "DetectorModelConfiguration$roleArn": "The ARN of the role that grants permission to AWS IoT Events to perform its operations.
", "DetectorModelVersionSummary$roleArn": "The ARN of the role that grants the detector model permission to perform its tasks.
", - "LambdaAction$functionArn": "The ARN of the AWS Lambda function which is executed.
", + "LambdaAction$functionArn": "The ARN of the Lambda function that is executed.
", "ListTagsForResourceRequest$resourceArn": "The ARN of the resource.
", "LoggingOptions$roleArn": "The ARN of the role that grants permission to AWS IoT Events to perform logging.
", "SNSTopicPublishAction$targetArn": "The ARN of the Amazon SNS target where the message is sent.
", @@ -58,8 +58,8 @@ "base": null, "refs": { "Attribute$jsonPath": "An expression that specifies an attribute-value pair in a JSON structure. Use this to specify an attribute from the JSON payload that is made available by the input. Inputs are derived from messages sent to the AWS IoT Events system (BatchPutMessage
). Each such message contains a JSON payload, and the attribute (and its paired value) specified here are available for use in the \"condition\"
expressions used by detectors.
Syntax: <field-name>.<field-name>...
The input attribute key used to identify a device or system to create a detector (an instance of the detector model) and then to route each input received to the appropriate detector (instance). This parameter uses a JSON-path expression to specify the attribute-value pair in the message payload of each input that is used to identify the device associated with the input.
", - "DetectorModelConfiguration$key": "The input attribute key used to identify a device or system to create a detector (an instance of the detector model) and then to route each input received to the appropriate detector (instance). This parameter uses a JSON-path expression to specify the attribute-value pair in the message payload of each input that is used to identify the device associated with the input.
" + "CreateDetectorModelRequest$key": "The input attribute key used to identify a device or system in order to create a detector (an instance of the detector model) and then to route each input received to the appropriate detector (instance). This parameter uses a JSON-path expression to specify the attribute-value pair in the message payload of each input that is used to identify the device associated with the input.
", + "DetectorModelConfiguration$key": "The input attribute key used to identify a device or system in order to create a detector (an instance of the detector model) and then to route each input received to the appropriate detector (instance). This parameter uses a JSON-path expression to specify the attribute-value pair in the message payload of each input that is used to identify the device associated with the input.
" } }, "Attributes": { @@ -263,10 +263,10 @@ "EvaluationMethod": { "base": null, "refs": { - "CreateDetectorModelRequest$evaluationMethod": "When set to SERIAL
, variables are updated and event conditions evaluated in the order that the events are defined. When set to BATCH, variables are updated and events performed only after all event conditions are evaluated.
When set to SERIAL, variables are updated and event conditions evaluated in the order that the events are defined. When set to BATCH, variables are updated and events performed only after all event conditions are evaluated.
When set to SERIAL, variables are updated and event conditions evaluated in the order that the events are defined. When set to BATCH, variables are updated and events performed only after all event conditions are evaluated.
When set to SERIAL, variables are updated and event conditions evaluated in the order that the events are defined. When set to BATCH, variables are updated and events performed only after all event conditions are evaluated.
Information about the order in which events are evaluated and how actions are executed.
", + "DetectorModelConfiguration$evaluationMethod": "Information about the order in which events are evaluated and how actions are executed.
", + "DetectorModelVersionSummary$evaluationMethod": "Information about the order in which events are evaluated and how actions are executed.
", + "UpdateDetectorModelRequest$evaluationMethod": "Information about the order in which events are evaluated and how actions are executed.
" } }, "Event": { @@ -291,9 +291,9 @@ } }, "FirehoseAction": { - "base": "Sends information about the detector model instance and the event which triggered the action to a Kinesis Data Firehose delivery stream.
", + "base": "Sends information about the detector model instance and the event that triggered the action to a Kinesis Data Firehose delivery stream.
", "refs": { - "Action$firehose": "Sends information about the detector model instance and the event which triggered the action to a Kinesis Data Firehose delivery stream.
" + "Action$firehose": "Sends information about the detector model instance and the event that triggered the action to a Kinesis Data Firehose delivery stream.
" } }, "FirehoseSeparator": { @@ -382,9 +382,9 @@ } }, "IotEventsAction": { - "base": "Sends an IoT Events input, passing in information about the detector model instance and the event which triggered the action.
", + "base": "Sends an AWS IoT Events input, passing in information about the detector model instance and the event that triggered the action.
", "refs": { - "Action$iotEvents": "Sends an IoT Events input, passing in information about the detector model instance and the event which triggered the action.
" + "Action$iotEvents": "Sends an IoT Events input, passing in information about the detector model instance and the event that triggered the action.
" } }, "IotTopicPublishAction": { @@ -400,9 +400,9 @@ } }, "LambdaAction": { - "base": "Calls an AWS Lambda function, passing in information about the detector model instance and the event which triggered the action.
", + "base": "Calls a Lambda function, passing in information about the detector model instance and the event that triggered the action.
", "refs": { - "Action$lambda": "Calls an AWS Lambda function, passing in information about the detector model instance and the event which triggered the action.
" + "Action$lambda": "Calls a Lambda function, passing in information about the detector model instance and the event that triggered the action.
" } }, "LimitExceededException": { @@ -520,7 +520,7 @@ "QueueUrl": { "base": null, "refs": { - "SqsAction$queueUrl": "The URL of the Amazon SQS queue where the data is written.
" + "SqsAction$queueUrl": "The URL of the SQS queue where the data is written.
" } }, "ResetTimerAction": { @@ -574,9 +574,9 @@ } }, "SqsAction": { - "base": "Sends information about the detector model instance and the event which triggered the action to an Amazon SQS queue.
", + "base": "Sends information about the detector model instance and the event that triggered the action to an Amazon SQS queue.
", "refs": { - "Action$sqs": "Sends information about the detector model instance and the event which triggered the action to an Amazon SQS queue.
" + "Action$sqs": "Sends information about the detector model instance and the event that triggered the action to an Amazon SQS queue.
" } }, "State": { diff --git a/models/apis/kinesisanalyticsv2/2018-05-23/api-2.json b/models/apis/kinesisanalyticsv2/2018-05-23/api-2.json index 947ccab6c37..6b7f32110a4 100644 --- a/models/apis/kinesisanalyticsv2/2018-05-23/api-2.json +++ b/models/apis/kinesisanalyticsv2/2018-05-23/api-2.json @@ -1889,7 +1889,8 @@ "type":"string", "enum":[ "SQL-1_0", - "FLINK-1_6" + "FLINK-1_6", + "FLINK-1_8" ] }, "S3ApplicationCodeLocationDescription":{ diff --git a/models/apis/kms/2014-11-01/api-2.json b/models/apis/kms/2014-11-01/api-2.json index 3500cfdabb6..ac0226817bd 100644 --- a/models/apis/kms/2014-11-01/api-2.json +++ b/models/apis/kms/2014-11-01/api-2.json @@ -915,7 +915,9 @@ "NETWORK_ERRORS", "INTERNAL_ERROR", "INSUFFICIENT_CLOUDHSM_HSMS", - "USER_LOCKED_OUT" + "USER_LOCKED_OUT", + "USER_NOT_FOUND", + "USER_LOGGED_IN" ] }, "ConnectionStateType":{ @@ -1696,7 +1698,8 @@ }, "KeyStorePasswordType":{ "type":"string", - "min":1, + "max":32, + "min":7, "sensitive":true }, "KeyUnavailableException":{ diff --git a/models/apis/kms/2014-11-01/docs-2.json b/models/apis/kms/2014-11-01/docs-2.json index 759effc4684..eb48a2e5468 100644 --- a/models/apis/kms/2014-11-01/docs-2.json +++ b/models/apis/kms/2014-11-01/docs-2.json @@ -3,11 +3,11 @@ "service": "AWS Key Management Service (AWS KMS) is an encryption and key management web service. This guide describes the AWS KMS operations that you can call programmatically. For general information about AWS KMS, see the AWS Key Management Service Developer Guide .
AWS provides SDKs that consist of libraries and sample code for various programming languages and platforms (Java, Ruby, .Net, macOS, Android, etc.). The SDKs provide a convenient way to create programmatic access to AWS KMS and other AWS services. For example, the SDKs take care of tasks such as signing requests (see below), managing errors, and retrying requests automatically. For more information about the AWS SDKs, including how to download and install them, see Tools for Amazon Web Services.
We recommend that you use the AWS SDKs to make programmatic API calls to AWS KMS.
Clients must support TLS (Transport Layer Security) 1.0. We recommend TLS 1.2. Clients must also support cipher suites with Perfect Forward Secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes.
Signing Requests
Requests must be signed by using an access key ID and a secret access key. We strongly recommend that you do not use your AWS account (root) access key ID and secret key for everyday work with AWS KMS. Instead, use the access key ID and secret access key for an IAM user. You can also use the AWS Security Token Service to generate temporary security credentials that you can use to sign requests.
All AWS KMS operations require Signature Version 4.
Logging API Requests
AWS KMS supports AWS CloudTrail, a service that logs AWS API calls and related events for your AWS account and delivers them to an Amazon S3 bucket that you specify. By using the information collected by CloudTrail, you can determine what requests were made to AWS KMS, who made the request, when it was made, and so on. To learn more about CloudTrail, including how to turn it on and find your log files, see the AWS CloudTrail User Guide.
Additional Resources
For more information about credentials and request signing, see the following:
AWS Security Credentials - This topic provides general information about the types of credentials used for accessing AWS.
Temporary Security Credentials - This section of the IAM User Guide describes how to create and use temporary security credentials.
Signature Version 4 Signing Process - This set of topics walks you through the process of signing a request using an access key ID and a secret access key.
Commonly Used API Operations
Of the API operations discussed in this guide, the following will prove the most useful for most applications. You will likely perform operations other than these, such as creating keys and assigning policies, by using the console.
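Since this service description recommends calling AWS KMS through an SDK, here is a minimal Go sketch of what such a call looks like with this repository's v0.19-era request/Send API. This is a sketch under assumptions, not part of the model files: default credentials and region are assumed to be configured, and the later snippets in this section reuse the `svc` client and `ctx` defined here.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

func main() {
	// Credentials and region come from the environment/shared config.
	// Signature Version 4 signing and retries are handled by the SDK.
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := kms.New(cfg)
	ctx := context.Background()

	// Every operation follows the same pattern: build a request, then Send it.
	resp, err := svc.ListKeysRequest(&kms.ListKeysInput{}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, k := range resp.Keys {
		fmt.Println(*k.KeyId)
	}
}
```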
", "operations": { "CancelKeyDeletion": "Cancels the deletion of a customer master key (CMK). When this operation succeeds, the key state of the CMK is Disabled
. To enable the CMK, use EnableKey. You cannot perform this operation on a CMK in a different AWS account.
For more information about scheduling and canceling deletion of a CMK, see Deleting Customer Master Keys in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
", - "ConnectCustomKeyStore": "Connects or reconnects a custom key store to its associated AWS CloudHSM cluster.
The custom key store must be connected before you can create customer master keys (CMKs) in the key store or use the CMKs it contains. You can disconnect and reconnect a custom key store at any time.
To connect a custom key store, its associated AWS CloudHSM cluster must have at least one active HSM. To get the number of active HSMs in a cluster, use the DescribeClusters operation. To add HSMs to the cluster, use the CreateHsm operation.
The connection process can take an extended amount of time to complete, up to 20 minutes. This operation starts the connection process, but it does not wait for it to complete. When it succeeds, this operation quickly returns an HTTP 200 response and a JSON object with no properties. However, this response does not indicate that the custom key store is connected. To get the connection state of the custom key store, use the DescribeCustomKeyStores operation.
During the connection process, AWS KMS finds the AWS CloudHSM cluster that is associated with the custom key store, creates the connection infrastructure, connects to the cluster, logs into the AWS CloudHSM client as the kmsuser
crypto user (CU), and rotates its password.
The ConnectCustomKeyStore
operation might fail for various reasons. To find the reason, use the DescribeCustomKeyStores operation and see the ConnectionErrorCode
in the response. For help interpreting the ConnectionErrorCode
, see CustomKeyStoresListEntry.
To fix the failure, use the DisconnectCustomKeyStore operation to disconnect the custom key store, correct the error, use the UpdateCustomKeyStore operation if necessary, and then use ConnectCustomKeyStore
again.
If you are having trouble connecting or disconnecting a custom key store, see Troubleshooting a Custom Key Store in the AWS Key Management Service Developer Guide.
", + "ConnectCustomKeyStore": "Connects or reconnects a custom key store to its associated AWS CloudHSM cluster.
The custom key store must be connected before you can create customer master keys (CMKs) in the key store or use the CMKs it contains. You can disconnect and reconnect a custom key store at any time.
To connect a custom key store, its associated AWS CloudHSM cluster must have at least one active HSM. To get the number of active HSMs in a cluster, use the DescribeClusters operation. To add HSMs to the cluster, use the CreateHsm operation. Also, the kmsuser
crypto user (CU) must not be logged into the cluster. This prevents AWS KMS from using this account to log in.
The connection process can take an extended amount of time to complete, up to 20 minutes. This operation starts the connection process, but it does not wait for it to complete. When it succeeds, this operation quickly returns an HTTP 200 response and a JSON object with no properties. However, this response does not indicate that the custom key store is connected. To get the connection state of the custom key store, use the DescribeCustomKeyStores operation.
During the connection process, AWS KMS finds the AWS CloudHSM cluster that is associated with the custom key store, creates the connection infrastructure, connects to the cluster, logs into the AWS CloudHSM client as the kmsuser
CU, and rotates its password.
The ConnectCustomKeyStore
operation might fail for various reasons. To find the reason, use the DescribeCustomKeyStores operation and see the ConnectionErrorCode
in the response. For help interpreting the ConnectionErrorCode
, see CustomKeyStoresListEntry.
To fix the failure, use the DisconnectCustomKeyStore operation to disconnect the custom key store, correct the error, use the UpdateCustomKeyStore operation if necessary, and then use ConnectCustomKeyStore
again.
If you are having trouble connecting or disconnecting a custom key store, see Troubleshooting a Custom Key Store in the AWS Key Management Service Developer Guide.
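The ConnectCustomKeyStore flow described above is asynchronous, so callers poll DescribeCustomKeyStores for the resulting state. A hedged sketch, reusing `svc` and `ctx` from the first snippet (it also needs the `aws` and `time` imports); the key store ID is a placeholder, and the `ConnectionStateType` constant name assumes the SDK's usual enum naming:

```go
keyStoreID := aws.String("cks-1234567890abcdef0") // placeholder ID

// Returns quickly; a 200 response only means the connection process started.
if _, err := svc.ConnectCustomKeyStoreRequest(&kms.ConnectCustomKeyStoreInput{
	CustomKeyStoreId: keyStoreID,
}).Send(ctx); err != nil {
	log.Fatal(err)
}

// Poll for the final connection state.
for {
	out, err := svc.DescribeCustomKeyStoresRequest(&kms.DescribeCustomKeyStoresInput{
		CustomKeyStoreId: keyStoreID,
	}).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
	entry := out.CustomKeyStores[0]
	if entry.ConnectionState != kms.ConnectionStateTypeConnecting {
		// On FAILED, ConnectionErrorCode (see CustomKeyStoresListEntry) says why.
		fmt.Println("state:", entry.ConnectionState, "error:", entry.ConnectionErrorCode)
		break
	}
	time.Sleep(10 * time.Second)
}
```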
", "CreateAlias": "Creates a display name for a customer managed customer master key (CMK). You can use an alias to identify a CMK in cryptographic operations, such as Encrypt and GenerateDataKey. You can change the CMK associated with the alias at any time.
Aliases are easier to remember than key IDs. They can also help to simplify your applications. For example, if you use an alias in your code, you can change the CMK your code uses by associating a given alias with a different CMK.
To run the same code in multiple AWS regions, use an alias in your code, such as alias/ApplicationKey
. Then, in each AWS Region, create an alias/ApplicationKey
alias that is associated with a CMK in that Region. When you run your code, it uses the alias/ApplicationKey
CMK for that AWS Region without any Region-specific code.
This operation does not return a response. To get the alias that you created, use the ListAliases operation.
To use aliases successfully, be aware of the following information.
Each alias points to only one CMK at a time, although a single CMK can have multiple aliases. The alias and its associated CMK must be in the same AWS account and Region.
You can associate an alias with any customer managed CMK in the same AWS account and Region. However, you do not have permission to associate an alias with an AWS managed CMK or an AWS owned CMK.
To change the CMK associated with an alias, use the UpdateAlias operation. The current CMK and the new CMK must be the same type (both symmetric or both asymmetric) and they must have the same key usage (ENCRYPT_DECRYPT
or SIGN_VERIFY
). This restriction prevents cryptographic errors in code that uses aliases.
The alias name must begin with alias/
followed by a name, such as alias/ExampleAlias
. It can contain only alphanumeric characters, forward slashes (/), underscores (_), and dashes (-). The alias name cannot begin with alias/aws/
. The alias/aws/
prefix is reserved for AWS managed CMKs.
The alias name must be unique within an AWS Region. However, you can use the same alias name in multiple Regions of the same AWS account. Each instance of the alias is associated with a CMK in its Region.
After you create an alias, you cannot change its alias name. However, you can use the DeleteAlias operation to delete the alias and then create a new alias with the desired name.
You can use an alias name or alias ARN to identify a CMK in AWS KMS cryptographic operations and in the DescribeKey operation. However, you cannot use alias names or alias ARNs in API operations that manage CMKs, such as DisableKey or GetKeyPolicy. For information about the valid CMK identifiers for each AWS KMS API operation, see the descriptions of the KeyId
parameter in the API operation documentation.
Because an alias is not a property of a CMK, you can delete and change the aliases of a CMK without affecting the CMK. Also, aliases do not appear in the response from the DescribeKey operation. To get the aliases and alias ARNs of CMKs in each AWS account and Region, use the ListAliases operation.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
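A small sketch of the alias rules above, reusing `svc` and `ctx` from the first snippet; the alias name and key ID are placeholders:

```go
// Alias names must start with "alias/"; the "alias/aws/" prefix is reserved
// for AWS managed CMKs. CreateAlias returns no response body.
if _, err := svc.CreateAliasRequest(&kms.CreateAliasInput{
	AliasName:   aws.String("alias/ExampleAlias"),
	TargetKeyId: aws.String("1234abcd-12ab-34cd-56ef-1234567890ab"),
}).Send(ctx); err != nil {
	log.Fatal(err)
}
// To point the alias at a different CMK later, call UpdateAlias; to confirm
// the alias was created, list it with ListAliases.
```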
", "CreateCustomKeyStore": "Creates a custom key store that is associated with an AWS CloudHSM cluster that you own and manage.
This operation is part of the Custom Key Store feature in AWS KMS, which combines the convenience and extensive integration of AWS KMS with the isolation and control of a single-tenant key store.
Before you create the custom key store, you must assemble the required elements, including an AWS CloudHSM cluster that fulfills the requirements for a custom key store. For details about the required elements, see Assemble the Prerequisites in the AWS Key Management Service Developer Guide.
When the operation completes successfully, it returns the ID of the new custom key store. Before you can use your new custom key store, you need to use the ConnectCustomKeyStore operation to connect the new key store to its AWS CloudHSM cluster. Even if you are not going to use your custom key store immediately, you might want to connect it to verify that all settings are correct and then disconnect it until you are ready to use it.
For help with failures, see Troubleshooting a Custom Key Store in the AWS Key Management Service Developer Guide.
", "CreateGrant": "Adds a grant to a customer master key (CMK). The grant allows the grantee principal to use the CMK when the conditions specified in the grant are met. When setting permissions, grants are an alternative to key policies.
To create a grant that allows a cryptographic operation only when the request includes a particular encryption context, use the Constraints
parameter. For details, see GrantConstraints.
You can create grants on symmetric and asymmetric CMKs. However, if the grant allows an operation that the CMK does not support, CreateGrant
fails with a ValidationException
.
Grants for symmetric CMKs cannot allow operations that are not supported for symmetric CMKs, including Sign, Verify, and GetPublicKey. (There are limited exceptions to this rule for legacy operations, but you should not create a grant for an operation that AWS KMS does not support.)
Grants for asymmetric CMKs cannot allow operations that are not supported for asymmetric CMKs, including operations that generate data keys or data key pairs, or operations related to automatic key rotation, imported key material, or CMKs in custom key stores.
Grants for asymmetric CMKs with a KeyUsage
of ENCRYPT_DECRYPT
cannot allow the Sign or Verify operations. Grants for asymmetric CMKs with a KeyUsage
of SIGN_VERIFY
cannot allow the Encrypt or Decrypt operations.
Grants for asymmetric CMKs cannot include an encryption context grant constraint. An encryption context is not supported on asymmetric CMKs.
For information about symmetric and asymmetric CMKs, see Using Symmetric and Asymmetric CMKs in the AWS Key Management Service Developer Guide.
To perform this operation on a CMK in a different AWS account, specify the key ARN in the value of the KeyId
parameter. For more information about grants, see Grants in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
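A hedged sketch of a grant with an encryption context constraint, as described above (reuses `svc`/`ctx`; the ARNs are placeholders, and the `GrantOperation` constant names assume the SDK's standard enum codegen):

```go
// Allow a principal to use the CMK for Encrypt/Decrypt, but only when the
// request's encryption context contains the given pair (GrantConstraints).
resp, err := svc.CreateGrantRequest(&kms.CreateGrantInput{
	KeyId:            aws.String("arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab"),
	GranteePrincipal: aws.String("arn:aws:iam::111122223333:role/ExampleRole"),
	Operations: []kms.GrantOperation{
		kms.GrantOperationEncrypt,
		kms.GrantOperationDecrypt,
	},
	Constraints: &kms.GrantConstraints{
		EncryptionContextSubset: map[string]string{"Department": "Finance"},
	},
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println("grant id:", *resp.GrantId)
```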
", - "CreateKey": "Creates a unique customer managed customer master key (CMK) in your AWS account and Region. You cannot use this operation to create a CMK in a different AWS account.
You can use the CreateKey
operation to create symmetric or asymmetric CMKs.
Symmetric CMKs contain a 256-bit symmetric key that never leaves AWS KMS unencrypted. To use the CMK, you must call AWS KMS. You can use a symmetric CMK to encrypt and decrypt small amounts of data, but they are typically used to generate data keys or data key pairs. For details, see GenerateDataKey and GenerateDataKeyPair.
Asymmetric CMKs can contain an RSA key pair or an Elliptic Curve (ECC) key pair. The private key in an asymmetric CMK never leaves AWS KMS unencrypted. However, you can use the GetPublicKey operation to download the public key so it can be used outside of AWS KMS. CMKs with RSA key pairs can be used to encrypt or decrypt data or sign and verify messages (but not both). CMKs with ECC key pairs can be used only to sign and verify messages.
For information about symmetric and asymmetric CMKs, see Using Symmetric and Asymmetric CMKs in the AWS Key Management Service Developer Guide.
To create different types of CMKs, use the following guidance:
To create an asymmetric CMK, use the CustomerMasterKeySpec
parameter to specify the type of key material in the CMK. Then, use the KeyUsage
parameter to determine whether the CMK will be used to encrypt and decrypt or sign and verify. You can't change these properties after the CMK is created.
When creating a symmetric CMK, you don't need to specify the CustomerMasterKeySpec
or KeyUsage
parameters. The default value for CustomerMasterKeySpec
, SYMMETRIC_DEFAULT
, and the default value for KeyUsage
, ENCRYPT_DECRYPT
, are the only valid values for symmetric CMKs.
To import your own key material, begin by creating a symmetric CMK with no key material. To do this, use the Origin
parameter of CreateKey
with a value of EXTERNAL
. Next, use the GetParametersForImport operation to get a public key and import token, and use the public key to encrypt your key material. Then, use ImportKeyMaterial with your import token to import the key material. For step-by-step instructions, see Importing Key Material in the AWS Key Management Service Developer Guide. You cannot import the key material into an asymmetric CMK.
To create a symmetric CMK in a custom key store, use the CustomKeyStoreId
parameter to specify the custom key store. You must also use the Origin
parameter with a value of AWS_CLOUDHSM
. The AWS CloudHSM cluster that is associated with the custom key store must have at least two active HSMs in different Availability Zones in the AWS Region.
You cannot create an asymmetric CMK in a custom key store. For information about custom key stores in AWS KMS, see Using Custom Key Stores in the AWS Key Management Service Developer Guide.
", + "CreateKey": "Creates a unique customer managed customer master key (CMK) in your AWS account and Region. You cannot use this operation to create a CMK in a different AWS account.
You can use the CreateKey
operation to create symmetric or asymmetric CMKs.
Symmetric CMKs contain a 256-bit symmetric key that never leaves AWS KMS unencrypted. To use the CMK, you must call AWS KMS. You can use a symmetric CMK to encrypt and decrypt small amounts of data, but they are typically used to generate data keys and data key pairs. For details, see GenerateDataKey and GenerateDataKeyPair.
Asymmetric CMKs can contain an RSA key pair or an Elliptic Curve (ECC) key pair. The private key in an asymmetric CMK never leaves AWS KMS unencrypted. However, you can use the GetPublicKey operation to download the public key so it can be used outside of AWS KMS. CMKs with RSA key pairs can be used to encrypt or decrypt data or sign and verify messages (but not both). CMKs with ECC key pairs can be used only to sign and verify messages.
For information about symmetric and asymmetric CMKs, see Using Symmetric and Asymmetric CMKs in the AWS Key Management Service Developer Guide.
To create different types of CMKs, use the following guidance:
To create an asymmetric CMK, use the CustomerMasterKeySpec
parameter to specify the type of key material in the CMK. Then, use the KeyUsage
parameter to determine whether the CMK will be used to encrypt and decrypt or sign and verify. You can't change these properties after the CMK is created.
When creating a symmetric CMK, you don't need to specify the CustomerMasterKeySpec
or KeyUsage
parameters. The default value for CustomerMasterKeySpec
, SYMMETRIC_DEFAULT
, and the default value for KeyUsage
, ENCRYPT_DECRYPT
, are the only valid values for symmetric CMKs.
To import your own key material, begin by creating a symmetric CMK with no key material. To do this, use the Origin
parameter of CreateKey
with a value of EXTERNAL
. Next, use the GetParametersForImport operation to get a public key and import token, and use the public key to encrypt your key material. Then, use ImportKeyMaterial with your import token to import the key material. For step-by-step instructions, see Importing Key Material in the AWS Key Management Service Developer Guide. You cannot import the key material into an asymmetric CMK.
To create a symmetric CMK in a custom key store, use the CustomKeyStoreId
parameter to specify the custom key store. You must also use the Origin
parameter with a value of AWS_CLOUDHSM
. The AWS CloudHSM cluster that is associated with the custom key store must have at least two active HSMs in different Availability Zones in the AWS Region.
You cannot create an asymmetric CMK in a custom key store. For information about custom key stores in AWS KMS, see Using Custom Key Stores in the AWS Key Management Service Developer Guide.
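To illustrate the guidance above, a sketch creating one symmetric and one asymmetric CMK (reuses `svc`/`ctx`; the `CustomerMasterKeySpec` and `KeyUsageType` constant names are assumed from the SDK's enum naming convention):

```go
// Default request: a symmetric CMK. CustomerMasterKeySpec SYMMETRIC_DEFAULT
// and KeyUsage ENCRYPT_DECRYPT are implied and need not be set.
sym, err := svc.CreateKeyRequest(&kms.CreateKeyInput{
	Description: aws.String("example symmetric CMK"),
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println("symmetric CMK:", *sym.KeyMetadata.KeyId)

// Asymmetric signing CMK: the key spec and key usage must both be chosen at
// creation time and cannot be changed afterwards.
asym, err := svc.CreateKeyRequest(&kms.CreateKeyInput{
	CustomerMasterKeySpec: kms.CustomerMasterKeySpecEccNistP256,
	KeyUsage:              kms.KeyUsageTypeSignVerify,
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Println("asymmetric CMK:", *asym.KeyMetadata.KeyId)
```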
", "Decrypt": "Decrypts ciphertext that was encrypted by an AWS KMS customer master key (CMK) using any of the following operations:
You can use this operation to decrypt ciphertext that was encrypted under a symmetric or asymmetric CMK. When the CMK is asymmetric, you must specify the CMK and the encryption algorithm that was used to encrypt the ciphertext. For information about symmetric and asymmetric CMKs, see Using Symmetric and Asymmetric CMKs in the AWS Key Management Service Developer Guide.
The Decrypt operation also decrypts ciphertext that was encrypted outside of AWS KMS by the public key in an AWS KMS asymmetric CMK. However, it cannot decrypt ciphertext produced by other libraries, such as the AWS Encryption SDK or Amazon S3 client-side encryption. These libraries return a ciphertext format that is incompatible with AWS KMS.
If the ciphertext was encrypted under a symmetric CMK, you do not need to specify the CMK or the encryption algorithm. AWS KMS can get this information from metadata that it adds to the symmetric ciphertext blob. However, if you prefer, you can specify the KeyId
to ensure that a particular CMK is used to decrypt the ciphertext. If you specify a different CMK than the one used to encrypt the ciphertext, the Decrypt
operation fails.
Whenever possible, use key policies to give users permission to call the Decrypt operation on a particular CMK, instead of using IAM policies. Otherwise, you might create an IAM user policy that gives the user Decrypt permission on all CMKs. This user could decrypt ciphertext that was encrypted by CMKs in other accounts if the key policy for the cross-account CMK permits it. If you must use an IAM policy for Decrypt
permissions, limit the user to particular CMKs or particular trusted accounts.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
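A minimal sketch of the symmetric-CMK case described above, reusing `svc`/`ctx`; `ciphertext` is assumed to hold a blob produced earlier by Encrypt or GenerateDataKey:

```go
// For ciphertext produced under a symmetric CMK, no KeyId or algorithm is
// required: KMS reads them from metadata in the ciphertext blob. Passing
// KeyId anyway pins decryption to one CMK and fails on any mismatch.
out, err := svc.DecryptRequest(&kms.DecryptInput{
	CiphertextBlob: ciphertext,
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("recovered %d plaintext bytes\n", len(out.Plaintext))
```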
", "DeleteAlias": "Deletes the specified alias. You cannot perform this operation on an alias in a different AWS account.
Because an alias is not a property of a CMK, you can delete and change the aliases of a CMK without affecting the CMK. Also, aliases do not appear in the response from the DescribeKey operation. To get the aliases of all CMKs, use the ListAliases operation.
Each CMK can have multiple aliases. To change the alias of a CMK, use DeleteAlias to delete the current alias and CreateAlias to create a new alias. To associate an existing alias with a different customer master key (CMK), call UpdateAlias.
", "DeleteCustomKeyStore": "Deletes a custom key store. This operation does not delete the AWS CloudHSM cluster that is associated with the custom key store, or affect any users or keys in the cluster.
The custom key store that you delete cannot contain any AWS KMS customer master keys (CMKs). Before deleting the key store, verify that you will never need to use any of the CMKs in the key store for any cryptographic operations. Then, use ScheduleKeyDeletion to delete the AWS KMS customer master keys (CMKs) from the key store. When the scheduled waiting period expires, the ScheduleKeyDeletion
operation deletes the CMKs. Then it makes a best effort to delete the key material from the associated cluster. However, you might need to manually delete the orphaned key material from the cluster and its backups.
After all CMKs are deleted from AWS KMS, use DisconnectCustomKeyStore to disconnect the key store from AWS KMS. Then, you can delete the custom key store.
Instead of deleting the custom key store, consider using DisconnectCustomKeyStore to disconnect it from AWS KMS. While the key store is disconnected, you cannot create or use the CMKs in the key store. But, you do not need to delete CMKs and you can reconnect a disconnected custom key store at any time.
If the operation succeeds, it returns a JSON object with no properties.
This operation is part of the Custom Key Store feature in AWS KMS, which combines the convenience and extensive integration of AWS KMS with the isolation and control of a single-tenant key store.
", @@ -20,17 +20,17 @@ "EnableKey": "Sets the key state of a customer master key (CMK) to enabled. This allows you to use the CMK for cryptographic operations. You cannot perform this operation on a CMK in a different AWS account.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
", "EnableKeyRotation": "Enables automatic rotation of the key material for the specified symmetric customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.
You cannot enable automatic rotation of asymmetric CMKs, CMKs with imported key material, or CMKs in a custom key store.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
", "Encrypt": "Encrypts plaintext into ciphertext by using a customer master key (CMK). The Encrypt
operation has two primary use cases:
You can encrypt small amounts of arbitrary data, such as a personal identifier or database password, or other sensitive information.
You can use the Encrypt
operation to move encrypted data from one AWS region to another. In the first region, generate a data key and use the plaintext key to encrypt the data. Then, in the new region, call the Encrypt
method on the same plaintext data key. Now, you can safely move the encrypted data and encrypted data key to the new region, and decrypt in the new region when necessary.
You don't need to use the Encrypt
operation to encrypt a data key. The GenerateDataKey and GenerateDataKeyPair operations return a plaintext data key and an encrypted copy of that data key.
When you encrypt data, you must specify a symmetric or asymmetric CMK to use in the encryption operation. The CMK must have a KeyUsage
value of ENCRYPT_DECRYPT.
To find the KeyUsage
of a CMK, use the DescribeKey operation.
If you use a symmetric CMK, you can use an encryption context to add additional security to your encryption operation. If you specify an EncryptionContext
when encrypting data, you must specify the same encryption context (a case-sensitive exact match) when decrypting the data. Otherwise, the request to decrypt fails with an InvalidCiphertextException
. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
If you specify an asymmetric CMK, you must also specify the encryption algorithm. The algorithm must be compatible with the CMK type.
When you use an asymmetric CMK to encrypt or reencrypt data, be sure to record the CMK and encryption algorithm that you choose. You will be required to provide the same CMK and encryption algorithm when you decrypt the data. If the CMK and algorithm do not match the values used to encrypt the data, the decrypt operation fails.
You are not required to supply the CMK ID and encryption algorithm when you decrypt with symmetric CMKs because AWS KMS stores this information in the ciphertext blob. AWS KMS cannot store metadata in ciphertext generated with asymmetric keys. The standard format for asymmetric key ciphertext does not include configurable fields.
The maximum size of the data that you can encrypt varies with the type of CMK and the encryption algorithm that you choose.
Symmetric CMKs
SYMMETRIC_DEFAULT: 4096 bytes
RSA_2048
RSAES_OAEP_SHA_1: 214 bytes
RSAES_OAEP_SHA_256: 190 bytes
RSA_3072
RSAES_OAEP_SHA_1: 342 bytes
RSAES_OAEP_SHA_256: 318 bytes
RSA_4096
RSAES_OAEP_SHA_1: 470 bytes
RSAES_OAEP_SHA_256: 446 bytes
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
To perform this operation on a CMK in a different AWS account, specify the key ARN or alias ARN in the value of the KeyId parameter.
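A short sketch of the first use case above (encrypting a small secret under a symmetric CMK), reusing `svc`/`ctx`; the alias is a placeholder:

```go
// Encrypt up to 4096 bytes under a symmetric CMK. The encryption context is
// optional, but Decrypt must then receive the exact same map, or it fails
// with InvalidCiphertextException.
out, err := svc.EncryptRequest(&kms.EncryptInput{
	KeyId:             aws.String("alias/ExampleAlias"),
	Plaintext:         []byte("database-password"),
	EncryptionContext: map[string]string{"Purpose": "config"},
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}
ciphertext := out.CiphertextBlob // safe to store alongside the data
_ = ciphertext
```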
", - "GenerateDataKey": "Generates a unique symmetric data key. This operation returns a plaintext copy of the data key and a copy that is encrypted under a customer master key (CMK) that you specify. You can use the plaintext key to encrypt your data outside of AWS KMS and store the encrypted data key with the encrypted data.
GenerateDataKey
returns a unique data key for each request. The bytes in the key are not related to the caller or CMK that is used to encrypt the data key.
To generate a data key, specify the symmetric CMK that will be used to encrypt the data key. You cannot use an asymmetric CMK to generate data keys.
You must also specify the length of the data key. Use either the KeySpec
or NumberOfBytes
parameters (but not both). For 128-bit and 256-bit data keys, use the KeySpec
parameter.
If the operation succeeds, the plaintext copy of the data key is in the Plaintext
field of the response, and the encrypted copy of the data key in the CiphertextBlob
field.
To get only an encrypted copy of the data key, use GenerateDataKeyWithoutPlaintext. To generate an asymmetric data key pair, use the GenerateDataKeyPair or GenerateDataKeyPairWithoutPlaintext operation. To get a cryptographically secure random byte string, use GenerateRandom.
You can use the optional encryption context to add additional security to the encryption operation. If you specify an EncryptionContext
, you must specify the same encryption context (a case-sensitive exact match) when decrypting the encrypted data key. Otherwise, the request to decrypt fails with an InvalidCiphertextException. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
We recommend that you use the following pattern to encrypt data locally in your application:
Use the GenerateDataKey
operation to get a data encryption key.
Use the plaintext data key (returned in the Plaintext
field of the response) to encrypt data locally, then erase the plaintext data key from memory.
Store the encrypted data key (returned in the CiphertextBlob
field of the response) alongside the locally encrypted data.
To decrypt data locally:
Use the Decrypt operation to decrypt the encrypted data key. The operation returns a plaintext copy of the data key.
Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.
", + "GenerateDataKey": "Generates a unique symmetric data key. This operation returns a plaintext copy of the data key and a copy that is encrypted under a customer master key (CMK) that you specify. You can use the plaintext key to encrypt your data outside of AWS KMS and store the encrypted data key with the encrypted data.
GenerateDataKey
returns a unique data key for each request. The bytes in the key are not related to the caller or CMK that is used to encrypt the data key.
To generate a data key, specify the symmetric CMK that will be used to encrypt the data key. You cannot use an asymmetric CMK to generate data keys. To get the type of your CMK, use the DescribeKey operation.
You must also specify the length of the data key. Use either the KeySpec
or NumberOfBytes
parameters (but not both). For 128-bit and 256-bit data keys, use the KeySpec
parameter.
If the operation succeeds, the plaintext copy of the data key is in the Plaintext
field of the response, and the encrypted copy of the data key in the CiphertextBlob
field.
To get only an encrypted copy of the data key, use GenerateDataKeyWithoutPlaintext. To generate an asymmetric data key pair, use the GenerateDataKeyPair or GenerateDataKeyPairWithoutPlaintext operation. To get a cryptographically secure random byte string, use GenerateRandom.
You can use the optional encryption context to add additional security to the encryption operation. If you specify an EncryptionContext
, you must specify the same encryption context (a case-sensitive exact match) when decrypting the encrypted data key. Otherwise, the request to decrypt fails with an InvalidCiphertextException. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
We recommend that you use the following pattern to encrypt data locally in your application:
Use the GenerateDataKey
operation to get a data encryption key.
Use the plaintext data key (returned in the Plaintext
field of the response) to encrypt data locally, then erase the plaintext data key from memory.
Store the encrypted data key (returned in the CiphertextBlob
field of the response) alongside the locally encrypted data.
To decrypt data locally:
Use the Decrypt operation to decrypt the encrypted data key. The operation returns a plaintext copy of the data key.
Use the plaintext data key to decrypt data locally, then erase the plaintext data key from memory.
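The local-encryption pattern above maps to roughly the following sketch. It reuses `svc`/`ctx` from the first snippet and additionally needs crypto/aes, crypto/cipher, and crypto/rand; AES-GCM is one reasonable choice of local cipher, not something the API mandates:

```go
// 1. Get a data key: a plaintext copy for local use plus an encrypted copy.
dk, err := svc.GenerateDataKeyRequest(&kms.GenerateDataKeyInput{
	KeyId:   aws.String("alias/ExampleAlias"),
	KeySpec: kms.DataKeySpecAes256,
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}

// 2. Encrypt locally with the plaintext key, then erase it from memory.
block, err := aes.NewCipher(dk.Plaintext)
if err != nil {
	log.Fatal(err)
}
gcm, err := cipher.NewGCM(block)
if err != nil {
	log.Fatal(err)
}
nonce := make([]byte, gcm.NonceSize())
if _, err := rand.Read(nonce); err != nil {
	log.Fatal(err)
}
sealed := gcm.Seal(nonce, nonce, []byte("application data"), nil)
for i := range dk.Plaintext {
	dk.Plaintext[i] = 0
}

// 3. Store the encrypted data key (dk.CiphertextBlob) alongside the locally
//    encrypted data; to decrypt later, call Decrypt on that blob first.
_ = sealed
```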
", "GenerateDataKeyPair": "Generates a unique asymmetric data key pair. The GenerateDataKeyPair
operation returns a plaintext public key, a plaintext private key, and a copy of the private key that is encrypted under the symmetric CMK you specify. You can use the data key pair to perform asymmetric cryptography outside of AWS KMS.
GenerateDataKeyPair
returns a unique data key pair for each request. The bytes in the keys are not related to the caller or the CMK that is used to encrypt the private key.
You can use the public key that GenerateDataKeyPair
returns to encrypt data or verify a signature outside of AWS KMS. Then, store the encrypted private key with the data. When you are ready to decrypt data or sign a message, you can use the Decrypt operation to decrypt the encrypted private key.
To generate a data key pair, you must specify a symmetric customer master key (CMK) to encrypt the private key in a data key pair. You cannot use an asymmetric CMK. To get the type of your CMK, use the DescribeKey operation.
If you are using the data key pair to encrypt data, or for any operation where you don't immediately need a private key, consider using the GenerateDataKeyPairWithoutPlaintext operation. GenerateDataKeyPairWithoutPlaintext
returns a plaintext public key and an encrypted private key, but omits the plaintext private key that you need only to decrypt ciphertext or sign a message. Later, when you need to decrypt the data or sign a message, use the Decrypt operation to decrypt the encrypted private key in the data key pair.
You can use the optional encryption context to add additional security to the encryption operation. If you specify an EncryptionContext
, you must specify the same encryption context (a case-sensitive exact match) when decrypting the encrypted data key. Otherwise, the request to decrypt fails with an InvalidCiphertextException. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
", "GenerateDataKeyPairWithoutPlaintext": "Generates a unique asymmetric data key pair. The GenerateDataKeyPairWithoutPlaintext
operation returns a plaintext public key and a copy of the private key that is encrypted under the symmetric CMK you specify. Unlike GenerateDataKeyPair, this operation does not return a plaintext private key.
To generate a data key pair, you must specify a symmetric customer master key (CMK) to encrypt the private key in the data key pair. You cannot use an asymmetric CMK. To get the type of your CMK, use the KeySpec
field in the DescribeKey response.
You can use the public key that GenerateDataKeyPairWithoutPlaintext
returns to encrypt data or verify a signature outside of AWS KMS. Then, store the encrypted private key with the data. When you are ready to decrypt data or sign a message, you can use the Decrypt operation to decrypt the encrypted private key.
GenerateDataKeyPairWithoutPlaintext
returns a unique data key pair for each request. The bytes in the key are not related to the caller or CMK that is used to encrypt the private key.
You can use the optional encryption context to add additional security to the encryption operation. If you specify an EncryptionContext
, you must specify the same encryption context (a case-sensitive exact match) when decrypting the encrypted data key. Otherwise, the request to decrypt fails with an InvalidCiphertextException. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
", - "GenerateDataKeyWithoutPlaintext": "Generates a unique symmetric data key. This operation returns a data key that is encrypted under a customer master key (CMK) that you specify. To request an asymmetric data key pair, use the GenerateDataKeyPair or GenerateDataKeyPairWithoutPlaintext operations.
GenerateDataKeyWithoutPlaintext
is identical to the GenerateDataKey operation except that it returns only the encrypted copy of the data key. This operation is useful for systems that need to encrypt data at some point, but not immediately. When you need to encrypt the data, you call the Decrypt operation on the encrypted copy of the key.
It's also useful in distributed systems with different levels of trust. For example, you might store encrypted data in containers. One component of your system creates new containers and stores an encrypted data key with each container. Then, a different component puts the data into the containers. That component first decrypts the data key, uses the plaintext data key to encrypt data, puts the encrypted data into the container, and then destroys the plaintext data key. In this system, the component that creates the containers never sees the plaintext data key.
GenerateDataKeyWithoutPlaintext
returns a unique data key for each request. The bytes in the keys are not related to the caller or CMK that is used to encrypt the data key.
To generate a data key, you must specify the symmetric customer master key (CMK) that is used to encrypt the data key. You cannot use an asymmetric CMK to generate a data key. To get the type of your CMK, use the KeySpec
field in the DescribeKey response. You must also specify the length of the data key using either the KeySpec
or NumberOfBytes
field (but not both). For common key lengths (128-bit and 256-bit symmetric keys), use the KeySpec
parameter.
If the operation succeeds, you will find the plaintext copy of the data key in the Plaintext
field of the response, and the encrypted copy of the data key in the CiphertextBlob
field.
You can use the optional encryption context to add additional security to the encryption operation. If you specify an EncryptionContext
, you must specify the same encryption context (a case-sensitive exact match) when decrypting the encrypted data key. Otherwise, the request to decrypt fails with an InvalidCiphertextException. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
", + "GenerateDataKeyWithoutPlaintext": "Generates a unique symmetric data key. This operation returns a data key that is encrypted under a customer master key (CMK) that you specify. To request an asymmetric data key pair, use the GenerateDataKeyPair or GenerateDataKeyPairWithoutPlaintext operations.
GenerateDataKeyWithoutPlaintext
is identical to the GenerateDataKey operation except that it returns only the encrypted copy of the data key. This operation is useful for systems that need to encrypt data at some point, but not immediately. When you need to encrypt the data, you call the Decrypt operation on the encrypted copy of the key.
It's also useful in distributed systems with different levels of trust. For example, you might store encrypted data in containers. One component of your system creates new containers and stores an encrypted data key with each container. Then, a different component puts the data into the containers. That component first decrypts the data key, uses the plaintext data key to encrypt data, puts the encrypted data into the container, and then destroys the plaintext data key. In this system, the component that creates the containers never sees the plaintext data key.
GenerateDataKeyWithoutPlaintext
returns a unique data key for each request. The bytes in the keys are not related to the caller or CMK that is used to encrypt the data key.
To generate a data key, you must specify the symmetric customer master key (CMK) that is used to encrypt the data key. You cannot use an asymmetric CMK to generate a data key. To get the type of your CMK, use the DescribeKey operation.
If the operation succeeds, you will find the encrypted copy of the data key in the CiphertextBlob
field.
You can use the optional encryption context to add additional security to the encryption operation. If you specify an EncryptionContext
, you must specify the same encryption context (a case-sensitive exact match) when decrypting the encrypted data key. Otherwise, the request to decrypt fails with an InvalidCiphertextException. For more information, see Encryption Context in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
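A minimal sketch of the container scenario described above, reusing `svc`/`ctx`; the alias is a placeholder:

```go
// Get only the encrypted copy of a new data key; no plaintext is returned,
// so the component creating containers never sees the key material. The
// component that later needs the key calls Decrypt on the blob.
out, err := svc.GenerateDataKeyWithoutPlaintextRequest(&kms.GenerateDataKeyWithoutPlaintextInput{
	KeyId:   aws.String("alias/ExampleAlias"),
	KeySpec: kms.DataKeySpecAes256,
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("stored %d-byte encrypted data key\n", len(out.CiphertextBlob))
```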
", "GenerateRandom": "Returns a random byte string that is cryptographically secure.
By default, the random byte string is generated in AWS KMS. To generate the byte string in the AWS CloudHSM cluster that is associated with a custom key store, specify the custom key store ID.
For more information about entropy and random number generation, see the AWS Key Management Service Cryptographic Details whitepaper.
", "GetKeyPolicy": "Gets a key policy attached to the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account.
", "GetKeyRotationStatus": "Gets a Boolean value that indicates whether automatic rotation of the key material is enabled for the specified customer master key (CMK).
You cannot enable automatic rotation of asymmetric CMKs, CMKs with imported key material, or CMKs in a custom key store. The key rotation status for these CMKs is always false
.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
Disabled: The key rotation status does not change when you disable a CMK. However, while the CMK is disabled, AWS KMS does not rotate the backing key.
Pending deletion: While a CMK is pending deletion, its key rotation status is false
and AWS KMS does not rotate the backing key. If you cancel the deletion, the original key rotation status is restored.
To perform this operation on a CMK in a different AWS account, specify the key ARN in the value of the KeyId
parameter.
", "GetParametersForImport": "Returns the items you need to import key material into a symmetric, customer managed customer master key (CMK). For more information about importing key material into AWS KMS, see Importing Key Material in the AWS Key Management Service Developer Guide.
This operation returns a public key and an import token. Use the public key to encrypt the symmetric key material. Store the import token to send with a subsequent ImportKeyMaterial request.
You must specify the key ID of the symmetric CMK into which you will import key material. This CMK's Origin
must be EXTERNAL
. You must also specify the wrapping algorithm and type of wrapping key (public key) that you will use to encrypt the key material. You cannot perform this operation on an asymmetric CMK or on any CMK in a different AWS account.
To import key material, you must use the public key and import token from the same response. These items are valid for 24 hours. The expiration date and time appear in the GetParametersForImport
response. You cannot use an expired token in an ImportKeyMaterial request. If your key and token expire, send another GetParametersForImport
request.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
", "GetPublicKey": "Returns the public key of an asymmetric CMK. Unlike the private key of a asymmetric CMK, which never leaves AWS KMS unencrypted, callers with kms:GetPublicKey
permission can download the public key of an asymmetric CMK. You can share the public key to allow others to encrypt messages and verify signatures outside of AWS KMS. For information about symmetric and asymmetric CMKs, see Using Symmetric and Asymmetric CMKs in the AWS Key Management Service Developer Guide.
You do not need to download the public key. Instead, you can use the public key within AWS KMS by calling the Encrypt, ReEncrypt, or Verify operations with the identifier of an asymmetric CMK. When you use the public key within AWS KMS, you benefit from the authentication, authorization, and logging that are part of every AWS KMS operation. You also reduce the risk of encrypting data that cannot be decrypted. These features are not effective outside of AWS KMS. For details, see Special Considerations for Downloading Public Keys.
To help you use the public key safely outside of AWS KMS, GetPublicKey
returns important information about the public key in the response, including:
CustomerMasterKeySpec: The type of key material in the public key, such as RSA_4096
or ECC_NIST_P521
.
KeyUsage: Whether the key is used for encryption or signing.
EncryptionAlgorithms or SigningAlgorithms: A list of the encryption algorithms or the signing algorithms for the key.
Although AWS KMS cannot enforce these restrictions on external operations, it is crucial that you use this information to prevent the public key from being used improperly. For example, you can prevent a public signing key from being used to encrypt data, or prevent a public key from being used with an encryption algorithm that is not supported by AWS KMS. You can also avoid errors, such as using the wrong signing algorithm in a verification operation.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
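A sketch of downloading and parsing the public key, reusing `svc`/`ctx`; it additionally needs the crypto/x509 import, and the key ID is a placeholder:

```go
// Download the public half of an asymmetric CMK. The response also reports
// the key spec, usage, and permitted algorithms, which callers should honor
// when using the key outside of AWS KMS.
out, err := svc.GetPublicKeyRequest(&kms.GetPublicKeyInput{
	KeyId: aws.String("1234abcd-12ab-34cd-56ef-1234567890ab"),
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}
// The key is returned as DER-encoded SubjectPublicKeyInfo.
pub, err := x509.ParsePKIXPublicKey(out.PublicKey)
if err != nil {
	log.Fatal(err)
}
fmt.Printf("spec=%v usage=%v key=%T\n", out.CustomerMasterKeySpec, out.KeyUsage, pub)
```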
", "ImportKeyMaterial": "Imports key material into an existing symmetric AWS KMS customer master key (CMK) that was created without key material. After you successfully import key material into a CMK, you can reimport the same key material into that CMK, but you cannot import different key material.
You cannot perform this operation on an asymmetric CMK or on any CMK in a different AWS account. For more information about creating CMKs with no key material and then importing key material, see Importing Key Material in the AWS Key Management Service Developer Guide.
Before using this operation, call GetParametersForImport. Its response includes a public key and an import token. Use the public key to encrypt the key material. Then, submit the import token from the same GetParametersForImport
response.
When calling this operation, you must specify the following values:
The key ID or key ARN of a CMK with no key material. Its Origin
must be EXTERNAL
.
(To create a CMK with no key material, call CreateKey and set the value of its Origin
parameter to EXTERNAL
. To get the Origin
of a CMK, call DescribeKey.)
The encrypted key material. To get the public key to encrypt the key material, call GetParametersForImport.
The import token that GetParametersForImport returned. You must use a public key and token from the same GetParametersForImport
response.
Whether the key material expires and if so, when. If you set an expiration date, AWS KMS deletes the key material from the CMK on the specified date, and the CMK becomes unusable. To use the CMK again, you must reimport the same key material. The only way to change an expiration date is by reimporting the same key material and specifying a new expiration date.
When this operation is successful, the key state of the CMK changes from PendingImport
to Enabled
, and you can use the CMK.
If this operation fails, use the exception to help determine the problem. If the error is related to the key material, the import token, or wrapping key, use GetParametersForImport to get a new public key and import token for the CMK and repeat the import procedure. For help, see How To Import Key Material in the AWS Key Management Service Developer Guide.
The CMK that you use for this operation must be in a compatible key state. For details, see How Key State Affects Use of a Customer Master Key in the AWS Key Management Service Developer Guide.
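The three values above come together in roughly the following flow. This is a sketch under assumptions: it reuses `svc`/`ctx`, assumes `keyMaterial` is your 32-byte secret, needs crypto/rsa, crypto/rand, crypto/sha256, and crypto/x509 imports, and the wrapping-related enum constant names are inferred from the SDK's enum naming convention:

```go
keyID := aws.String("1234abcd-12ab-34cd-56ef-1234567890ab") // Origin must be EXTERNAL

// 1. Ask KMS for a wrapping public key and import token (valid for 24 hours).
params, err := svc.GetParametersForImportRequest(&kms.GetParametersForImportInput{
	KeyId:             keyID,
	WrappingAlgorithm: kms.AlgorithmSpecRsaesOaepSha256,
	WrappingKeySpec:   kms.WrappingKeySpecRsa2048,
}).Send(ctx)
if err != nil {
	log.Fatal(err)
}

// 2. Encrypt the key material with the returned public key, using RSA-OAEP
//    to match the wrapping algorithm requested above.
pubAny, err := x509.ParsePKIXPublicKey(params.PublicKey)
if err != nil {
	log.Fatal(err)
}
wrapped, err := rsa.EncryptOAEP(sha256.New(), rand.Reader, pubAny.(*rsa.PublicKey), keyMaterial, nil)
if err != nil {
	log.Fatal(err)
}

// 3. Import, using the token from the same GetParametersForImport response.
if _, err := svc.ImportKeyMaterialRequest(&kms.ImportKeyMaterialInput{
	KeyId:                keyID,
	ImportToken:          params.ImportToken,
	EncryptedKeyMaterial: wrapped,
	ExpirationModel:      kms.ExpirationModelTypeKeyMaterialDoesNotExpire,
}).Send(ctx); err != nil {
	log.Fatal(err)
}
```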
", - "ListAliases": "Gets a list of aliases in the caller's AWS account and region. You cannot list aliases in other accounts. For more information about aliases, see CreateAlias.
By default, the ListAliases command returns all aliases in the account and region. To get only the aliases that point to a particular customer master key (CMK), use the KeyId
parameter.
The ListAliases
response can include aliases that you created and associated with your customer managed CMKs, and aliases that AWS created and associated with AWS managed CMKs in your account. You can recognize AWS aliases because their names have the format aws/<service-name>
, such as aws/dynamodb
.
The response might also include aliases that have no TargetKeyId
field. These are predefined aliases that AWS has created but has not yet associated with a CMK. Aliases that AWS creates in your account, including predefined aliases, do not count against your AWS KMS aliases limit.
", + "ListAliases": "Gets a list of aliases in the caller's AWS account and region. You cannot list aliases in other accounts. For more information about aliases, see CreateAlias.
By default, the ListAliases command returns all aliases in the account and region. To get only the aliases that point to a particular customer master key (CMK), use the KeyId
parameter.
The ListAliases
response can include aliases that you created and associated with your customer managed CMKs, and aliases that AWS created and associated with AWS managed CMKs in your account. You can recognize AWS aliases because their names have the format aws/<service-name>
, such as aws/dynamodb
.
The response might also include aliases that have no TargetKeyId
field. These are predefined aliases that AWS has created but has not yet associated with a CMK. Aliases that AWS creates in your account, including predefined aliases, do not count against your AWS KMS aliases quota.
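A sketch of paging through the listing described above, reusing `svc`/`ctx`; the Marker/Truncated loop is one straightforward way to consume the paginated response:

```go
// Page through aliases via Truncated/NextMarker. Set the KeyId field on the
// input to restrict the listing to the aliases of one CMK.
input := &kms.ListAliasesInput{Limit: aws.Int64(100)}
for {
	out, err := svc.ListAliasesRequest(input).Send(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, a := range out.Aliases {
		// AWS-managed aliases look like alias/aws/<service-name>; entries
		// with no TargetKeyId are predefined and not yet associated.
		fmt.Println(*a.AliasName)
	}
	if out.Truncated == nil || !*out.Truncated {
		break
	}
	input.Marker = out.NextMarker
}
```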
", "ListGrants": "Gets a list of all grants for the specified customer master key (CMK).
To perform this operation on a CMK in a different AWS account, specify the key ARN in the value of the KeyId
parameter.
", "ListKeyPolicies": "Gets the names of the key policies that are attached to a customer master key (CMK). This operation is designed to get policy names that you can use in a GetKeyPolicy operation. However, the only valid policy name is default
. You cannot perform this operation on a CMK in a different AWS account.
", "ListKeys": "Gets a list of all customer master keys (CMKs) in the caller's AWS account and Region.
", @@ -187,13 +187,13 @@ "ConnectionErrorCodeType": { "base": null, "refs": { - "CustomKeyStoresListEntry$ConnectionErrorCode": "Describes the connection error. Valid values are:
CLUSTER_NOT_FOUND
- AWS KMS cannot find the AWS CloudHSM cluster with the specified cluster ID.
INSUFFICIENT_CLOUDHSM_HSMS
- The associated AWS CloudHSM cluster does not contain any active HSMs. To connect a custom key store to its AWS CloudHSM cluster, the cluster must contain at least one active HSM.
INTERNAL_ERROR
- AWS KMS could not complete the request due to an internal error. Retry the request. For ConnectCustomKeyStore
requests, disconnect the custom key store before trying to connect again.
INVALID_CREDENTIALS
- AWS KMS does not have the correct password for the kmsuser
crypto user in the AWS CloudHSM cluster.
NETWORK_ERRORS
- Network errors are preventing AWS KMS from connecting to the custom key store.
USER_LOCKED_OUT
- The kmsuser
CU account is locked out of the associated AWS CloudHSM cluster due to too many failed password attempts. Before you can connect your custom key store to its AWS CloudHSM cluster, you must change the kmsuser
account password and update the password value for the custom key store.
For help with connection failures, see Troubleshooting Custom Key Stores in the AWS Key Management Service Developer Guide.
" + "CustomKeyStoresListEntry$ConnectionErrorCode": "Describes the connection error. This field appears in the response only when the ConnectionState
is FAILED
. For help resolving these errors, see How to Fix a Connection Failure in the AWS Key Management Service Developer Guide.
Valid values are:
CLUSTER_NOT_FOUND
- AWS KMS cannot find the AWS CloudHSM cluster with the specified cluster ID.
INSUFFICIENT_CLOUDHSM_HSMS
- The associated AWS CloudHSM cluster does not contain any active HSMs. To connect a custom key store to its AWS CloudHSM cluster, the cluster must contain at least one active HSM.
INTERNAL_ERROR
- AWS KMS could not complete the request due to an internal error. Retry the request. For ConnectCustomKeyStore
requests, disconnect the custom key store before trying to connect again.
INVALID_CREDENTIALS
- AWS KMS does not have the correct password for the kmsuser
crypto user in the AWS CloudHSM cluster. Before you can connect your custom key store to its AWS CloudHSM cluster, you must change the kmsuser
account password and update the key store password value for the custom key store.
NETWORK_ERRORS
- Network errors are preventing AWS KMS from connecting to the custom key store.
USER_LOCKED_OUT
- The kmsuser
CU account is locked out of the associated AWS CloudHSM cluster due to too many failed password attempts. Before you can connect your custom key store to its AWS CloudHSM cluster, you must change the kmsuser
account password and update the key store password value for the custom key store.
USER_LOGGED_IN
- The kmsuser
CU account is logged into the associated AWS CloudHSM cluster. This prevents AWS KMS from rotating the kmsuser
account password and logging into the cluster. Before you can connect your custom key store to its AWS CloudHSM cluster, you must log the kmsuser
CU out of the cluster. If you changed the kmsuser
password to log into the cluster, you must also update the key store password value for the custom key store. For help, see How to Log Out and Reconnect in the AWS Key Management Service Developer Guide.
USER_NOT_FOUND
- AWS KMS cannot find a kmsuser
CU account in the associated AWS CloudHSM cluster. Before you can connect your custom key store to its AWS CloudHSM cluster, you must create a kmsuser
CU account in the cluster, and then update the key store password value for the custom key store.
" } }, "ConnectionStateType": { "base": null, "refs": { - "CustomKeyStoresListEntry$ConnectionState": "Indicates whether the custom key store is connected to its AWS CloudHSM cluster.
You can create and use CMKs in your custom key stores only when its connection state is CONNECTED
.
The value is DISCONNECTED
if the key store has never been connected or you use the DisconnectCustomKeyStore operation to disconnect it. If the value is CONNECTED
but you are having trouble using the custom key store, make sure that its associated AWS CloudHSM cluster is active and contains at least one active HSM.
A value of FAILED
indicates that an attempt to connect was unsuccessful. For help resolving a connection failure, see Troubleshooting a Custom Key Store in the AWS Key Management Service Developer Guide.
", + "CustomKeyStoresListEntry$ConnectionState": "Indicates whether the custom key store is connected to its AWS CloudHSM cluster.
You can create and use CMKs in your custom key stores only when its connection state is CONNECTED
.
The value is DISCONNECTED
if the key store has never been connected or you use the DisconnectCustomKeyStore operation to disconnect it. If the value is CONNECTED
but you are having trouble using the custom key store, make sure that its associated AWS CloudHSM cluster is active and contains at least one active HSM.
A value of FAILED
indicates that an attempt to connect was unsuccessful. The ConnectionErrorCode
field in the response indicates the cause of the failure. For help resolving a connection failure, see Troubleshooting a Custom Key Store in the AWS Key Management Service Developer Guide.
" } }, "CustomerMasterKeySpec": { "base": null, "refs": { - "CreateKeyRequest$CustomerMasterKeySpec": "Specifies the type of CMK to create. The CustomerMasterKeySpec
determines whether the CMK contains a symmetric key or an asymmetric key pair. It also determines the encryption algorithms or signing algorithms that the CMK supports. You can't change the CustomerMasterKeySpec
after the CMK is created. To further restrict the algorithms that can be used with the CMK, use its key policy or IAM policy.
For help with choosing a key spec for your CMK, see Selecting a Customer Master Key Spec in the AWS Key Management Service Developer Guide.
The default value, SYMMETRIC_DEFAULT
, creates a CMK with a 256-bit symmetric key.
AWS KMS supports the following key specs for CMKs:
Symmetric key (default)
SYMMETRIC_DEFAULT
(AES-256-GCM)
Asymmetric RSA key pairs
RSA_2048
RSA_3072
RSA_4096
Asymmetric NIST-recommended elliptic curve key pairs
ECC_NIST_P256
(secp256r1)
ECC_NIST_P384
(secp384r1)
ECC_NIST_P521
(secp521r1)
Other asymmetric elliptic curve key pairs
ECC_SECG_P256K1
(secp256k1), commonly used for cryptocurrencies.
Specifies the type of CMK to create. The default value, SYMMETRIC_DEFAULT
, creates a CMK with a 256-bit symmetric key for encryption and decryption. For help choosing a key spec for your CMK, see How to Choose Your CMK Configuration in the AWS Key Management Service Developer Guide.
The CustomerMasterKeySpec
determines whether the CMK contains a symmetric key or an asymmetric key pair. It also determines the encryption algorithms or signing algorithms that the CMK supports. You can't change the CustomerMasterKeySpec
after the CMK is created. To further restrict the algorithms that can be used with the CMK, use a condition key in its key policy or IAM policy. For more information, see kms:EncryptionAlgorithm or kms:Signing Algorithm in the AWS Key Management Service Developer Guide.
AWS services that are integrated with AWS KMS use symmetric CMKs to protect your data. These services do not support asymmetric CMKs. For help determining whether a CMK is symmetric or asymmetric, see Identifying Symmetric and Asymmetric CMKs in the AWS Key Management Service Developer Guide.
AWS KMS supports the following key specs for CMKs:
Symmetric key (default)
SYMMETRIC_DEFAULT
(AES-256-GCM)
Asymmetric RSA key pairs
RSA_2048
RSA_3072
RSA_4096
Asymmetric NIST-recommended elliptic curve key pairs
ECC_NIST_P256
(secp256r1)
ECC_NIST_P384
(secp384r1)
ECC_NIST_P521
(secp521r1)
Other asymmetric elliptic curve key pairs
ECC_SECG_P256K1
(secp256k1), commonly used for cryptocurrencies.
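A minimal sketch of creating a CMK with one of the key specs listed above, assuming this SDK release's request/`Send` pattern and the generated enum names `CustomerMasterKeySpecEccNistP256` and `KeyUsageTypeSignVerify`; omitting `CustomerMasterKeySpec` would fall back to the SYMMETRIC_DEFAULT behavior described above.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := kms.New(cfg)

	// Create an asymmetric signing CMK on the NIST P-256 curve. The key spec
	// cannot be changed after the CMK is created.
	req := svc.CreateKeyRequest(&kms.CreateKeyInput{
		CustomerMasterKeySpec: kms.CustomerMasterKeySpecEccNistP256,
		KeyUsage:              kms.KeyUsageTypeSignVerify,
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created CMK:", *resp.KeyMetadata.KeyId)
}
```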
The type of the public key that was downloaded.
", "KeyMetadata$CustomerMasterKeySpec": "Describes the type of key material in the CMK.
" } @@ -793,8 +793,8 @@ "EncryptResponse$KeyId": "The ID of the key used during encryption.
", "GenerateDataKeyPairRequest$KeyId": "Specifies the symmetric CMK that encrypts the private key in the data key pair. You cannot specify an asymmetric CMKs.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "GenerateDataKeyPairResponse$KeyId": "The identifier of the CMK that encrypted the private key.
", - "GenerateDataKeyPairWithoutPlaintextRequest$KeyId": "Specifies the CMK that encrypts the private key in the data key pair. You must specify a symmetric CMK. You cannot use an asymmetric CMK.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", - "GenerateDataKeyPairWithoutPlaintextResponse$KeyId": "Specifies the CMK that encrypted the private key in the data key pair. You must specify a symmetric CMK. You cannot use an asymmetric CMK.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "GenerateDataKeyPairWithoutPlaintextRequest$KeyId": "Specifies the CMK that encrypts the private key in the data key pair. You must specify a symmetric CMK. You cannot use an asymmetric CMK. To get the type of your CMK, use the DescribeKey operation.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", + "GenerateDataKeyPairWithoutPlaintextResponse$KeyId": "Specifies the CMK that encrypted the private key in the data key pair. You must specify a symmetric CMK. You cannot use an asymmetric CMK. To get the type of your CMK, use the DescribeKey operation.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", "GenerateDataKeyRequest$KeyId": "Identifies the symmetric CMK that encrypts the data key.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
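A minimal sketch showing one of the identifier forms above in a GenerateDataKey call, assuming this SDK release's request/`Send` pattern and the generated enum name `DataKeySpecAes256`; the alias is the documentation's example alias.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := kms.New(cfg)

	// Any of the four identifier forms above works as KeyId; an alias name
	// must carry the "alias/" prefix.
	req := svc.GenerateDataKeyRequest(&kms.GenerateDataKeyInput{
		KeyId:   aws.String("alias/ExampleAlias"),
		KeySpec: kms.DataKeySpecAes256,
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// Plaintext holds the data key; CiphertextBlob is the encrypted copy to store.
	fmt.Printf("data key: %d bytes, encrypted: %d bytes\n",
		len(resp.Plaintext), len(resp.CiphertextBlob))
}
```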
", "GenerateDataKeyResponse$KeyId": "The identifier of the CMK that encrypted the data key.
", "GenerateDataKeyWithoutPlaintextRequest$KeyId": "The identifier of the symmetric customer master key (CMK) that encrypts the data key.
To specify a CMK, use its key ID, Amazon Resource Name (ARN), alias name, or alias ARN. When using an alias name, prefix it with \"alias/\"
. To specify a CMK in a different AWS account, you must use the key ARN or alias ARN.
For example:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-east-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias ARN: arn:aws:kms:us-east-2:111122223333:alias/ExampleAlias
To get the key ID and key ARN for a CMK, use ListKeys or DescribeKey. To get the alias name and alias ARN, use ListAliases.
", @@ -866,7 +866,7 @@ "KeyStorePasswordType": { "base": null, "refs": { - "CreateCustomKeyStoreRequest$KeyStorePassword": "Enter the password of the kmsuser
crypto user (CU) account in the specified AWS CloudHSM cluster. AWS KMS logs into the cluster as this user to manage key material on your behalf.
This parameter tells AWS KMS the kmsuser
account password; it does not change the password in the AWS CloudHSM cluster.
Enter the password of the kmsuser
crypto user (CU) account in the specified AWS CloudHSM cluster. AWS KMS logs into the cluster as this user to manage key material on your behalf.
The password must be a string of 7 to 32 characters. Its value is case sensitive.
This parameter tells AWS KMS the kmsuser
account password; it does not change the password in the AWS CloudHSM cluster.
Enter the current password of the kmsuser
crypto user (CU) in the AWS CloudHSM cluster that is associated with the custom key store.
This parameter tells AWS KMS the current password of the kmsuser
crypto user (CU). It does not set or change the password of any users in the AWS CloudHSM cluster.
The request was rejected because a limit was exceeded. For more information, see Limits in the AWS Key Management Service Developer Guide.
", + "base": "The request was rejected because a quota was exceeded. For more information, see Quotas in the AWS Key Management Service Developer Guide.
", "refs": { } }, @@ -982,7 +982,7 @@ "base": null, "refs": { "SignRequest$MessageType": "Tells AWS KMS whether the value of the Message
parameter is a message or message digest. To indicate a message, enter RAW
. To indicate a message digest, enter DIGEST
.
Tells AWS KMS whether the value of the Message
parameter is a message or message digest. To indicate a message, enter RAW
. To indicate a message digest, enter DIGEST
.
Tells AWS KMS whether the value of the Message
parameter is a message or message digest. The default value, RAW, indicates a message. To indicate a message digest, enter DIGEST
.
Use the DIGEST
value only when the value of the Message
parameter is a message digest. If you use the DIGEST
value with a raw message, the security of the verification operation can be compromised.
The random byte string. When you use the HTTP API or the AWS CLI, the value is Base64-encoded. Otherwise, it is not Base64-encoded.
", "GetParametersForImportResponse$PublicKey": "The public key to use to encrypt the key material before importing it with ImportKeyMaterial.
", "SignRequest$Message": "Specifies the message or message digest to sign. Messages can be 0-4096 bytes. To sign a larger message, provide the message digest.
If you provide a message, AWS KMS generates a hash digest of the message and then signs it.
", - "VerifyRequest$Message": "Specifies the message that was signed, or a hash digest of that message. Messages can be 0-4096 bytes. To verify a larger message, provide a hash digest of the message.
If the digest of the message specified here is different from the message digest that was signed, the signature verification fails.
" + "VerifyRequest$Message": "Specifies the message that was signed. You can submit a raw message of up to 4096 bytes, or a hash digest of the message. If you submit a digest, use the MessageType
parameter with a value of DIGEST
.
If the message specified here is different from the message that was signed, the signature verification fails. A message and its hash digest are considered to be the same message.
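A minimal sketch of the digest workflow described above: hash a large message locally, sign the digest, then verify with the same digest. It assumes this SDK release's request/`Send` pattern and the generated names `MessageTypeDigest` and `SigningAlgorithmSpecEcdsaSha256`; the signing key alias is hypothetical.

```go
package main

import (
	"context"
	"crypto/sha256"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/kms"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := kms.New(cfg)
	keyID := aws.String("alias/ExampleSigningKey") // hypothetical asymmetric CMK

	// Hash the message locally and pass the digest with MessageType DIGEST.
	digest := sha256.Sum256([]byte("a message that may be far larger than 4096 bytes"))

	signReq := svc.SignRequest(&kms.SignInput{
		KeyId:            keyID,
		Message:          digest[:],
		MessageType:      kms.MessageTypeDigest,
		SigningAlgorithm: kms.SigningAlgorithmSpecEcdsaSha256,
	})
	signResp, err := signReq.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}

	// Verify with the same digest and MessageType. Passing a raw message with
	// MessageType DIGEST would compromise verification, as noted above.
	verifyReq := svc.VerifyRequest(&kms.VerifyInput{
		KeyId:            keyID,
		Message:          digest[:],
		MessageType:      kms.MessageTypeDigest,
		Signature:        signResp.Signature,
		SigningAlgorithm: kms.SigningAlgorithmSpecEcdsaSha256,
	})
	verifyResp, err := verifyReq.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("signature valid:", *verifyResp.SignatureValid)
}
```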
" } }, "PolicyNameList": { @@ -1041,9 +1041,9 @@ "PolicyType": { "base": null, "refs": { - "CreateKeyRequest$Policy": "The key policy to attach to the CMK.
If you provide a key policy, it must meet the following criteria:
If you don't set BypassPolicyLockoutSafetyCheck
to true, the key policy must allow the principal that is making the CreateKey
request to make a subsequent PutKeyPolicy request on the CMK. This reduces the risk that the CMK becomes unmanageable. For more information, refer to the scenario in the Default Key Policy section of the AWS Key Management Service Developer Guide .
Each statement in the key policy must contain one or more principals. The principals in the key policy must exist and be visible to AWS KMS. When you create a new AWS principal (for example, an IAM user or role), you might need to enforce a delay before including the new principal in a key policy because the new principal might not be immediately visible to AWS KMS. For more information, see Changes that I make are not always immediately visible in the AWS Identity and Access Management User Guide.
If you do not provide a key policy, AWS KMS attaches a default key policy to the CMK. For more information, see Default Key Policy in the AWS Key Management Service Developer Guide.
The key policy size limit is 32 kilobytes (32768 bytes).
", + "CreateKeyRequest$Policy": "The key policy to attach to the CMK.
If you provide a key policy, it must meet the following criteria:
If you don't set BypassPolicyLockoutSafetyCheck
to true, the key policy must allow the principal that is making the CreateKey
request to make a subsequent PutKeyPolicy request on the CMK. This reduces the risk that the CMK becomes unmanageable. For more information, refer to the scenario in the Default Key Policy section of the AWS Key Management Service Developer Guide .
Each statement in the key policy must contain one or more principals. The principals in the key policy must exist and be visible to AWS KMS. When you create a new AWS principal (for example, an IAM user or role), you might need to enforce a delay before including the new principal in a key policy because the new principal might not be immediately visible to AWS KMS. For more information, see Changes that I make are not always immediately visible in the AWS Identity and Access Management User Guide.
If you do not provide a key policy, AWS KMS attaches a default key policy to the CMK. For more information, see Default Key Policy in the AWS Key Management Service Developer Guide.
The key policy size quota is 32 kilobytes (32768 bytes).
", "GetKeyPolicyResponse$Policy": "A key policy document in JSON format.
", - "PutKeyPolicyRequest$Policy": "The key policy to attach to the CMK.
The key policy must meet the following criteria:
If you don't set BypassPolicyLockoutSafetyCheck
to true, the key policy must allow the principal that is making the PutKeyPolicy
request to make a subsequent PutKeyPolicy
request on the CMK. This reduces the risk that the CMK becomes unmanageable. For more information, refer to the scenario in the Default Key Policy section of the AWS Key Management Service Developer Guide.
Each statement in the key policy must contain one or more principals. The principals in the key policy must exist and be visible to AWS KMS. When you create a new AWS principal (for example, an IAM user or role), you might need to enforce a delay before including the new principal in a key policy because the new principal might not be immediately visible to AWS KMS. For more information, see Changes that I make are not always immediately visible in the AWS Identity and Access Management User Guide.
The key policy size limit is 32 kilobytes (32768 bytes).
" + "PutKeyPolicyRequest$Policy": "The key policy to attach to the CMK.
The key policy must meet the following criteria:
If you don't set BypassPolicyLockoutSafetyCheck
to true, the key policy must allow the principal that is making the PutKeyPolicy
request to make a subsequent PutKeyPolicy
request on the CMK. This reduces the risk that the CMK becomes unmanageable. For more information, refer to the scenario in the Default Key Policy section of the AWS Key Management Service Developer Guide.
Each statement in the key policy must contain one or more principals. The principals in the key policy must exist and be visible to AWS KMS. When you create a new AWS principal (for example, an IAM user or role), you might need to enforce a delay before including the new principal in a key policy because the new principal might not be immediately visible to AWS KMS. For more information, see Changes that I make are not always immediately visible in the AWS Identity and Access Management User Guide.
The key policy cannot exceed 32 kilobytes (32768 bytes). For more information, see Resource Quotas in the AWS Key Management Service Developer Guide.
" } }, "PrincipalIdType": { diff --git a/models/apis/lambda/2015-03-31/api-2.json b/models/apis/lambda/2015-03-31/api-2.json index 8b8513dd629..fea3f6f3c63 100644 --- a/models/apis/lambda/2015-03-31/api-2.json +++ b/models/apis/lambda/2015-03-31/api-2.json @@ -721,7 +721,8 @@ {"shape":"ServiceException"}, {"shape":"ResourceNotFoundException"}, {"shape":"InvalidParameterValueException"}, - {"shape":"TooManyRequestsException"} + {"shape":"TooManyRequestsException"}, + {"shape":"ResourceConflictException"} ] }, "UntagResource":{ @@ -736,7 +737,8 @@ {"shape":"ServiceException"}, {"shape":"ResourceNotFoundException"}, {"shape":"InvalidParameterValueException"}, - {"shape":"TooManyRequestsException"} + {"shape":"TooManyRequestsException"}, + {"shape":"ResourceConflictException"} ] }, "UpdateAlias":{ @@ -1841,7 +1843,10 @@ "EniLimitExceeded", "InsufficientRolePermissions", "InvalidConfiguration", - "InternalError" + "InternalError", + "SubnetOutOfIPAddresses", + "InvalidSubnet", + "InvalidSecurityGroup" ] }, "Layer":{ @@ -2670,7 +2675,9 @@ "InsufficientRolePermissions", "InvalidConfiguration", "InternalError", - "SubnetOutOfIPAddresses" + "SubnetOutOfIPAddresses", + "InvalidSubnet", + "InvalidSecurityGroup" ] }, "StatementId":{ diff --git a/models/apis/lambda/2015-03-31/docs-2.json b/models/apis/lambda/2015-03-31/docs-2.json index 2bb1103c79d..7aa169415f8 100644 --- a/models/apis/lambda/2015-03-31/docs-2.json +++ b/models/apis/lambda/2015-03-31/docs-2.json @@ -5,7 +5,7 @@ "AddLayerVersionPermission": "Adds permissions to the resource-based policy of a version of an AWS Lambda layer. Use this action to grant layer usage permission to other accounts. You can grant permission to a single account, all AWS accounts, or all accounts in an organization.
To revoke permission, call RemoveLayerVersionPermission with the statement ID that you specified when you added it.
", "AddPermission": "Grants an AWS service or another account permission to use a function. You can apply the policy at the function level, or specify a qualifier to restrict access to a single version or alias. If you use a qualifier, the invoker must use the full Amazon Resource Name (ARN) of that version or alias to invoke the function.
To grant permission to another account, specify the account ID as the Principal
. For AWS services, the principal is a domain-style identifier defined by the service, like s3.amazonaws.com
or sns.amazonaws.com
. For AWS services, you can also specify the ARN or owning account of the associated resource as the SourceArn
or SourceAccount
. If you grant permission to a service principal without specifying the source, other accounts could potentially configure resources in their account to invoke your Lambda function.
This action adds a statement to a resource-based permissions policy for the function. For more information about function policies, see Lambda Function Policies.
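A minimal sketch of granting a service principal scoped invoke permission, assuming this SDK release's request/`Send` pattern; the function name, statement ID, and bucket ARN are hypothetical. Pinning `SourceArn` is the safeguard the text describes against other accounts routing their resources at your function.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := lambda.New(cfg)

	// Let S3 invoke the function, but only for events from one specific bucket.
	req := svc.AddPermissionRequest(&lambda.AddPermissionInput{
		FunctionName: aws.String("my-function"),            // hypothetical
		StatementId:  aws.String("s3-invoke"),              // hypothetical
		Action:       aws.String("lambda:InvokeFunction"),
		Principal:    aws.String("s3.amazonaws.com"),
		SourceArn:    aws.String("arn:aws:s3:::my-bucket"), // hypothetical
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
	fmt.Println("permission granted to s3.amazonaws.com")
}
```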
", "CreateAlias": "Creates an alias for a Lambda function version. Use aliases to provide clients with a function identifier that you can update to invoke a different version.
You can also map an alias to split invocation requests between two versions. Use the RoutingConfig
parameter to specify a second version and the percentage of invocation requests that it receives.
Creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics.
The following error handling options are only available for stream sources (DynamoDB and Kinesis):
BisectBatchOnFunctionError
- If the function returns an error, split the batch in two and retry.
DestinationConfig
- Send discarded records to an Amazon SQS queue or Amazon SNS topic.
MaximumRecordAgeInSeconds
- Discard records older than the specified age.
MaximumRetryAttempts
- Discard records after the specified number of retries.
Creates a mapping between an event source and an AWS Lambda function. Lambda reads items from the event source and triggers the function.
For details about each event source type, see the following topics.
The following error handling options are only available for stream sources (DynamoDB and Kinesis):
BisectBatchOnFunctionError
- If the function returns an error, split the batch in two and retry.
DestinationConfig
- Send discarded records to an Amazon SQS queue or Amazon SNS topic.
MaximumRecordAgeInSeconds
- Discard records older than the specified age.
MaximumRetryAttempts
- Discard records after the specified number of retries.
ParallelizationFactor
- Process multiple batches from each shard concurrently.
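A minimal sketch of a stream mapping that sets the error handling options listed above, assuming this SDK release's request/`Send` pattern and the generated enum name `EventSourcePositionLatest`; the function name and stream ARN are hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := lambda.New(cfg)

	// Map a Kinesis stream to a function with the stream-only error handling
	// options described above.
	req := svc.CreateEventSourceMappingRequest(&lambda.CreateEventSourceMappingInput{
		FunctionName:               aws.String("my-function"),                                            // hypothetical
		EventSourceArn:             aws.String("arn:aws:kinesis:us-east-2:111122223333:stream/my-stream"), // hypothetical
		StartingPosition:           lambda.EventSourcePositionLatest,
		BisectBatchOnFunctionError: aws.Bool(true),  // split failing batches and retry
		MaximumRetryAttempts:       aws.Int64(2),    // then discard
		MaximumRecordAgeInSeconds:  aws.Int64(3600), // drop records older than an hour
		ParallelizationFactor:      aws.Int64(2),    // two concurrent batches per shard
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("mapping UUID:", *resp.UUID)
}
```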
Creates a Lambda function. To create a function, you need a deployment package and an execution role. The deployment package contains your function code. The execution role grants the function permission to use AWS services, such as Amazon CloudWatch Logs for log streaming and AWS X-Ray for request tracing.
When you create a function, Lambda provisions an instance of the function and its supporting resources. If your function connects to a VPC, this process can take a minute or so. During this time, you can't invoke or modify the function. The State
, StateReason
, and StateReasonCode
fields in the response from GetFunctionConfiguration indicate when the function is ready to invoke. For more information, see Function States.
A function has an unpublished version, and can have published versions and aliases. The unpublished version changes when you update your function's code and configuration. A published version is a snapshot of your function code and configuration that can't be changed. An alias is a named resource that maps to a version, and can be changed to map to a different version. Use the Publish
parameter to create version 1
of your function from its initial configuration.
The other parameters let you configure version-specific and function-level settings. You can modify version-specific settings later with UpdateFunctionConfiguration. Function-level settings apply to both the unpublished and published versions of the function, and include tags (TagResource) and per-function concurrency limits (PutFunctionConcurrency).
If another account or an AWS service invokes your function, use AddPermission to grant permission by creating a resource-based IAM policy. You can grant permissions at the function level, on a version, or on an alias.
To invoke your function directly, use Invoke. To invoke your function in response to events in other AWS services, create an event source mapping (CreateEventSourceMapping), or configure a function trigger in the other service. For more information, see Invoking Functions.
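A minimal sketch of waiting out the provisioning window described above by polling GetFunctionConfiguration, assuming this SDK release's request/`Send` pattern and the generated `State` and `StateReasonCode` fields; the function name is hypothetical, and the state is compared as a string to avoid depending on exact generated constant names.

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/lambda"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := lambda.New(cfg)

	// After CreateFunction returns, poll until State leaves Pending.
	for {
		req := svc.GetFunctionConfigurationRequest(&lambda.GetFunctionConfigurationInput{
			FunctionName: aws.String("my-function"), // hypothetical
		})
		resp, err := req.Send(context.TODO())
		if err != nil {
			log.Fatal(err)
		}
		if string(resp.State) != "Pending" {
			fmt.Println("state:", resp.State, "reason code:", resp.StateReasonCode)
			break
		}
		time.Sleep(2 * time.Second)
	}
}
```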
", "DeleteAlias": "Deletes a Lambda function alias.
", "DeleteEventSourceMapping": "Deletes an event source mapping. You can get the identifier of a mapping from the output of ListEventSourceMappings.
When you delete an event source mapping, it enters a Deleting
state and might not be completely deleted for several seconds.
Adds tags to a function.
", "UntagResource": "Removes tags from a function.
", "UpdateAlias": "Updates the configuration of a Lambda function alias.
", - "UpdateEventSourceMapping": "Updates an event source mapping. You can change the function that AWS Lambda invokes, or pause invocation and resume later from the same location.
The following error handling options are only available for stream sources (DynamoDB and Kinesis):
BisectBatchOnFunctionError
- If the function returns an error, split the batch in two and retry.
DestinationConfig
- Send discarded records to an Amazon SQS queue or Amazon SNS topic.
MaximumRecordAgeInSeconds
- Discard records older than the specified age.
MaximumRetryAttempts
- Discard records after the specified number of retries.
Updates an event source mapping. You can change the function that AWS Lambda invokes, or pause invocation and resume later from the same location.
The following error handling options are only available for stream sources (DynamoDB and Kinesis):
BisectBatchOnFunctionError
- If the function returns an error, split the batch in two and retry.
DestinationConfig
- Send discarded records to an Amazon SQS queue or Amazon SNS topic.
MaximumRecordAgeInSeconds
- Discard records older than the specified age.
MaximumRetryAttempts
- Discard records after the specified number of retries.
ParallelizationFactor
- Process multiple batches from each shard concurrently.
Updates a Lambda function's code.
The function's code is locked when you publish a version. You can't modify the code of a published version, only the unpublished version.
", "UpdateFunctionConfiguration": "Modify the version-specific settings of a Lambda function.
When you update a function, Lambda provisions an instance of the function and its supporting resources. If your function connects to a VPC, this process can take a minute. During this time, you can't modify the function, but you can still invoke it. The LastUpdateStatus
, LastUpdateStatusReason
, and LastUpdateStatusReasonCode
fields in the response from GetFunctionConfiguration indicate when the update is complete and the function is processing events with the new configuration. For more information, see Function States.
These settings can vary between versions of a function and are locked when you publish a version. You can't modify the configuration of a published version, only the unpublished version.
To configure function concurrency, use PutFunctionConcurrency. To grant invoke permissions to an account or AWS service, use AddPermission.
", "UpdateFunctionEventInvokeConfig": "Updates the configuration for asynchronous invocation for a function, version, or alias.
To configure options for asynchronous invocation, use PutFunctionEventInvokeConfig.
" @@ -683,7 +683,7 @@ "LastUpdateStatus": { "base": null, "refs": { - "FunctionConfiguration$LastUpdateStatus": "The status of the last update that was performed on the function.
" + "FunctionConfiguration$LastUpdateStatus": "The status of the last update that was performed on the function. This is first set to Successful
after function creation completes.
For Lambda@Edge functions, the AWS Region of the master function. For example, us-east-2
or ALL
. If specified, you must set FunctionVersion
to ALL
.
For Lambda@Edge functions, the AWS Region of the master function. For example, us-east-1
filters the list of functions to only include Lambda@Edge functions replicated from a master function in US East (N. Virginia). If specified, you must set FunctionVersion
to ALL
.
Up to 3583 bytes of base64-encoded data about the invoking client to pass to the function in the context object.
", - "InvocationResponse$FunctionError": "If present, indicates that an error occurred during function execution. Details about the error are included in the response payload.
Handled
- The runtime caught an error thrown by the function and formatted it into a JSON document.
Unhandled
- The runtime didn't handle the error. For example, the function ran out of memory or timed out.
If present, indicates that an error occurred during function execution. Details about the error are included in the response payload.
", "InvocationResponse$LogResult": "The last 4 KB of the execution log, which is base64 encoded.
", "KMSAccessDeniedException$Type": null, "KMSAccessDeniedException$Message": null, @@ -1522,7 +1522,7 @@ } }, "TracingConfig": { - "base": "The function's AWS X-Ray tracing configuration.
", + "base": "The function's AWS X-Ray tracing configuration. To sample and record incoming requests, set Mode
to Active
.
Set Mode
to Active
to sample and trace a subset of incoming requests with AWS X-Ray.
Set Mode
to Active
to sample and trace a subset of incoming requests with AWS X-Ray.
The content type of the message string.
" } }, + "ConversationLogsRequest": { + "base": "Provides the settings needed for conversation logs.
", + "refs": { + "PutBotAliasRequest$conversationLogs": "Settings for conversation logs for the alias.
" + } + }, + "ConversationLogsResponse": { + "base": "Contains information about conversation log settings.
", + "refs": { + "BotAliasMetadata$conversationLogs": "Settings that determine how Amazon Lex uses conversation logs for the alias.
", + "GetBotAliasResponse$conversationLogs": "The settings that determine how Amazon Lex uses conversation logs for the alias.
", + "PutBotAliasResponse$conversationLogs": "The settings that determine how Amazon Lex uses conversation logs for the alias.
" + } + }, "Count": { "base": null, "refs": { @@ -393,6 +407,13 @@ "SlotTypeMetadata$description": "A description of the slot type.
" } }, + "Destination": { + "base": null, + "refs": { + "LogSettingsRequest$destination": "Where the logs will be delivered. Text logs are delivered to a CloudWatch Logs log group. Audio logs are delivered to an S3 bucket.
", + "LogSettingsResponse$destination": "The destination where logs are delivered.
" + } + }, "EnumerationValue": { "base": "Each slot type can have a set of values. Each enumeration value represents a value the slot type can take.
For example, a pizza ordering bot could have a slot type that specifies the type of crust that the pizza should have. The slot type could include the values
thick
thin
stuffed
Identifies the message group that the message belongs to. When a group is assigned to a message, Amazon Lex returns one message from each group in the response.
" } }, + "IamRoleArn": { + "base": null, + "refs": { + "ConversationLogsRequest$iamRoleArn": "The Amazon Resource Name (ARN) of an IAM role with permission to write to your CloudWatch Logs for text logs and your S3 bucket for audio logs. If audio encryption is enabled, this role also provides access permission for the AWS KMS key used for encrypting audio logs. For more information, see Creating an IAM Role and Policy for Conversation Logs.
", + "ConversationLogsResponse$iamRoleArn": "The Amazon Resource Name (ARN) of the IAM role used to write your logs to CloudWatch Logs or an S3 bucket.
" + } + }, "ImportStatus": { "base": null, "refs": { @@ -707,6 +735,13 @@ "refs": { } }, + "KmsKeyArn": { + "base": null, + "refs": { + "LogSettingsRequest$kmsKeyArn": "The Amazon Resource Name (ARN) of the AWS KMS customer managed key for encrypting audio logs delivered to an S3 bucket. The key does not apply to CloudWatch Logs and is optional for S3 buckets.
", + "LogSettingsResponse$kmsKeyArn": "The Amazon Resource Name (ARN) of the key used to encrypt audio logs in an S3 bucket.
" + } + }, "LambdaARN": { "base": null, "refs": { @@ -750,6 +785,37 @@ "GetBuiltinIntentResponse$supportedLocales": "A list of locales that the intent supports.
" } }, + "LogSettingsRequest": { + "base": "Settings used to configure delivery mode and destination for conversation logs.
", + "refs": { + "LogSettingsRequestList$member": null + } + }, + "LogSettingsRequestList": { + "base": null, + "refs": { + "ConversationLogsRequest$logSettings": "The settings for your conversation logs. You can log the conversation text, conversation audio, or both.
" + } + }, + "LogSettingsResponse": { + "base": "The settings for conversation logs.
", + "refs": { + "LogSettingsResponseList$member": null + } + }, + "LogSettingsResponseList": { + "base": null, + "refs": { + "ConversationLogsResponse$logSettings": "The settings for your conversation logs. You can log text, audio, or both.
" + } + }, + "LogType": { + "base": null, + "refs": { + "LogSettingsRequest$logType": "The type of logging to enable. Text logs are delivered to a CloudWatch Logs log group. Audio logs are delivered to an S3 bucket.
", + "LogSettingsResponse$logType": "The type of logging that is enabled.
" + } + }, "MaxResults": { "base": null, "refs": { @@ -842,6 +908,12 @@ "GetExportResponse$version": "The version of the bot being exported.
" } }, + "ObfuscationSetting": { + "base": null, + "refs": { + "Slot$obfuscationSetting": "Determines whether a slot is obfuscated in conversation logs and stored utterances. When you obfuscate a slot, the value is replaced by the slot name in curly braces ({}). For example, if the slot name is \"full_name\", obfuscated values are replaced with \"{full_name}\". For more information, see Slot Obfuscation .
" + } + }, "PreconditionFailedException": { "base": "The checksum of the resource that you are trying to change does not match the checksum in the request. Check the resource's checksum and try again.
", "refs": { @@ -926,11 +998,24 @@ "ResourceInUseException$referenceType": null } }, + "ResourceArn": { + "base": null, + "refs": { + "LogSettingsRequest$resourceArn": "The Amazon Resource Name (ARN) of the CloudWatch Logs log group or S3 bucket where the logs should be delivered.
", + "LogSettingsResponse$resourceArn": "The Amazon Resource Name (ARN) of the CloudWatch Logs log group or S3 bucket where the logs are delivered.
" + } + }, "ResourceInUseException": { "base": "The resource that you are attempting to delete is referred to by another resource. Use this information to remove references to the resource that you are trying to delete.
The body of the exception contains a JSON object that describes the resource.
{ \"resourceType\": BOT | BOTALIAS | BOTCHANNEL | INTENT,
\"resourceReference\": {
\"name\": string, \"version\": string } }
The resource prefix is the first part of the S3 object key within the S3 bucket that you specified to contain audio logs. For CloudWatch Logs it is the prefix of the log stream name within the log group that you specified.
" + } + }, "ResourceReference": { "base": "Describes the resource that refers to the resource that you are attempting to delete. This object is returned as part of the ResourceInUseException
exception.
Attaches a Transport Layer Security (TLS) certificate to your load balancer. TLS is just an updated, more secure version of Secure Socket Layer (SSL).
Once you create and validate your certificate, you can attach it to your load balancer. You can also use this API to rotate the certificates on your account. Use the attach load balancer tls certificate
operation with the non-attached certificate, and it will replace the existing one and become the attached certificate.
The attach load balancer tls certificate
operation supports tag-based access control via resource tags applied to the resource identified by load balancer name
. For more information, see the Lightsail Dev Guide.
Attaches a static IP address to a specific Amazon Lightsail instance.
", "CloseInstancePublicPorts": "Closes the public ports on a specific Amazon Lightsail instance.
The close instance public ports
operation supports tag-based access control via resource tags applied to the resource identified by instance name
. For more information, see the Lightsail Dev Guide.
Copies a manual instance or disk snapshot as another manual snapshot, or copies an automatic instance or disk snapshot as a manual snapshot. This operation can also be used to copy a manual or automatic snapshot of an instance or a disk from one AWS Region to another in Amazon Lightsail.
When copying a manual snapshot, be sure to define the source region
, source snapshot name
, and target snapshot name
parameters.
When copying an automatic snapshot, be sure to define the source region
, source resource name
, target snapshot name
, and either the restore date
or the use latest restorable auto snapshot
parameters.
Database snapshots cannot be copied at this time.
Copies a manual snapshot of an instance or disk as another manual snapshot, or copies an automatic snapshot of an instance or disk as a manual snapshot. This operation can also be used to copy a manual or automatic snapshot of an instance or a disk from one AWS Region to another in Amazon Lightsail.
When copying a manual snapshot, be sure to define the source region
, source snapshot name
, and target snapshot name
parameters.
When copying an automatic snapshot, be sure to define the source region
, source resource name
, target snapshot name
, and either the restore date
or the use latest restorable auto snapshot
parameters.
Creates an AWS CloudFormation stack, which creates a new Amazon EC2 instance from an exported Amazon Lightsail snapshot. This operation results in a CloudFormation stack record that can be used to track the AWS CloudFormation stack created. Use the get cloud formation stack records
operation to get a list of the CloudFormation stacks created.
Wait until after your new Amazon EC2 instance is created before running the create cloud formation stack
operation again with the same export snapshot record.
Creates a block storage disk that can be attached to an Amazon Lightsail instance in the same Availability Zone (e.g., us-east-2a
).
The create disk
operation supports tag-based access control via request tags. For more information, see the Lightsail Dev Guide.
Creates a block storage disk from a manual or automatic snapshot of a disk. The resulting disk can be attached to an Amazon Lightsail instance in the same Availability Zone (e.g., us-east-2a
).
The create disk from snapshot
operation supports tag-based access control via request tags and resource tags applied to the resource identified by disk snapshot name
. For more information, see the Lightsail Dev Guide.
Creates a new database in Amazon Lightsail.
The create relational database
operation supports tag-based access control via request tags. For more information, see the Lightsail Dev Guide.
Creates a new database from an existing database snapshot in Amazon Lightsail.
You can create a new database from a snapshot in if something goes wrong with your original database, or to change it to a different plan, such as a high availability or standard plan.
The create relational database from snapshot
operation supports tag-based access control via request tags and resource tags applied to the resource identified by relationalDatabaseSnapshotName. For more information, see the Lightsail Dev Guide.
Creates a snapshot of your database in Amazon Lightsail. You can use snapshots for backups, to make copies of a database, and to save data before deleting a database.
The create relational database snapshot
operation supports tag-based access control via request tags. For more information, see the Lightsail Dev Guide.
Deletes an automatic snapshot for an instance or disk.
", + "DeleteAutoSnapshot": "Deletes an automatic snapshot of an instance or disk. For more information, see the Lightsail Dev Guide.
", "DeleteDisk": "Deletes the specified block storage disk. The disk must be in the available
state (not attached to a Lightsail instance).
The disk may remain in the deleting
state for several minutes.
The delete disk
operation supports tag-based access control via resource tags applied to the resource identified by disk name
. For more information, see the Lightsail Dev Guide.
Deletes the specified disk snapshot.
When you make periodic snapshots of a disk, the snapshots are incremental, and only the blocks on the device that have changed since your last snapshot are saved in the new snapshot. When you delete a snapshot, only the data not needed for any other snapshot is removed. So regardless of which prior snapshots have been deleted, all active snapshots will have access to all the information needed to restore the disk.
The delete disk snapshot
operation supports tag-based access control via resource tags applied to the resource identified by disk snapshot name
. For more information, see the Lightsail Dev Guide.
Deletes the specified domain recordset and all of its domain records.
The delete domain
operation supports tag-based access control via resource tags applied to the resource identified by domain name
. For more information, see the Lightsail Dev Guide.
Enables or modifies an add-on for an Amazon Lightsail resource. For more information, see the Lightsail Dev Guide.
", "ExportSnapshot": "Exports an Amazon Lightsail instance or block storage disk snapshot to Amazon Elastic Compute Cloud (Amazon EC2). This operation results in an export snapshot record that can be used with the create cloud formation stack
operation to create new Amazon EC2 instances.
Exported instance snapshots appear in Amazon EC2 as Amazon Machine Images (AMIs), and the instance system disk appears as an Amazon Elastic Block Store (Amazon EBS) volume. Exported disk snapshots appear in Amazon EC2 as Amazon EBS volumes. Snapshots are exported to the same Amazon Web Services Region in Amazon EC2 as the source Lightsail snapshot.
The export snapshot
operation supports tag-based access control via resource tags applied to the resource identified by source snapshot name
. For more information, see the Lightsail Dev Guide.
Use the get instance snapshots
or get disk snapshots
operations to get a list of snapshots that you can export to Amazon EC2.
Returns the names of all active (not deleted) resources.
", - "GetAutoSnapshots": "Returns the available automatic snapshots for the specified resource name. For more information, see the Lightsail Dev Guide.
", + "GetAutoSnapshots": "Returns the available automatic snapshots for an instance or disk. For more information, see the Lightsail Dev Guide.
", "GetBlueprints": "Returns the list of available instance images, or blueprints. You can use a blueprint to create a new instance already running a specific operating system, as well as a preinstalled app or development stack. The software each instance is running depends on the blueprint image you choose.
Use active blueprints when creating new instances. Inactive blueprints are listed to support customers with existing instances and are not necessarily available to create new instances. Blueprints are marked inactive when they become outdated due to operating system updates or new application releases.
Returns the list of bundles that are available for purchase. A bundle describes the specs for your virtual private server (or instance).
", "GetCloudFormationStackRecords": "Returns the CloudFormation stack record created as a result of the create cloud formation stack
operation.
An AWS CloudFormation stack is used to create a new Amazon EC2 instance from an exported Lightsail snapshot.
", @@ -106,7 +106,7 @@ "UpdateDomainEntry": "Updates a domain recordset after it is created.
The update domain entry
operation supports tag-based access control via resource tags applied to the resource identified by domain name
. For more information, see the Lightsail Dev Guide.
Updates the specified attribute for a load balancer. You can only update one attribute at a time.
The update load balancer attribute
operation supports tag-based access control via resource tags applied to the resource identified by load balancer name
. For more information, see the Lightsail Dev Guide.
Allows the update of one or more attributes of a database in Amazon Lightsail.
Updates are applied immediately, or in cases where the updates could result in an outage, are applied during the database's predefined maintenance window.
The update relational database
operation supports tag-based access control via resource tags applied to the resource identified by relationalDatabaseName. For more information, see the Lightsail Dev Guide.
Allows the update of one or more parameters of a database in Amazon Lightsail.
Parameter updates don't cause outages; therefore, their application is not subject to the preferred maintenance window. However, there are two ways in which paramater updates are applied: dynamic
or pending-reboot
. Parameters marked with a dynamic
apply type are applied immediately. Parameters marked with a pending-reboot
apply type are applied only after the database is rebooted using the reboot relational database
operation.
The update relational database parameters
operation supports tag-based access control via resource tags applied to the resource identified by relationalDatabaseName. For more information, see the Lightsail Dev Guide.
Allows the update of one or more parameters of a database in Amazon Lightsail.
Parameter updates don't cause outages; therefore, their application is not subject to the preferred maintenance window. However, there are two ways in which parameter updates are applied: dynamic
or pending-reboot
. Parameters marked with a dynamic
apply type are applied immediately. Parameters marked with a pending-reboot
apply type are applied only after the database is rebooted using the reboot relational database
operation.
The update relational database parameters
operation supports tag-based access control via resource tags applied to the resource identified by relationalDatabaseName. For more information, see the Lightsail Dev Guide.
The date of the automatic snapshot to delete in YYYY-MM-DD
format.
Use the get auto snapshots
operation to get the available automatic snapshots for a resource.
The date of the automatic snapshot to delete in YYYY-MM-DD
format. Use the get auto snapshots
operation to get the available automatic snapshots for a resource.
An array of objects that describe the automatic snapshots that are available for the specified source resource.asdf
" + "GetAutoSnapshotsResult$autoSnapshots": "An array of objects that describe the automatic snapshots that are available for the specified source instance or disk.
" } }, "AutoSnapshotStatus": { @@ -2265,17 +2265,17 @@ "Blueprint$name": "The friendly name of the blueprint (e.g., Amazon Linux
).
The name of the instance on which you're attempting to close the public ports.
", "CloudFormationStackRecord$name": "The name of the CloudFormation stack record. It starts with CloudFormationStackRecord
followed by a GUID.
The name of the source instance or disk snapshot to be copied.
Define this parameter only when copying a manual snapshot as another manual snapshot.
The name of the new instance or disk snapshot to be created as a copy.
", + "CopySnapshotRequest$sourceSnapshotName": "The name of the source manual snapshot to copy.
Constraint:
Define this parameter only when copying a manual snapshot as another manual snapshot.
The name of the new manual snapshot to be created as a copy.
", "CreateDiskFromSnapshotRequest$diskName": "The unique Lightsail disk name (e.g., my-disk
).
The name of the disk snapshot (e.g., my-snapshot
) from which to create the new storage disk.
This parameter cannot be defined together with the source disk name
parameter. The disk snapshot name
and source disk name
parameters are mutually exclusive.
The name of the disk snapshot (e.g., my-snapshot
) from which to create the new storage disk.
Constraint:
This parameter cannot be defined together with the source disk name
parameter. The disk snapshot name
and source disk name
parameters are mutually exclusive.
The unique Lightsail disk name (e.g., my-disk
).
The unique name of the source disk (e.g., Disk-Virginia-1
).
This parameter cannot be defined together with the instance name
parameter. The disk name
and instance name
parameters are mutually exclusive.
The name of the destination disk snapshot (e.g., my-disk-snapshot
) based on the source disk.
The unique name of the source instance (e.g., Amazon_Linux-512MB-Virginia-1
). When this is defined, a snapshot of the instance's system volume is created.
This parameter cannot be defined together with the disk name
parameter. The instance name
and disk name
parameters are mutually exclusive.
The name for your new snapshot.
", "CreateInstanceSnapshotRequest$instanceName": "The Lightsail instance on which to base your snapshot.
", - "CreateInstancesFromSnapshotRequest$instanceSnapshotName": "The name of the instance snapshot on which you are basing your new instances. Use the get instance snapshots operation to return information about your existing snapshots.
This parameter cannot be defined together with the source instance name
parameter. The instance snapshot name
and source instance name
parameters are mutually exclusive.
The name of the instance snapshot on which you are basing your new instances. Use the get instance snapshots operation to return information about your existing snapshots.
Constraint:
This parameter cannot be defined together with the source instance name
parameter. The instance snapshot name
and source instance name
parameters are mutually exclusive.
The name for your key pair.
", "CreateInstancesRequest$customImageName": "(Deprecated) The name for your custom image.
In releases prior to June 12, 2017, this parameter was ignored by the API. It is now deprecated.
The name of your key pair.
", @@ -2290,7 +2290,7 @@ "CreateRelationalDatabaseRequest$relationalDatabaseName": "The name to use for your new database.
Constraints:
Must contain from 2 to 255 alphanumeric characters, or hyphens.
The first and last character must be a letter or number.
The name of the database on which to base your new snapshot.
", "CreateRelationalDatabaseSnapshotRequest$relationalDatabaseSnapshotName": "The name for your new database snapshot.
Constraints:
Must contain from 2 to 255 alphanumeric characters, or hyphens.
The first and last character must be a letter or number.
The name of the source resource from which to delete the automatic snapshot.
", + "DeleteAutoSnapshotRequest$resourceName": "The name of the source instance or disk from which to delete the automatic snapshot.
", "DeleteDiskRequest$diskName": "The unique name of the disk you want to delete (e.g., my-disk
).
The name of the disk snapshot you want to delete (e.g., my-disk-snapshot
).
The name of the instance to delete.
", @@ -2306,7 +2306,7 @@ "DetachDiskRequest$diskName": "The unique name of the disk you want to detach from your instance (e.g., my-disk
).
The name of the Lightsail load balancer.
", "DetachStaticIpRequest$staticIpName": "The name of the static IP to detach from the instance.
", - "DisableAddOnRequest$resourceName": "The name of the source resource from which to disable the add-on.
", + "DisableAddOnRequest$resourceName": "The name of the source resource for which to disable the add-on.
", "Disk$name": "The unique name of the disk.
", "Disk$attachedTo": "The resources to which the disk is attached.
", "DiskMap$newDiskName": "The new disk name (e.g., my-new-disk
).
The name of the source resource for which to enable or modify the add-on.
", "ExportSnapshotRecord$name": "The export snapshot record name.
", "ExportSnapshotRequest$sourceSnapshotName": "The name of the instance or disk snapshot to be exported to Amazon EC2.
", - "GetAutoSnapshotsRequest$resourceName": "The name of the source resource from which to get automatic snapshot information.
", - "GetAutoSnapshotsResult$resourceName": "The name of the source resource for the automatic snapshots.
", + "GetAutoSnapshotsRequest$resourceName": "The name of the source instance or disk from which to get automatic snapshot information.
", + "GetAutoSnapshotsResult$resourceName": "The name of the source instance or disk for the automatic snapshots.
", "GetDiskRequest$diskName": "The name of the disk (e.g., my-disk
).
The name of the disk snapshot (e.g., my-disk-snapshot
).
The name of the instance to access.
", @@ -2631,9 +2631,9 @@ "refs": { "Blueprint$isActive": "A Boolean value indicating whether the blueprint is active. Inactive blueprints are listed to support customers with existing instances but are not necessarily available for launch of new instances. Blueprints are marked inactive when they become outdated due to operating system updates or new application releases.
", "Bundle$isActive": "A Boolean value indicating whether the bundle is active.
", - "CopySnapshotRequest$useLatestRestorableAutoSnapshot": "A Boolean value to indicate whether to use the latest available automatic snapshot.
This parameter cannot be defined together with the restore date
parameter. The use latest restorable auto snapshot
and restore date
parameters are mutually exclusive.
Define this parameter only when copying an automatic snapshot as a manual snapshot. For more information, see the Lightsail Dev Guide.
A Boolean value to indicate whether to use the latest available automatic snapshot.
This parameter cannot be defined together with the restore date
parameter. The use latest restorable auto snapshot
and restore date
parameters are mutually exclusive.
Define this parameter only when creating a new disk from an automatic snapshot. For more information, see the Lightsail Dev Guide.
A Boolean value to indicate whether to use the latest available automatic snapshot.
This parameter cannot be defined together with the restore date
parameter. The use latest restorable auto snapshot
and restore date
parameters are mutually exclusive.
Define this parameter only when creating a new instance from an automatic snapshot. For more information, see the Lightsail Dev Guide.
A Boolean value to indicate whether to use the latest available automatic snapshot of the specified source instance or disk.
Constraints:
This parameter cannot be defined together with the restore date
parameter. The use latest restorable auto snapshot
and restore date
parameters are mutually exclusive.
Define this parameter only when copying an automatic snapshot as a manual snapshot. For more information, see the Lightsail Dev Guide.
A Boolean value to indicate whether to use the latest available automatic snapshot.
Constraints:
This parameter cannot be defined together with the restore date
parameter. The use latest restorable auto snapshot
and restore date
parameters are mutually exclusive.
Define this parameter only when creating a new disk from an automatic snapshot. For more information, see the Lightsail Dev Guide.
A Boolean value to indicate whether to use the latest available automatic snapshot.
Constraints:
This parameter cannot be defined together with the restore date
parameter. The use latest restorable auto snapshot
and restore date
parameters are mutually exclusive.
Define this parameter only when creating a new instance from an automatic snapshot. For more information, see the Lightsail Dev Guide.
Specifies the accessibility options for your new database. A value of true
specifies a database that is available to resources outside of your Lightsail account. A value of false
specifies a database that is available only to your Lightsail resources in the same region as your database.
Specifies whether your database is restored from the latest backup time. A value of true
restores from the latest backup time.
Default: false
Constraints: Cannot be specified if the restore time
parameter is provided.
Specifies the accessibility options for your new database. A value of true
specifies a database that is available to resources outside of your Lightsail account. A value of false
specifies a database that is available only to your Lightsail resources in the same region as your database.
The end-user license agreement URL for the image or blueprint.
", "Bundle$instanceType": "The Amazon EC2 instance type (e.g., t2.micro
).
A friendly name for the bundle (e.g., Micro
).
The name of the source resource from which the automatic snapshot was created.
Define this parameter only when copying an automatic snapshot as a manual snapshot. For more information, see the Lightsail Dev Guide.
The date of the automatic snapshot to copy for the new manual snapshot.
Use the get auto snapshots
operation to identify the dates of the available automatic snapshots.
Constraints:
Must be specified in YYYY-MM-DD
format.
This parameter cannot be defined together with the use latest restorable auto snapshot
parameter. The restore date
and use latest restorable auto snapshot
parameters are mutually exclusive.
Define this parameter only when copying an automatic snapshot as a manual snapshot. For more information, see the Lightsail Dev Guide.
The name of the source disk from which the source automatic snapshot was created.
This parameter cannot be defined together with the disk snapshot name
parameter. The source disk name
and disk snapshot name
parameters are mutually exclusive.
Define this parameter only when creating a new disk from an automatic snapshot. For more information, see the Lightsail Dev Guide.
The date of the automatic snapshot to use for the new disk.
Use the get auto snapshots
operation to identify the dates of the available automatic snapshots.
Constraints:
Must be specified in YYYY-MM-DD
format.
This parameter cannot be defined together with the use latest restorable auto snapshot
parameter. The restore date
and use latest restorable auto snapshot
parameters are mutually exclusive.
Define this parameter only when creating a new disk from an automatic snapshot. For more information, see the Lightsail Dev Guide.
The name of the source instance or disk from which the source automatic snapshot was created.
Constraint: Define this parameter only when copying an automatic snapshot as a manual snapshot. For more information, see the Lightsail Dev Guide.
The date of the source automatic snapshot to copy. Use the get auto snapshots operation to identify the dates of the available automatic snapshots.
Constraints:
Must be specified in YYYY-MM-DD format.
This parameter cannot be defined together with the use latest restorable auto snapshot parameter. The restore date and use latest restorable auto snapshot parameters are mutually exclusive.
Define this parameter only when copying an automatic snapshot as a manual snapshot. For more information, see the Lightsail Dev Guide.
The name of the source disk from which the source automatic snapshot was created.
Constraints:
This parameter cannot be defined together with the disk snapshot name parameter. The source disk name and disk snapshot name parameters are mutually exclusive.
Define this parameter only when creating a new disk from an automatic snapshot. For more information, see the Lightsail Dev Guide.
The date of the automatic snapshot to use for the new disk. Use the get auto snapshots operation to identify the dates of the available automatic snapshots.
Constraints:
Must be specified in YYYY-MM-DD format.
This parameter cannot be defined together with the use latest restorable auto snapshot parameter. The restore date and use latest restorable auto snapshot parameters are mutually exclusive.
Define this parameter only when creating a new disk from an automatic snapshot. For more information, see the Lightsail Dev Guide.
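A companion sketch for the disk restore-date parameters above, reusing `cfg`, `svc`, and the imports from the previous sketch; the disk names and date are hypothetical.

```go
// SourceDiskName plus RestoreDate selects a specific automatic snapshot;
// RestoreDate and UseLatestRestorableAutoSnapshot must not be combined.
req := svc.CreateDiskFromSnapshotRequest(&lightsail.CreateDiskFromSnapshotInput{
	DiskName:         aws.String("my-new-disk"),      // placeholder name
	AvailabilityZone: aws.String("us-east-2a"),       // case sensitive
	SizeInGb:         aws.Int64(32),
	SourceDiskName:   aws.String("my-original-disk"), // placeholder source
	RestoreDate:      aws.String("2020-01-15"),       // YYYY-MM-DD, from GetAutoSnapshots
})
if _, err := req.Send(context.TODO()); err != nil {
	log.Fatalf("create disk from auto snapshot: %v", err)
}
```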
The Availability Zone where you want to create your instances. Use the following formatting: us-east-2a (case sensitive). You can get a list of Availability Zones by using the get regions operation. Be sure to add the include Availability Zones parameter to your request.
You can create a launch script that configures a server with additional user data. For example, apt-get -y update.
Depending on the machine image you choose, the command to get software on your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide.
The name of the source instance from which the source automatic snapshot was created.
This parameter cannot be defined together with the instance snapshot name parameter. The source instance name and instance snapshot name parameters are mutually exclusive.
Define this parameter only when creating a new instance from an automatic snapshot. For more information, see the Lightsail Dev Guide.
The date of the automatic snapshot to use for the new instance.
Use the get auto snapshots operation to identify the dates of the available automatic snapshots.
Constraints:
Must be specified in YYYY-MM-DD format.
This parameter cannot be defined together with the use latest restorable auto snapshot parameter. The restore date and use latest restorable auto snapshot parameters are mutually exclusive.
Define this parameter only when creating a new instance from an automatic snapshot. For more information, see the Lightsail Dev Guide.
The name of the source instance from which the source automatic snapshot was created.
Constraints:
This parameter cannot be defined together with the instance snapshot name parameter. The source instance name and instance snapshot name parameters are mutually exclusive.
Define this parameter only when creating a new instance from an automatic snapshot. For more information, see the Lightsail Dev Guide.
The date of the automatic snapshot to use for the new instance. Use the get auto snapshots operation to identify the dates of the available automatic snapshots.
Constraints:
Must be specified in YYYY-MM-DD format.
This parameter cannot be defined together with the use latest restorable auto snapshot parameter. The restore date and use latest restorable auto snapshot parameters are mutually exclusive.
Define this parameter only when creating a new instance from an automatic snapshot. For more information, see the Lightsail Dev Guide.
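The instance variant of the same restore flow, again reusing the earlier sketch's context; the names and bundle ID are assumptions.

```go
// SourceInstanceName with UseLatestRestorableAutoSnapshot replaces
// InstanceSnapshotName; the two parameter groups are mutually exclusive.
req := svc.CreateInstancesFromSnapshotRequest(&lightsail.CreateInstancesFromSnapshotInput{
	InstanceNames:                   []string{"restored-instance"}, // placeholder
	AvailabilityZone:                aws.String("us-east-2a"),
	BundleId:                        aws.String("micro_2_0"), // assumed bundle ID
	SourceInstanceName:              aws.String("my-instance"),
	UseLatestRestorableAutoSnapshot: aws.Bool(true),
})
if _, err := req.Send(context.TODO()); err != nil {
	log.Fatalf("create instance from auto snapshot: %v", err)
}
```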
The Availability Zone in which to create your instance. Use the following format: us-east-2a (case sensitive). You can get a list of Availability Zones by using the get regions operation. Be sure to add the include Availability Zones parameter to your request.
A launch script you can create that configures a server with additional user data. For example, you might want to run apt-get -y update.
Depending on the machine image you choose, the command to get software on your instance varies. Amazon Linux and CentOS use yum, Debian and Ubuntu use apt-get, and FreeBSD uses pkg. For a complete list, see the Dev Guide.
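A short sketch of the user-data mechanism described above, reusing the earlier context; the blueprint and bundle IDs are placeholders, and the apt-get script assumes a Debian/Ubuntu blueprint.

```go
req := svc.CreateInstancesRequest(&lightsail.CreateInstancesInput{
	InstanceNames:    []string{"web-1"},
	AvailabilityZone: aws.String("us-east-2a"),
	BlueprintId:      aws.String("ubuntu_18_04"), // assumed blueprint ID
	BundleId:         aws.String("micro_2_0"),    // assumed bundle ID
	UserData:         aws.String("apt-get -y update"),
})
if _, err := req.Send(context.TODO()); err != nil {
	log.Fatalf("create instances: %v", err)
}
```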
The path you provided to perform the load balancer health check. If you didn't specify a health check path, Lightsail uses the root path of your website (e.g., \"/\").
You may want to specify a custom health check path other than the root of your application if your home page loads slowly or has a lot of media or scripting on it.
", @@ -2871,6 +2871,7 @@ "RelationalDatabase$supportCode": "The support code for the database. Include this code in your email to support when you have questions about a database in Lightsail. This code enables our support team to look up your Lightsail information more easily.
", "RelationalDatabase$masterDatabaseName": "The name of the master database created when the Lightsail database resource is created.
", "RelationalDatabase$secondaryAvailabilityZone": "Describes the secondary Availability Zone of a high availability database.
The secondary database is used for failover support of a high availability database.
", + "RelationalDatabase$caCertificateIdentifier": "The certificate associated with the database.
", "RelationalDatabaseBlueprint$blueprintId": "The ID for the database blueprint.
", "RelationalDatabaseBlueprint$engineVersion": "The database engine version for the database blueprint (for example, 5.7.23
).
The description of the database engine for the database blueprint.
", @@ -2900,7 +2901,8 @@ "UnauthenticatedException$message": null, "UnauthenticatedException$tip": null, "UpdateRelationalDatabaseRequest$preferredBackupWindow": "The daily time range during which automated backups are created for your database if automated backups are enabled.
Constraints:
Must be in the hh24:mi-hh24:mi format.
Example: 16:00-16:30
Specified in Coordinated Universal Time (UTC).
Must not conflict with the preferred maintenance window.
Must be at least 30 minutes.
The weekly time range during which system maintenance can occur on your database.
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week.
Constraints:
Must be in the ddd:hh24:mi-ddd:hh24:mi format.
Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
Must be at least 30 minutes.
Specified in Coordinated Universal Time (UTC).
Example: Tue:17:00-Tue:17:30
The weekly time range during which system maintenance can occur on your database.
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week.
Constraints:
Must be in the ddd:hh24:mi-ddd:hh24:mi format.
Valid days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
Must be at least 30 minutes.
Specified in Coordinated Universal Time (UTC).
Example: Tue:17:00-Tue:17:30
Indicates the certificate that needs to be associated with the database.
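The window formats above translate directly into an UpdateRelationalDatabase call; a hedged sketch reusing the earlier Lightsail context, with a placeholder database name (times in UTC):

```go
req := svc.UpdateRelationalDatabaseRequest(&lightsail.UpdateRelationalDatabaseInput{
	RelationalDatabaseName:     aws.String("my-database"),
	PreferredBackupWindow:      aws.String("16:00-16:30"),         // hh24:mi-hh24:mi
	PreferredMaintenanceWindow: aws.String("Tue:17:00-Tue:17:30"), // ddd:hh24:mi-ddd:hh24:mi
})
if _, err := req.Send(context.TODO()); err != nil {
	log.Fatalf("update database: %v", err)
}
```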
" } }, "timestamp": { diff --git a/models/apis/logs/2014-03-28/docs-2.json b/models/apis/logs/2014-03-28/docs-2.json index 63521292674..64de9540877 100644 --- a/models/apis/logs/2014-03-28/docs-2.json +++ b/models/apis/logs/2014-03-28/docs-2.json @@ -2,11 +2,11 @@ "version": "2.0", "service": "You can use Amazon CloudWatch Logs to monitor, store, and access your log files from Amazon EC2 instances, AWS CloudTrail, or other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console, CloudWatch Logs commands in the AWS CLI, CloudWatch Logs API, or CloudWatch Logs SDK.
You can use CloudWatch Logs to:
Monitor logs from EC2 instances in real-time: You can use CloudWatch Logs to monitor applications and systems using log data. For example, CloudWatch Logs can track the number of errors that occur in your application logs and send you a notification whenever the rate of errors exceeds a threshold that you specify. CloudWatch Logs uses your log data for monitoring, so no code changes are required. For example, you can monitor application logs for specific literal terms (such as \"NullReferenceException\") or count the number of occurrences of a literal term at a particular position in log data (such as \"404\" status codes in an Apache access log). When the term you are searching for is found, CloudWatch Logs reports the data to a CloudWatch metric that you specify.
Monitor AWS CloudTrail logged events: You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail and use the notification to perform troubleshooting.
Archive log data: You can use CloudWatch Logs to store your log data in highly durable storage. You can change the log retention setting so that any log events older than this setting are automatically deleted. The CloudWatch Logs agent makes it easy to quickly send both rotated and non-rotated log data off of a host and into the log service. You can then access the raw log data when you need it.
Associates the specified AWS Key Management Service (AWS KMS) customer master key (CMK) with the specified log group.
Associating an AWS KMS CMK with a log group overrides any existing associations between the log group and a CMK. After a CMK is associated with a log group, all newly ingested data for the log group is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.
Note that it can take up to 5 minutes for this operation to take effect.
If you attempt to associate a CMK with a log group but the CMK does not exist or the CMK is disabled, you will receive an InvalidParameterException error.
Associates the specified AWS Key Management Service (AWS KMS) customer master key (CMK) with the specified log group.
Associating an AWS KMS CMK with a log group overrides any existing associations between the log group and a CMK. After a CMK is associated with a log group, all newly ingested data for the log group is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.
Important: CloudWatch Logs supports only symmetric CMKs. Do not associate an asymmetric CMK with your log group. For more information, see Using Symmetric and Asymmetric Keys.
Note that it can take up to 5 minutes for this operation to take effect.
If you attempt to associate a CMK with a log group but the CMK does not exist or the CMK is disabled, you will receive an InvalidParameterException error.
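For reference, a minimal AssociateKmsKey sketch under the same v0.19 request/`Send` assumption; the ARN and group name are placeholders.

```go
svc := cloudwatchlogs.New(cfg)
req := svc.AssociateKmsKeyRequest(&cloudwatchlogs.AssociateKmsKeyInput{
	LogGroupName: aws.String("/my/app/logs"),
	KmsKeyId:     aws.String("arn:aws:kms:us-east-2:123456789012:key/abcd-ef"), // must be a symmetric CMK
})
if _, err := req.Send(context.TODO()); err != nil {
	// A missing or disabled CMK surfaces as InvalidParameterException.
	log.Fatalf("associate kms key: %v", err)
}
```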
Cancels the specified export task.
The task must be in the PENDING or RUNNING state.
Creates an export task, which allows you to efficiently export data from a log group to an Amazon S3 bucket.
This is an asynchronous call. If all the required information is provided, this operation initiates an export task and responds with the ID of the task. After the task has started, you can use DescribeExportTasks to get the status of the export task. Each account can only have one active (RUNNING or PENDING) export task at a time. To cancel an export task, use CancelExportTask.
You can export logs from multiple log groups or multiple time ranges to the same S3 bucket. To separate out log data for each export task, you can specify a prefix to be used as the Amazon S3 key prefix for all exported objects.
Exporting to S3 buckets that are encrypted with AES-256 is supported. Exporting to S3 buckets encrypted with SSE-KMS is not supported.
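A hedged sketch of the export flow, reusing the `svc` client above plus a `time` import; bucket and group names are placeholders, and From/To are millisecond epochs.

```go
now := time.Now()
req := svc.CreateExportTaskRequest(&cloudwatchlogs.CreateExportTaskInput{
	LogGroupName:      aws.String("/my/app/logs"),
	From:              aws.Int64(now.Add(-24*time.Hour).UnixNano() / int64(time.Millisecond)),
	To:                aws.Int64(now.UnixNano() / int64(time.Millisecond)),
	Destination:       aws.String("my-export-bucket"), // AES-256-encrypted bucket
	DestinationPrefix: aws.String("exports/my-app"),   // separates this task's objects
})
resp, err := req.Send(context.TODO())
if err != nil {
	log.Fatalf("create export task: %v", err)
}
// Poll DescribeExportTasks with resp.TaskId to track completion.
_ = resp
```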
", - "CreateLogGroup": "Creates a log group with the specified name.
You can create up to 20,000 log groups per account.
You must use the following guidelines when naming a log group:
Log group names must be unique within a region for an AWS account.
Log group names can be between 1 and 512 characters long.
Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign)
If you associate an AWS Key Management Service (AWS KMS) customer master key (CMK) with the log group, ingested data is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a CMK with the log group but the CMK does not exist or the CMK is disabled, you will receive an InvalidParameterException error.
Creates a log stream for the specified log group.
There is no limit on the number of log streams that you can create for a log group.
You must use the following guidelines when naming a log stream:
Log stream names must be unique within the log group.
Log stream names can be between 1 and 512 characters long.
The ':' (colon) and '*' (asterisk) characters are not allowed.
Creates a log group with the specified name.
You can create up to 20,000 log groups per account.
You must use the following guidelines when naming a log group:
Log group names must be unique within a region for an AWS account.
Log group names can be between 1 and 512 characters long.
Log group names consist of the following characters: a-z, A-Z, 0-9, '_' (underscore), '-' (hyphen), '/' (forward slash), '.' (period), and '#' (number sign)
If you associate an AWS Key Management Service (AWS KMS) customer master key (CMK) with the log group, ingested data is encrypted using the CMK. This association is stored as long as the data encrypted with the CMK is still within Amazon CloudWatch Logs. This enables Amazon CloudWatch Logs to decrypt this data whenever it is requested.
If you attempt to associate a CMK with the log group but the CMK does not exist or the CMK is disabled, you will receive an InvalidParameterException error.
Important: CloudWatch Logs supports only symmetric CMKs. Do not associate an asymmetric CMK with your log group. For more information, see Using Symmetric and Asymmetric Keys.
Creates a log stream for the specified log group.
There is no limit on the number of log streams that you can create for a log group. There is a limit of 50 TPS on CreateLogStream operations, after which transactions are throttled.
You must use the following guidelines when naming a log stream:
Log stream names must be unique within the log group.
Log stream names can be between 1 and 512 characters long.
The ':' (colon) and '*' (asterisk) characters are not allowed.
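The naming guidelines above can be checked client-side before calling the API; a sketch reusing the `svc` client plus a `regexp` import. The regexp is a convenience that mirrors the documented character set, not an API feature.

```go
logGroupNameRE := regexp.MustCompile(`^[a-zA-Z0-9_\-/.#]{1,512}$`)
group, stream := "/my/app/logs", "web-1" // placeholders

if !logGroupNameRE.MatchString(group) {
	log.Fatalf("invalid log group name: %q", group)
}
if _, err := svc.CreateLogGroupRequest(&cloudwatchlogs.CreateLogGroupInput{
	LogGroupName: aws.String(group),
}).Send(context.TODO()); err != nil {
	log.Fatalf("create log group: %v", err)
}
// Stream names must be unique within the group; ':' and '*' are not allowed.
if _, err := svc.CreateLogStreamRequest(&cloudwatchlogs.CreateLogStreamInput{
	LogGroupName:  aws.String(group),
	LogStreamName: aws.String(stream),
}).Send(context.TODO()); err != nil {
	log.Fatalf("create log stream: %v", err)
}
```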
Deletes the specified destination, and eventually disables all the subscription filters that publish to it. This operation does not delete the physical resource encapsulated by the destination.
", "DeleteLogGroup": "Deletes the specified log group and permanently deletes all the archived log events associated with the log group.
", "DeleteLogStream": "Deletes the specified log stream and permanently deletes all the archived log events associated with the log stream.
", @@ -31,7 +31,7 @@ "ListTagsLogGroup": "Lists the tags for the specified log group.
", "PutDestination": "Creates or updates a destination. This operation is used only to create destinations for cross-account subscriptions.
A destination encapsulates a physical resource (such as an Amazon Kinesis stream) and enables you to subscribe to a real-time stream of log events for a different account, ingested using PutLogEvents.
Through an access policy, a destination controls what is written to it. By default, PutDestination does not set any access policy with the destination, which means a cross-account user cannot call PutSubscriptionFilter against this destination. To enable this, the destination owner must call PutDestinationPolicy after PutDestination.
Creates or updates an access policy associated with an existing destination. An access policy is an IAM policy document that is used to authorize claims to register a subscription filter against a given destination.
", - "PutLogEvents": "Uploads a batch of log events to the specified log stream.
You must include the sequence token obtained from the response of the previous call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token using DescribeLogStreams. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls may be successful, or one may be rejected.
The batch of events must satisfy the following constraints:
The maximum batch size is 1,048,576 bytes, and this size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
None of the log events in the batch can be more than 2 hours in the future.
None of the log events in the batch can be older than 14 days or older than the retention period of the log group.
The log events in the batch must be in chronological order by their timestamp. The timestamp is the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In AWS Tools for PowerShell and the AWS SDK for .NET, the timestamp is specified in .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.)
The maximum number of log events in a batch is 10,000.
A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
If a call to PutLogEvents returns \"UnrecognizedClientException\" the most likely cause is an invalid AWS access key ID or secret key.
", + "PutLogEvents": "Uploads a batch of log events to the specified log stream.
You must include the sequence token obtained from the response of the previous call. An upload in a newly created log stream does not require a sequence token. You can also get the sequence token in the expectedSequenceToken field from InvalidSequenceTokenException. If you call PutLogEvents twice within a narrow time period using the same value for sequenceToken, both calls may be successful, or one may be rejected.
The batch of events must satisfy the following constraints:
The maximum batch size is 1,048,576 bytes, and this size is calculated as the sum of all event messages in UTF-8, plus 26 bytes for each log event.
None of the log events in the batch can be more than 2 hours in the future.
None of the log events in the batch can be older than 14 days or older than the retention period of the log group.
The log events in the batch must be in chronological order by their timestamp. The timestamp is the time the event occurred, expressed as the number of milliseconds after Jan 1, 1970 00:00:00 UTC. (In AWS Tools for PowerShell and the AWS SDK for .NET, the timestamp is specified in .NET format: yyyy-mm-ddThh:mm:ss. For example, 2017-09-15T13:45:30.)
A batch of log events in a single request cannot span more than 24 hours. Otherwise, the operation fails.
The maximum number of log events in a batch is 10,000.
There is a quota of 5 requests per second per log stream. Additional requests are throttled. This quota can't be changed.
If a call to PutLogEvents returns \"UnrecognizedClientException\" the most likely cause is an invalid AWS access key ID or secret key.
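A sketch of the sequence-token handoff described above, reusing the earlier `svc` and `time` context. The typed accessor for expectedSequenceToken may differ by SDK version, so error recovery is only noted in a comment.

```go
events := []cloudwatchlogs.InputLogEvent{{
	Message:   aws.String("hello from the sketch"),
	Timestamp: aws.Int64(time.Now().UnixNano() / int64(time.Millisecond)),
}}
var nextToken *string // nil for the first upload into a new stream
req := svc.PutLogEventsRequest(&cloudwatchlogs.PutLogEventsInput{
	LogGroupName:  aws.String("/my/app/logs"),
	LogStreamName: aws.String("web-1"),
	LogEvents:     events,
	SequenceToken: nextToken,
})
resp, err := req.Send(context.TODO())
if err != nil {
	// An InvalidSequenceTokenException carries the expectedSequenceToken to retry with.
	log.Fatalf("put log events: %v", err)
}
nextToken = resp.NextSequenceToken // pass on the next PutLogEvents call
_ = nextToken
```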
", "PutMetricFilter": "Creates or updates a metric filter and associates it with the specified log group. Metric filters allow you to configure rules to extract metric data from log events ingested through PutLogEvents.
The maximum number of metric filters that can be associated with a log group is 100.
", "PutResourcePolicy": "Creates or updates a resource policy allowing other AWS services to put log events to this account, such as Amazon Route 53. An account can have up to 10 resource policies per region.
", "PutRetentionPolicy": "Sets the retention of the specified log group. A retention policy allows you to configure the number of days for which to retain log events in the specified log group.
", @@ -525,14 +525,14 @@ } }, "InvalidSequenceTokenException": { - "base": "The sequence token is not valid.
", + "base": "The sequence token is not valid. You can get the correct sequence token in the expectedSequenceToken
field in the InvalidSequenceTokenException
message.
The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. For more information, see Amazon Resource Names - AWS Key Management Service (AWS KMS).
", + "AssociateKmsKeyRequest$kmsKeyId": "The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. This must be a symmetric CMK. For more information, see Amazon Resource Names - AWS Key Management Service (AWS KMS) and Using Symmetric and Asymmetric Keys.
", "CreateLogGroupRequest$kmsKeyId": "The Amazon Resource Name (ARN) of the CMK to use when encrypting log data. For more information, see Amazon Resource Names - AWS Key Management Service (AWS KMS).
", "LogGroup$kmsKeyId": "The Amazon Resource Name (ARN) of the CMK to use when encrypting log data.
" } diff --git a/models/apis/marketplacecommerceanalytics/2015-07-01/docs-2.json b/models/apis/marketplacecommerceanalytics/2015-07-01/docs-2.json index 8fabcbba9cd..39846aaea97 100644 --- a/models/apis/marketplacecommerceanalytics/2015-07-01/docs-2.json +++ b/models/apis/marketplacecommerceanalytics/2015-07-01/docs-2.json @@ -29,7 +29,7 @@ "DataSetType": { "base": null, "refs": { - "GenerateDataSetRequest$dataSetType": "The desired data set type.
From 2014-07-21 to present: Available daily by 24:00 UTC.
From 2014-07-21 to present: Available daily by 24:00 UTC.
From 2015-01-26 to present: Available daily by 24:00 UTC.
From 2015-01-26 to present: Available daily by 24:00 UTC.
From 2015-01-26 to present: Available daily by 24:00 UTC.
From 2015-01-26 to present: Available daily by 24:00 UTC.
From 2015-01-26 to present: Available daily by 24:00 UTC.
From 2015-01-26 to present: Available daily by 24:00 UTC.
From 2015-02 to 2017-06: Available monthly on the 4th day of the month by 24:00 UTC. Data includes metered transactions (e.g. hourly) from two months prior.
From 2017-07 to present: Available monthly on the 15th day of the month by 24:00 UTC. Data includes metered transactions (e.g. hourly) from one month prior.
From 2015-02 to 2017-06: Available monthly on the 4th day of the month by 24:00 UTC. Data includes up-front software charges (e.g. annual) from one month prior.
From 2017-07 to present: Available monthly on the 15th day of the month by 24:00 UTC. Data includes up-front software charges (e.g. annual) from one month prior.
From 2018-03-15 to present: Available monthly on the 15th day of the month by 24:00 UTC.
From 2018-11-15 to present: Available monthly on the 15th day of the month by 24:00 UTC.
From 2015-01-26 to present: Available every 30 days by 24:00 UTC.
From 2012-04-19 to 2015-01-25: Available every 30 days by 24:00 UTC.
From 2015-01-26 to present: This data set was split into three data sets: disbursed_amount_by_product, disbursed_amount_by_age_of_uncollected_funds, and disbursed_amount_by_age_of_disbursed_funds.
From 2012-09-04 to present: Available every 30 days by 24:00 UTC.
From 2012-04-19 to present: Available every 30 days by 24:00 UTC.
From 2015-01-26 to present: Available every 30 days by 24:00 UTC.
From 2015-01-26 to present: Available every 30 days by 24:00 UTC.
From 2018-04-07 to present: Available every 30 days by 24:00 UTC.
From 2019-10-04 to present: Available every 30 days by 24:00 UTC.
From 2015-10-01 to 2017-06-29: Available daily by 24:00 UTC.
From 2017-06-30 to present: This data set is no longer available.
From 2015-10-01 to 2017-06-29: Available daily by 24:00 UTC.
From 2017-06-30 to present: This data set is no longer available.
From 2015-10-01 to 2017-06-29: Available daily by 24:00 UTC.
From 2017-06-30 to present: This data set is no longer available.
From 2016-12 to 2017-06: Available monthly on the 4th day of the month by 24:00 UTC. Data includes metered transactions (e.g. hourly) from two months prior, and up-front software charges (e.g. annual) from one month prior.
From 2017-06 to present: Available monthly on the 15th day of the month by 24:00 UTC. Data includes metered transactions (e.g. hourly) from one month prior, and up-front software charges (e.g. annual) from one month prior.
From 2017-02-15 to present: Available monthly on the 15th day of the month by 24:00 UTC.
The desired data set type.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available daily by 24:00 UTC.
From 2017-09-15 to present: Available monthly on the 15th day of the month by 24:00 UTC. Data includes metered transactions (e.g. hourly) from one month prior.
From 2017-09-15 to present: Available monthly on the 15th day of the month by 24:00 UTC. Data includes up-front software charges (e.g. annual) from one month prior.
From 2018-03-15 to present: Available monthly on the 15th day of the month by 24:00 UTC.
From 2018-11-15 to present: Available monthly on the 15th day of the month by 24:00 UTC.
From 2017-09-15 to present: Available every 30 days by 24:00 UTC.
From 2017-09-15 to present: Available every 30 days by 24:00 UTC.
From 2017-09-15 to present: Available every 30 days by 24:00 UTC.
From 2017-09-15 to present: Available every 30 days by 24:00 UTC.
From 2017-09-15 to present: Available every 30 days by 24:00 UTC.
From 2018-04-07 to present: Available every 30 days by 24:00 UTC.
From 2019-10-04 to present: Available every 30 days by 24:00 UTC.
From 2017-09-15 to present: Available monthly on the 15th day of the month by 24:00 UTC. Data includes metered transactions (e.g. hourly) from one month prior, and up-front software charges (e.g. annual) from one month prior.
From 2017-09-15 to present: Available monthly on the 15th day of the month by 24:00 UTC.
The time zone to use for the metric. This is useful to enable the model to automatically account for daylight savings time changes if the metric is sensitive to such time changes.
To specify a time zone, use the name of the time zone as specified in the standard tz database. For more information, see tz database.
" } }, + "AnomalyDetectorStateValue": { + "base": null, + "refs": { + "AnomalyDetector$StateValue": "The current status of the anomaly detector's training. The possible values are TRAINED | PENDING_TRAINING | TRAINED_INSUFFICIENT_DATA
A Boolean value indicating if the subscription is enabled. True indicates the subscription is enabled.
", "ModifyDBClusterMessage$ApplyImmediately": "A value that specifies whether the modifications in this request and any pending modifications are asynchronously applied as soon as possible, regardless of the PreferredMaintenanceWindow
setting for the DB cluster. If this parameter is set to false
, changes to the DB cluster are applied during the next maintenance window.
The ApplyImmediately
parameter only affects the NewDBClusterIdentifier
and MasterUserPassword
values. If you set the ApplyImmediately
parameter value to false, then changes to the NewDBClusterIdentifier
and MasterUserPassword
values are applied during the next maintenance window. All other changes are applied immediately, regardless of the value of the ApplyImmediately
parameter.
Default: false
Specifies whether the modifications in this request and any pending modifications are asynchronously applied as soon as possible, regardless of the PreferredMaintenanceWindow
setting for the DB instance.
If this parameter is set to false
, changes to the DB instance are applied during the next maintenance window. Some parameter changes can cause an outage and are applied on the next call to RebootDBInstance, or the next failure reboot.
Default: false
Indicates that major version upgrades are allowed. Changing this parameter doesn't result in an outage and the change is asynchronously applied as soon as possible.
Constraints: This parameter must be set to true when specifying a value for the EngineVersion parameter that is a different major version than the DB instance's current version.
", + "ModifyDBInstanceMessage$AllowMajorVersionUpgrade": "Indicates that major version upgrades are allowed. Changing this parameter doesn't result in an outage and the change is asynchronously applied as soon as possible.
", "OrderableDBInstanceOption$MultiAZCapable": "Indicates whether a DB instance is Multi-AZ capable.
", "OrderableDBInstanceOption$ReadReplicaCapable": "Indicates whether a DB instance can have a Read Replica.
", "OrderableDBInstanceOption$Vpc": "Indicates whether a DB instance is in a VPC.
", @@ -166,7 +166,7 @@ "OrderableDBInstanceOption$SupportsIops": "Indicates whether a DB instance supports provisioned IOPS.
", "OrderableDBInstanceOption$SupportsEnhancedMonitoring": "Indicates whether a DB instance supports Enhanced Monitoring at intervals from 1 to 60 seconds.
", "OrderableDBInstanceOption$SupportsIAMDatabaseAuthentication": "Indicates whether a DB instance supports IAM database authentication.
", - "OrderableDBInstanceOption$SupportsPerformanceInsights": "True if a DB instance supports Performance Insights, otherwise false.
", + "OrderableDBInstanceOption$SupportsPerformanceInsights": "(Not supported by Neptune)
", "Parameter$IsModifiable": " Indicates whether (true
) or not (false
) the parameter can be modified. Some parameters have security or operational implications that prevent them from being changed.
A value that is set to true
to reset all parameters in the DB cluster parameter group to their default values, and false
otherwise. You can't use this parameter if there is a list of parameter names specified for the Parameters
parameter.
Specifies whether (true
) or not (false
) to reset all parameters in the DB parameter group to default values.
Default: true
True to copy all tags from the source DB cluster snapshot to the target DB cluster snapshot, and otherwise false. The default is false.
", "CreateDBClusterMessage$StorageEncrypted": "Specifies whether the DB cluster is encrypted.
", "CreateDBClusterMessage$EnableIAMDatabaseAuthentication": "True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.
Default: false
A value that indicates whether the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.
", "CreateDBInstanceMessage$MultiAZ": "Specifies if the DB instance is a Multi-AZ deployment. You can't set the AvailabilityZone parameter if the MultiAZ parameter is set to true.
", "CreateDBInstanceMessage$AutoMinorVersionUpgrade": "Indicates that minor engine upgrades are applied automatically to the DB instance during the maintenance window.
Default: true
This flag should no longer be used.
", "CreateDBInstanceMessage$StorageEncrypted": "Specifies whether the DB instance is encrypted.
Not applicable. The encryption for DB instances is managed by the DB cluster. For more information, see CreateDBCluster.
Default: false
", "CreateDBInstanceMessage$CopyTagsToSnapshot": "True to copy all tags from the DB instance to snapshots of the DB instance, and otherwise false. The default is false.
", "CreateDBInstanceMessage$EnableIAMDatabaseAuthentication": "True to enable AWS Identity and Access Management (IAM) authentication for Neptune.
Default: false
True to enable Performance Insights for the DB instance, and otherwise false.
", + "CreateDBInstanceMessage$EnablePerformanceInsights": "(Not supported by Neptune)
", + "CreateDBInstanceMessage$DeletionProtection": "A value that indicates whether the DB instance has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.
You can enable or disable deletion protection for the DB cluster. For more information, see CreateDBCluster. DB instances in a DB cluster can be deleted even when deletion protection is enabled for the DB cluster.
", "CreateEventSubscriptionMessage$Enabled": "A Boolean value; set to true to activate the subscription, set to false to create the subscription but not active it.
", - "DBInstance$PerformanceInsightsEnabled": "True if Performance Insights is enabled for the DB instance, and otherwise false.
", + "DBCluster$DeletionProtection": "Indicates if the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled.
", + "DBInstance$PerformanceInsightsEnabled": "(Not supported by Neptune)
", + "DBInstance$DeletionProtection": "Indicates if the DB instance has deletion protection enabled. The database can't be deleted when deletion protection is enabled.
", "DescribeDBEngineVersionsMessage$ListSupportedCharacterSets": "If this parameter is specified and the requested engine supports the CharacterSetName
parameter for CreateDBInstance
, the response includes a list of supported character sets for each engine version.
If this parameter is specified and the requested engine supports the TimeZone
parameter for CreateDBInstance
, the response includes a list of supported time zones for each engine version.
The VPC filter value. Specify this parameter to show only the available VPC or non-VPC offerings.
", "ModifyDBClusterMessage$EnableIAMDatabaseAuthentication": "True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.
Default: false
A value that indicates whether the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.
", "ModifyDBInstanceMessage$MultiAZ": "Specifies if the DB instance is a Multi-AZ deployment. Changing this parameter doesn't result in an outage and the change is applied during the next maintenance window unless the ApplyImmediately
parameter is set to true
for this request.
Indicates that minor version upgrades are applied automatically to the DB instance during the maintenance window. Changing this parameter doesn't result in an outage except in the following case and the change is asynchronously applied as soon as possible. An outage will result if this parameter is set to true
during the maintenance window, and a newer minor version is available, and Neptune has enabled auto patching for that engine version.
True to copy all tags from the DB instance to snapshots of the DB instance, and otherwise false. The default is false.
", "ModifyDBInstanceMessage$PubliclyAccessible": "This flag should no longer be used.
", "ModifyDBInstanceMessage$EnableIAMDatabaseAuthentication": "True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.
You can enable IAM database authentication for the following database engines
Not applicable. Mapping AWS IAM accounts to database accounts is managed by the DB cluster. For more information, see ModifyDBCluster.
Default: false
Not supported.
", + "ModifyDBInstanceMessage$EnablePerformanceInsights": "(Not supported by Neptune)
", + "ModifyDBInstanceMessage$DeletionProtection": "A value that indicates whether the DB instance has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.
", "ModifyEventSubscriptionMessage$Enabled": "A Boolean value; set to true to activate the subscription.
", "PendingModifiedValues$MultiAZ": "Indicates that the Single-AZ DB instance is to change to a Multi-AZ deployment.
", "RebootDBInstanceMessage$ForceFailover": " When true
, the reboot is conducted through a MultiAZ failover.
Constraint: You can't specify true
if the instance is not configured for MultiAZ.
True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.
Default: false
True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.
Default: false
A value that indicates whether the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.
", + "RestoreDBClusterToPointInTimeMessage$EnableIAMDatabaseAuthentication": "True to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts, and otherwise false.
Default: false
A value that indicates whether the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.
" } }, "CertificateNotFoundFault": { @@ -215,7 +223,7 @@ "CharacterSet": { "base": "Specifies a character set.
", "refs": { - "DBEngineVersion$DefaultCharacterSet": " The default character set for new instances of this engine version, if the CharacterSetName
parameter of the CreateDBInstance API is not specified.
(Not supported by Neptune)
", "SupportedCharacterSetsList$member": null } }, @@ -375,7 +383,7 @@ "DBClusterOptionGroupMemberships": { "base": null, "refs": { - "DBCluster$DBClusterOptionGroupMemberships": "Provides the list of option group memberships for this DB cluster.
" + "DBCluster$DBClusterOptionGroupMemberships": "(Not supported by Neptune)
" } }, "DBClusterOptionGroupStatus": { @@ -1286,7 +1294,7 @@ "OptionGroupMembershipList": { "base": null, "refs": { - "DBInstance$OptionGroupMemberships": "Provides the list of option group memberships for this DB instance.
" + "DBInstance$OptionGroupMemberships": "(Not supported by Neptune)
" } }, "OptionGroupNotFoundFault": { @@ -1332,7 +1340,7 @@ "PendingCloudwatchLogsExports": { "base": "A list of the log types whose configuration is still pending. In other words, these log types are in the process of being activated or deactivated.
", "refs": { - "PendingModifiedValues$PendingCloudwatchLogsExports": "Specifies the CloudWatch logs to be exported.
" + "PendingModifiedValues$PendingCloudwatchLogsExports": "This PendingCloudwatchLogsExports
structure specifies pending changes to which CloudWatch logs are enabled and which are disabled.
A description for the copied DB cluster parameter group.
", "CopyDBClusterSnapshotMessage$SourceDBClusterSnapshotIdentifier": "The identifier of the DB cluster snapshot to copy. This parameter is not case-sensitive.
You can't copy from one AWS Region to another.
Constraints:
Must specify a valid system snapshot in the \"available\" state.
Specify a valid DB snapshot identifier.
Example: my-cluster-snapshot1
The identifier of the new DB cluster snapshot to create from the source DB cluster snapshot. This parameter is not case-sensitive.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Example: my-cluster-snapshot2
The AWS AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you copy an unencrypted DB cluster snapshot and specify a value for the KmsKeyId
parameter, Amazon Neptune encrypts the target DB cluster snapshot using the specified KMS encryption key.
If you copy an encrypted DB cluster snapshot from your AWS account, you can specify a value for KmsKeyId
to encrypt the copy with a new KMS encryption key. If you don't specify a value for KmsKeyId
, then the copy of the DB cluster snapshot is encrypted with the same KMS key as the source DB cluster snapshot.
If you copy an encrypted DB cluster snapshot that is shared from another AWS account, then you must specify a value for KmsKeyId
.
KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.
", + "CopyDBClusterSnapshotMessage$KmsKeyId": "The AWS AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you copy an encrypted DB cluster snapshot from your AWS account, you can specify a value for KmsKeyId
to encrypt the copy with a new KMS encryption key. If you don't specify a value for KmsKeyId
, then the copy of the DB cluster snapshot is encrypted with the same KMS key as the source DB cluster snapshot.
If you copy an encrypted DB cluster snapshot that is shared from another AWS account, then you must specify a value for KmsKeyId
.
KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.
You cannot encrypt an unencrypted DB cluster snapshot when you copy it. If you try to copy an unencrypted DB cluster snapshot and specify a value for the KmsKeyId parameter, an error is returned.
", "CopyDBClusterSnapshotMessage$PreSignedUrl": "Not currently supported.
", "CopyDBParameterGroupMessage$SourceDBParameterGroupIdentifier": "The identifier or ARN for the source DB parameter group. For information about creating an ARN, see Constructing an Amazon Resource Name (ARN).
Constraints:
Must specify a valid DB parameter group.
Must specify a valid DB parameter group identifier, for example my-db-param-group
, or a valid ARN.
The identifier for the copied DB parameter group.
Constraints:
Cannot be null, empty, or blank.
Must contain from 1 to 255 letters, numbers, or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Example: my-db-parameter-group
A description for the copied DB parameter group.
", - "CreateDBClusterMessage$CharacterSetName": "A value that indicates that the DB cluster should be associated with the specified CharacterSet.
", + "CreateDBClusterMessage$CharacterSetName": "(Not supported by Neptune)
", "CreateDBClusterMessage$DatabaseName": "The name for your database of up to 64 alpha-numeric characters. If you do not provide a name, Amazon Neptune will not create a database in the DB cluster you are creating.
", "CreateDBClusterMessage$DBClusterIdentifier": "The DB cluster identifier. This parameter is stored as a lowercase string.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Example: my-cluster1
The name of the DB cluster parameter group to associate with this DB cluster. If this argument is omitted, the default is used.
Constraints:
If supplied, must match the name of an existing DBClusterParameterGroup.
A DB subnet group to associate with this DB cluster.
Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.
Example: mySubnetgroup
The name of the database engine to be used for this DB cluster.
Valid Values: neptune
The version number of the database engine to use.
Example: 1.0.1
The version number of the database engine to use. Currently, setting this parameter has no effect.
Example: 1.0.1
The name of the master user for the DB cluster.
Constraints:
Must be 1 to 16 letters or numbers.
First character must be a letter.
Cannot be a reserved word for the chosen database engine.
The password for the master database user. This password can contain any printable ASCII character except \"/\", \"\"\", or \"@\".
Constraints: Must contain from 8 to 41 characters.
", - "CreateDBClusterMessage$OptionGroupName": "A value that indicates that the DB cluster should be associated with the specified option group.
Permanent options can't be removed from an option group. The option group can't be removed from a DB cluster once it is associated with a DB cluster.
", + "CreateDBClusterMessage$OptionGroupName": "(Not supported by Neptune)
", "CreateDBClusterMessage$PreferredBackupWindow": "The daily time range during which automated backups are created if automated backups are enabled using the BackupRetentionPeriod
parameter.
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon Neptune User Guide.
Constraints:
Must be in the format hh24:mi-hh24:mi
.
Must be in Universal Coordinated Time (UTC).
Must not conflict with the preferred maintenance window.
Must be at least 30 minutes.
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: ddd:hh24:mi-ddd:hh24:mi
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week. To see the time blocks available, see Adjusting the Preferred Maintenance Window in the Amazon Neptune User Guide.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
Constraints: Minimum 30-minute window.
", "CreateDBClusterMessage$ReplicationSourceIdentifier": "The Amazon Resource Name (ARN) of the source DB instance or DB cluster if this DB cluster is created as a Read Replica.
", @@ -1593,10 +1601,10 @@ "CreateDBInstanceMessage$PreferredMaintenanceWindow": "The time range each week during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: ddd:hh24:mi-ddd:hh24:mi
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
Constraints: Minimum 30-minute window.
", "CreateDBInstanceMessage$DBParameterGroupName": "The name of the DB parameter group to associate with this DB instance. If this argument is omitted, the default DBParameterGroup for the specified engine is used.
Constraints:
Must be 1 to 255 letters, numbers, or hyphens.
First character must be a letter
Cannot end with a hyphen or contain two consecutive hyphens
The daily time range during which automated backups are created.
Not applicable. The daily time range for creating automated backups is managed by the DB cluster. For more information, see CreateDBCluster.
", - "CreateDBInstanceMessage$EngineVersion": "The version number of the database engine to use.
", + "CreateDBInstanceMessage$EngineVersion": "The version number of the database engine to use. Currently, setting this parameter has no effect.
", "CreateDBInstanceMessage$LicenseModel": "License model information for this DB instance.
Valid values: license-included
| bring-your-own-license
| general-public-license
Indicates that the DB instance should be associated with the specified option group.
Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance
", - "CreateDBInstanceMessage$CharacterSetName": "Indicates that the DB instance should be associated with the specified CharacterSet.
Not applicable. The character set is managed by the DB cluster. For more information, see CreateDBCluster.
", + "CreateDBInstanceMessage$OptionGroupName": "(Not supported by Neptune)
", + "CreateDBInstanceMessage$CharacterSetName": "(Not supported by Neptune)
", "CreateDBInstanceMessage$DBClusterIdentifier": "The identifier of the DB cluster that the instance will belong to.
For information on creating a DB cluster, see CreateDBCluster.
Type: String
", "CreateDBInstanceMessage$StorageType": "Specifies the storage type to be associated with the DB instance.
Not applicable. Storage is managed by the DB Cluster.
", "CreateDBInstanceMessage$TdeCredentialArn": "The ARN from the key store with which to associate the instance for TDE encryption.
", @@ -1606,7 +1614,7 @@ "CreateDBInstanceMessage$MonitoringRoleArn": "The ARN for the IAM role that permits Neptune to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess
.
If MonitoringInterval
is set to a value other than 0, then you must supply a MonitoringRoleArn
value.
Specify the name of the IAM role to be used when making API calls to the Directory Service.
", "CreateDBInstanceMessage$Timezone": "The time zone of the DB instance.
", - "CreateDBInstanceMessage$PerformanceInsightsKMSKeyId": "The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
", + "CreateDBInstanceMessage$PerformanceInsightsKMSKeyId": "(Not supported by Neptune)
", "CreateDBParameterGroupMessage$DBParameterGroupName": "The name of the DB parameter group.
Constraints:
Must be 1 to 255 letters, numbers, or hyphens.
First character must be a letter
Cannot end with a hyphen or contain two consecutive hyphens
This value is stored as a lowercase string.
The DB parameter group family name. A DB parameter group can be associated with one and only one DB parameter group family, and can be applied only to a DB instance running a database engine and engine version compatible with that DB parameter group family.
", "CreateDBParameterGroupMessage$Description": "The description for the DB parameter group.
", @@ -1615,7 +1623,7 @@ "CreateEventSubscriptionMessage$SubscriptionName": "The name of the subscription.
Constraints: The name must be less than 255 characters.
", "CreateEventSubscriptionMessage$SnsTopicArn": "The Amazon Resource Name (ARN) of the SNS topic created for event notification. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
", "CreateEventSubscriptionMessage$SourceType": "The type of source that is generating the events. For example, if you want to be notified of events generated by a DB instance, you would set this parameter to db-instance. if this value is not specified, all events are returned.
Valid values: db-instance
| db-cluster
| db-parameter-group
| db-security-group
| db-snapshot
| db-cluster-snapshot
If present, specifies the name of the character set that this cluster is associated with.
", + "DBCluster$CharacterSetName": "(Not supported by Neptune)
", "DBCluster$DatabaseName": "Contains the name of the initial database of this DB cluster that was provided at create time, if one was specified when the DB cluster was created. This same name is returned for the life of the DB cluster.
", "DBCluster$DBClusterIdentifier": "Contains a user-supplied DB cluster identifier. This identifier is the unique key that identifies a DB cluster.
", "DBCluster$DBClusterParameterGroup": "Specifies the name of the DB cluster parameter group for the DB cluster.
", @@ -1682,7 +1690,7 @@ "DBInstance$EngineVersion": "Indicates the database engine version.
", "DBInstance$ReadReplicaSourceDBInstanceIdentifier": "Contains the identifier of the source DB instance if this DB instance is a Read Replica.
", "DBInstance$LicenseModel": "License model information for this DB instance.
", - "DBInstance$CharacterSetName": "If present, specifies the name of the character set that this instance is associated with.
", + "DBInstance$CharacterSetName": "(Not supported by Neptune)
", "DBInstance$SecondaryAvailabilityZone": "If present, specifies the name of the secondary Availability Zone for a DB instance with multi-AZ support.
", "DBInstance$StorageType": "Specifies the storage type associated with DB instance.
", "DBInstance$TdeCredentialArn": "The ARN from the key store with which the instance is associated for TDE encryption.
", @@ -1694,7 +1702,7 @@ "DBInstance$MonitoringRoleArn": "The ARN for the IAM role that permits Neptune to send Enhanced Monitoring metrics to Amazon CloudWatch Logs.
", "DBInstance$DBInstanceArn": "The Amazon Resource Name (ARN) for the DB instance.
", "DBInstance$Timezone": "Not supported.
", - "DBInstance$PerformanceInsightsKMSKeyId": "The AWS KMS key identifier for encryption of Performance Insights data. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
", + "DBInstance$PerformanceInsightsKMSKeyId": "(Not supported by Neptune)
", "DBInstanceMessage$Marker": " An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
This value is currently \"read replication.\"
", "DBInstanceStatusInfo$Status": "Status of the DB instance. For a StatusType of read replica, the values can be replicating, error, stopped, or terminated.
", @@ -1801,10 +1809,10 @@ "ModifyDBClusterMessage$NewDBClusterIdentifier": "The new DB cluster identifier for the DB cluster when renaming a DB cluster. This value is stored as a lowercase string.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens
The first character must be a letter
Cannot end with a hyphen or contain two consecutive hyphens
Example: my-cluster2
The name of the DB cluster parameter group to use for the DB cluster.
", "ModifyDBClusterMessage$MasterUserPassword": "The new password for the master database user. This password can contain any printable ASCII character except \"/\", \"\"\", or \"@\".
Constraints: Must contain from 8 to 41 characters.
", - "ModifyDBClusterMessage$OptionGroupName": "A value that indicates that the DB cluster should be associated with the specified option group. Changing this parameter doesn't result in an outage except in the following case, and the change is applied during the next maintenance window unless the ApplyImmediately
parameter is set to true
for this request. If the parameter change results in an option group that enables OEM, this change can cause a brief (sub-second) period during which new connections are rejected but existing connections are not interrupted.
Permanent options can't be removed from an option group. The option group can't be removed from a DB cluster once it is associated with a DB cluster.
", + "ModifyDBClusterMessage$OptionGroupName": "(Not supported by Neptune)
", "ModifyDBClusterMessage$PreferredBackupWindow": "The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod
parameter.
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region.
Constraints:
Must be in the format hh24:mi-hh24:mi
.
Must be in Universal Coordinated Time (UTC).
Must not conflict with the preferred maintenance window.
Must be at least 30 minutes.
The weekly time range during which system maintenance can occur, in Universal Coordinated Time (UTC).
Format: ddd:hh24:mi-ddd:hh24:mi
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region, occurring on a random day of the week.
Valid Days: Mon, Tue, Wed, Thu, Fri, Sat, Sun.
Constraints: Minimum 30-minute window.
", - "ModifyDBClusterMessage$EngineVersion": "The version number of the database engine to which you want to upgrade. Changing this parameter results in an outage. The change is applied during the next maintenance window unless the ApplyImmediately parameter is set to true.
For a list of valid engine versions, see CreateDBInstance, or call DescribeDBEngineVersions.
", + "ModifyDBClusterMessage$EngineVersion": "The version number of the database engine. Currently, setting this parameter has no effect. To upgrade your database engine to the most recent release, use the ApplyPendingMaintenanceAction API.
For a list of valid engine versions, see CreateDBInstance, or call DescribeDBEngineVersions.
", "ModifyDBClusterParameterGroupMessage$DBClusterParameterGroupName": "The name of the DB cluster parameter group to modify.
", "ModifyDBClusterSnapshotAttributeMessage$DBClusterSnapshotIdentifier": "The identifier for the DB cluster snapshot to modify the attributes for.
", "ModifyDBClusterSnapshotAttributeMessage$AttributeName": "The name of the DB cluster snapshot attribute to modify.
To manage authorization for other AWS accounts to copy or restore a manual DB cluster snapshot, set this value to restore
.
The name of the DB parameter group to apply to the DB instance. Changing this setting doesn't result in an outage. The parameter group name itself is changed immediately, but the actual parameter changes are not applied until you reboot the instance without failover. The db instance will NOT be rebooted automatically and the parameter changes will NOT be applied during the next maintenance window.
Default: Uses existing setting
Constraints: The DB parameter group must be in the same DB parameter group family as this DB instance.
", "ModifyDBInstanceMessage$PreferredBackupWindow": "The daily time range during which automated backups are created if automated backups are enabled.
Not applicable. The daily time range for creating automated backups is managed by the DB cluster. For more information, see ModifyDBCluster.
Constraints:
Must be in the format hh24:mi-hh24:mi
Must be in Universal Time Coordinated (UTC)
Must not conflict with the preferred maintenance window
Must be at least 30 minutes
The weekly time range (in UTC) during which system maintenance can occur, which might result in an outage. Changing this parameter doesn't result in an outage, except in the following situation, and the change is asynchronously applied as soon as possible. If there are pending actions that cause a reboot, and the maintenance window is changed to include the current time, then changing this parameter will cause a reboot of the DB instance. If moving this window to the current time, there must be at least 30 minutes between the current time and end of the window to ensure pending changes are applied.
Default: Uses existing setting
Format: ddd:hh24:mi-ddd:hh24:mi
Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
Constraints: Must be at least 30 minutes
", - "ModifyDBInstanceMessage$EngineVersion": " The version number of the database engine to upgrade to. Changing this parameter results in an outage and the change is applied during the next maintenance window unless the ApplyImmediately
parameter is set to true
for this request.
For major version upgrades, if a nondefault DB parameter group is currently in use, a new DB parameter group in the DB parameter group family for the new engine version must be specified. The new DB parameter group can be the default for that DB parameter group family.
", + "ModifyDBInstanceMessage$EngineVersion": "The version number of the database engine to upgrade to. Currently, setting this parameter has no effect. To upgrade your database engine to the most recent release, use the ApplyPendingMaintenanceAction API.
", "ModifyDBInstanceMessage$LicenseModel": "Not supported.
", - "ModifyDBInstanceMessage$OptionGroupName": " Indicates that the DB instance should be associated with the specified option group. Changing this parameter doesn't result in an outage except in the following case and the change is applied during the next maintenance window unless the ApplyImmediately
parameter is set to true
for this request. If the parameter change results in an option group that enables OEM, this change can cause a brief (sub-second) period during which new connections are rejected but existing connections are not interrupted.
Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance
", + "ModifyDBInstanceMessage$OptionGroupName": "(Not supported by Neptune)
", "ModifyDBInstanceMessage$NewDBInstanceIdentifier": " The new DB instance identifier for the DB instance when renaming a DB instance. When you change the DB instance identifier, an instance reboot will occur immediately if you set Apply Immediately
to true, or will occur during the next maintenance window if Apply Immediately
to false. This value is stored as a lowercase string.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens.
The first character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Example: mydbinstance
Not supported.
", "ModifyDBInstanceMessage$TdeCredentialArn": "The ARN from the key store with which to associate the instance for TDE encryption.
", @@ -1826,7 +1834,7 @@ "ModifyDBInstanceMessage$Domain": "Not supported.
", "ModifyDBInstanceMessage$MonitoringRoleArn": "The ARN for the IAM role that permits Neptune to send enhanced monitoring metrics to Amazon CloudWatch Logs. For example, arn:aws:iam:123456789012:role/emaccess
.
If MonitoringInterval
is set to a value other than 0, then you must supply a MonitoringRoleArn
value.
Not supported
", - "ModifyDBInstanceMessage$PerformanceInsightsKMSKeyId": "Not supported.
", + "ModifyDBInstanceMessage$PerformanceInsightsKMSKeyId": "(Not supported by Neptune)
", "ModifyDBParameterGroupMessage$DBParameterGroupName": "The name of the DB parameter group.
Constraints:
If supplied, must match the name of an existing DBParameterGroup.
The name for the DB subnet group. This value is stored as a lowercase string. You can't modify the default subnet group.
Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.
Example: mySubnetgroup
The description for the DB subnet group.
", @@ -1880,14 +1888,14 @@ "RestoreDBClusterFromSnapshotMessage$EngineVersion": "The version of the database engine to use for the new DB cluster.
", "RestoreDBClusterFromSnapshotMessage$DBSubnetGroupName": "The name of the DB subnet group to use for the new DB cluster.
Constraints: If supplied, must match the name of an existing DBSubnetGroup.
Example: mySubnetgroup
Not supported.
", - "RestoreDBClusterFromSnapshotMessage$OptionGroupName": "The name of the option group to use for the restored DB cluster.
", + "RestoreDBClusterFromSnapshotMessage$OptionGroupName": "(Not supported by Neptune)
", "RestoreDBClusterFromSnapshotMessage$KmsKeyId": "The AWS KMS key identifier to use when restoring an encrypted DB cluster from a DB snapshot or DB cluster snapshot.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
If you do not specify a value for the KmsKeyId
parameter, then the following will occur:
If the DB snapshot or DB cluster snapshot in SnapshotIdentifier
is encrypted, then the restored DB cluster is encrypted using the KMS key that was used to encrypt the DB snapshot or DB cluster snapshot.
If the DB snapshot or DB cluster snapshot in SnapshotIdentifier
is not encrypted, then the restored DB cluster is not encrypted.
The name of the DB cluster parameter group to associate with the new DB cluster.
Constraints:
If supplied, must match the name of an existing DBClusterParameterGroup.
The name of the new DB cluster to be created.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens
First character must be a letter
Cannot end with a hyphen or contain two consecutive hyphens
The type of restore to be performed. You can specify one of the following values:
full-copy
- The new DB cluster is restored as a full copy of the source DB cluster.
copy-on-write
- The new DB cluster is restored as a clone of the source DB cluster.
If you don't specify a RestoreType
value, then the new DB cluster is restored as a full copy of the source DB cluster.
The identifier of the source DB cluster from which to restore.
Constraints:
Must match the identifier of an existing DBCluster.
The DB subnet group name to use for the new DB cluster.
Constraints: If supplied, must match the name of an existing DBSubnetGroup.
Example: mySubnetgroup
The name of the option group for the new DB cluster.
", + "RestoreDBClusterToPointInTimeMessage$OptionGroupName": "(Not supported by Neptune)
", "RestoreDBClusterToPointInTimeMessage$KmsKeyId": "The AWS KMS key identifier to use when restoring an encrypted DB cluster from an encrypted DB cluster.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are restoring a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
You can restore to a new DB cluster and encrypt the new DB cluster with a KMS key that is different than the KMS key used to encrypt the source DB cluster. The new DB cluster is encrypted with the KMS key identified by the KmsKeyId
parameter.
If you do not specify a value for the KmsKeyId
parameter, then the following will occur:
If the DB cluster is encrypted, then the restored DB cluster is encrypted using the KMS key that was used to encrypt the source DB cluster.
If the DB cluster is not encrypted, then the restored DB cluster is not encrypted.
If DBClusterIdentifier
refers to a DB cluster that is not encrypted, then the restore request is rejected.
The name of the DB cluster parameter group to associate with the new DB cluster.
Constraints:
If supplied, must match the name of an existing DBClusterParameterGroup.
A list of the character sets supported by this engine for the CharacterSetName
parameter of the CreateDBInstance
action.
(Not supported by Neptune)
" } }, "SupportedTimezonesList": { diff --git a/models/apis/opsworkscm/2016-11-01/api-2.json b/models/apis/opsworkscm/2016-11-01/api-2.json index cdf9aa01e79..665f471ffd4 100644 --- a/models/apis/opsworkscm/2016-11-01/api-2.json +++ b/models/apis/opsworkscm/2016-11-01/api-2.json @@ -178,6 +178,19 @@ {"shape":"InvalidStateException"} ] }, + "ListTagsForResource":{ + "name":"ListTagsForResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTagsForResourceRequest"}, + "output":{"shape":"ListTagsForResourceResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"} + ] + }, "RestoreServer":{ "name":"RestoreServer", "http":{ @@ -206,6 +219,34 @@ {"shape":"ValidationException"} ] }, + "TagResource":{ + "name":"TagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"TagResourceRequest"}, + "output":{"shape":"TagResourceResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"InvalidStateException"} + ] + }, + "UntagResource":{ + "name":"UntagResource", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UntagResourceRequest"}, + "output":{"shape":"UntagResourceResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"ValidationException"}, + {"shape":"InvalidStateException"} + ] + }, "UpdateServer":{ "name":"UpdateServer", "http":{ @@ -236,6 +277,10 @@ } }, "shapes":{ + "AWSOpsWorksCMResourceArn":{ + "type":"string", + "pattern":"arn:aws.*:opsworks-cm:.*:[0-9]{12}:.*" + }, "AccountAttribute":{ "type":"structure", "members":{ @@ -348,7 +393,8 @@ "required":["ServerName"], "members":{ "ServerName":{"shape":"ServerName"}, - "Description":{"shape":"String"} + "Description":{"shape":"String"}, + "Tags":{"shape":"TagList"} } }, "CreateBackupResponse":{ @@ -385,6 +431,7 @@ "SecurityGroupIds":{"shape":"Strings"}, "ServiceRoleArn":{"shape":"ServiceRoleArn"}, "SubnetIds":{"shape":"Strings"}, + "Tags":{"shape":"TagList"}, "BackupId":{"shape":"BackupId"} } }, @@ -601,6 +648,22 @@ }, "exception":true }, + "ListTagsForResourceRequest":{ + "type":"structure", + "required":["ResourceArn"], + "members":{ + "ResourceArn":{"shape":"AWSOpsWorksCMResourceArn"}, + "NextToken":{"shape":"NextToken"}, + "MaxResults":{"shape":"MaxResults"} + } + }, + "ListTagsForResourceResponse":{ + "type":"structure", + "members":{ + "Tags":{"shape":"TagList"}, + "NextToken":{"shape":"NextToken"} + } + }, "MaintenanceStatus":{ "type":"string", "enum":[ @@ -765,12 +828,79 @@ "type":"list", "member":{"shape":"String"} }, + "Tag":{ + "type":"structure", + "required":[ + "Key", + "Value" + ], + "members":{ + "Key":{"shape":"TagKey"}, + "Value":{"shape":"TagValue"} + } + }, + "TagKey":{ + "type":"string", + "max":128, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:\\/=+\\\\\\-@]*)$" + }, + "TagKeyList":{ + "type":"list", + "member":{"shape":"TagKey"}, + "max":200, + "min":0 + }, + "TagList":{ + "type":"list", + "member":{"shape":"Tag"}, + "max":200, + "min":0 + }, + "TagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "Tags" + ], + "members":{ + "ResourceArn":{"shape":"AWSOpsWorksCMResourceArn"}, + "Tags":{"shape":"TagList"} + } + }, + "TagResourceResponse":{ + "type":"structure", + "members":{ + } + }, + "TagValue":{ + "type":"string", + "max":256, + "min":0, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:\\/=+\\\\\\-@]*)$" + }, "TimeWindowDefinition":{ "type":"string", "max":10000, 
"pattern":"^((Mon|Tue|Wed|Thu|Fri|Sat|Sun):)?([0-1][0-9]|2[0-3]):[0-5][0-9]$" }, "Timestamp":{"type":"timestamp"}, + "UntagResourceRequest":{ + "type":"structure", + "required":[ + "ResourceArn", + "TagKeys" + ], + "members":{ + "ResourceArn":{"shape":"AWSOpsWorksCMResourceArn"}, + "TagKeys":{"shape":"TagKeyList"} + } + }, + "UntagResourceResponse":{ + "type":"structure", + "members":{ + } + }, "UpdateServerEngineAttributesRequest":{ "type":"structure", "required":[ diff --git a/models/apis/opsworkscm/2016-11-01/docs-2.json b/models/apis/opsworkscm/2016-11-01/docs-2.json index 68b2cecaa51..cff19e613f4 100644 --- a/models/apis/opsworkscm/2016-11-01/docs-2.json +++ b/models/apis/opsworkscm/2016-11-01/docs-2.json @@ -14,12 +14,23 @@ "DescribeServers": "Lists all configuration management servers that are identified with your account. Only the stored results from Amazon DynamoDB are returned. AWS OpsWorks CM does not query other services.
This operation is synchronous.
A ResourceNotFoundException
is thrown when the server does not exist. A ValidationException
is raised when parameters of the request are not valid.
Disassociates a node from an AWS OpsWorks CM server, and removes the node from the server's managed nodes. After a node is disassociated, the node key pair is no longer valid for accessing the configuration manager's API. For more information about how to associate a node, see AssociateNode.
A node can can only be disassociated from a server that is in a HEALTHY
state. Otherwise, an InvalidStateException
is thrown. A ResourceNotFoundException
is thrown when the server does not exist. A ValidationException
is raised when parameters of the request are not valid.
Exports a specified server engine attribute as a base64-encoded string. For example, you can export user data that you can use in EC2 to associate nodes with a server.
This operation is synchronous.
A ValidationException
is raised when parameters of the request are not valid. A ResourceNotFoundException
is thrown when the server does not exist. An InvalidStateException
is thrown when the server is in any of the following states: CREATING, TERMINATED, FAILED or DELETING.
Restores a backup to a server that is in a CONNECTION_LOST
, HEALTHY
, RUNNING
, UNHEALTHY
, or TERMINATED
state. When you run RestoreServer, the server's EC2 instance is deleted, and a new EC2 instance is configured. RestoreServer maintains the existing server endpoint, so configuration management of the server's client devices (nodes) should continue to work.
This operation is asynchronous.
An InvalidStateException
is thrown when the server is not in a valid state. A ResourceNotFoundException
is thrown when the server does not exist. A ValidationException
is raised when parameters of the request are not valid.
Returns a list of tags that are applied to the specified AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise servers or backups.
", + "RestoreServer": " Restores a backup to a server that is in a CONNECTION_LOST
, HEALTHY
, RUNNING
, UNHEALTHY
, or TERMINATED
state. When you run RestoreServer, the server's EC2 instance is deleted, and a new EC2 instance is configured. RestoreServer maintains the existing server endpoint, so configuration management of the server's client devices (nodes) should continue to work.
Restoring from a backup is performed by creating a new EC2 instance. If restoration is successful, and the server is in a HEALTHY
state, AWS OpsWorks CM switches traffic over to the new instance. After restoration is finished, the old EC2 instance is maintained in a Running
or Stopped
state, but is eventually terminated.
This operation is asynchronous.
An InvalidStateException
is thrown when the server is not in a valid state. A ResourceNotFoundException
is thrown when the server does not exist. A ValidationException
is raised when parameters of the request are not valid.
Manually starts server maintenance. This command can be useful if an earlier maintenance attempt failed, and the underlying cause of maintenance failure has been resolved. The server is in an UNDER_MAINTENANCE
state while maintenance is in progress.
Maintenance can only be started on servers in HEALTHY
and UNHEALTHY
states. Otherwise, an InvalidStateException
is thrown. A ResourceNotFoundException
is thrown when the server does not exist. A ValidationException
is raised when parameters of the request are not valid.
Applies tags to an AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise server, or to server backups.
", + "UntagResource": "Removes specified tags from an AWS OpsWorks-CM server or backup.
", "UpdateServer": "Updates settings for a server.
This operation is synchronous.
", "UpdateServerEngineAttributes": " Updates engine-specific attributes on a specified server. The server enters the MODIFYING
state when this operation is in progress. Only one update can occur at a time. You can use this command to reset a Chef server's public key (CHEF_PIVOTAL_KEY
) or a Puppet server's admin password (PUPPET_ADMIN_PASSWORD
).
This operation is asynchronous.
This operation can only be called for servers in HEALTHY
or UNHEALTHY
states. Otherwise, an InvalidStateException
is raised. A ResourceNotFoundException
is thrown when the server does not exist. A ValidationException
is raised when parameters of the request are not valid.
The Amazon Resource Number (ARN) of an AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise server for which you want to show applied tags. For example, arn:aws:opsworks-cm:us-west-2:123456789012:server/test-owcm-server/EXAMPLE-66b0-4196-8274-d1a2bEXAMPLE
.
The Amazon Resource Number (ARN) of a resource to which you want to apply tags. For example, arn:aws:opsworks-cm:us-west-2:123456789012:server/test-owcm-server/EXAMPLE-66b0-4196-8274-d1a2bEXAMPLE
.
The Amazon Resource Number (ARN) of a resource from which you want to remove tags. For example, arn:aws:opsworks-cm:us-west-2:123456789012:server/test-owcm-server/EXAMPLE-66b0-4196-8274-d1a2bEXAMPLE
.
Stores account attributes.
", "refs": { @@ -128,20 +139,20 @@ "CustomCertificate": { "base": null, "refs": { - "CreateServerRequest$CustomCertificate": "A PEM-formatted HTTPS certificate. The value can be be a single, self-signed certificate, or a certificate chain. If you specify a custom certificate, you must also specify values for CustomDomain
and CustomPrivateKey
. The following are requirements for the CustomCertificate
value:
You can provide either a self-signed, custom certificate, or the full certificate chain.
The certificate must be a valid X509 certificate, or a certificate chain in PEM format.
The certificate must be valid at the time of upload. A certificate can't be used before its validity period begins (the certificate's NotBefore
date), or after it expires (the certificate's NotAfter
date).
The certificate’s common name or subject alternative names (SANs), if present, must match the value of CustomDomain
.
The certificate must match the value of CustomPrivateKey
.
Supported on servers running Chef Automate 2. A PEM-formatted HTTPS certificate. The value can be be a single, self-signed certificate, or a certificate chain. If you specify a custom certificate, you must also specify values for CustomDomain
and CustomPrivateKey
. The following are requirements for the CustomCertificate
value:
You can provide either a self-signed, custom certificate, or the full certificate chain.
The certificate must be a valid X509 certificate, or a certificate chain in PEM format.
The certificate must be valid at the time of upload. A certificate can't be used before its validity period begins (the certificate's NotBefore
date), or after it expires (the certificate's NotAfter
date).
The certificate’s common name or subject alternative names (SANs), if present, must match the value of CustomDomain
.
The certificate must match the value of CustomPrivateKey
.
An optional public endpoint of a server, such as https://aws.my-company.com
. To access the server, create a CNAME DNS record in your preferred DNS service that points the custom domain to the endpoint that is generated when the server is created (the value of the CreateServer Endpoint attribute). You cannot access the server by using the generated Endpoint
value if the server is using a custom domain. If you specify a custom domain, you must also specify values for CustomCertificate
and CustomPrivateKey
.
Supported on servers running Chef Automate 2. An optional public endpoint of a server, such as https://aws.my-company.com
. To access the server, create a CNAME DNS record in your preferred DNS service that points the custom domain to the endpoint that is generated when the server is created (the value of the CreateServer Endpoint attribute). You cannot access the server by using the generated Endpoint
value if the server is using a custom domain. If you specify a custom domain, you must also specify values for CustomCertificate
and CustomPrivateKey
.
An optional public endpoint of a server, such as https://aws.my-company.com
. You cannot access the server by using the Endpoint
value if the server has a CustomDomain
specified.
A private key in PEM format for connecting to the server by using HTTPS. The private key must not be encrypted; it cannot be protected by a password or passphrase. If you specify a custom private key, you must also specify values for CustomDomain
and CustomCertificate
.
Supported on servers running Chef Automate 2. A private key in PEM format for connecting to the server by using HTTPS. The private key must not be encrypted; it cannot be protected by a password or passphrase. If you specify a custom private key, you must also specify values for CustomDomain
and CustomCertificate
.
Engine attributes that are used for disassociating the node. No attributes are required for Puppet.
Attributes required in a DisassociateNode request for Chef
CHEF_ORGANIZATION
: The Chef organization with which the node was associated. By default only one organization named default
can exist.
The list of engine attributes. The list type is EngineAttribute
. An EngineAttribute
list item is a pair that includes an attribute name and its value. For the Userdata
ExportAttributeName, the following are supported engine attribute names.
RunList In Chef, a list of roles or recipes that are run in the specified order. In Puppet, this parameter is ignored.
OrganizationName In Chef, an organization name. AWS OpsWorks for Chef Automate always creates the organization default
. In Puppet, this parameter is ignored.
NodeEnvironment In Chef, a node environment (for example, development, staging, or one-box). In Puppet, this parameter is ignored.
NodeClientVersion In Chef, the version of the Chef engine (three numbers separated by dots, such as 13.8.5). If this attribute is empty, OpsWorks for Chef Automate uses the most current version. In Puppet, this parameter is ignored.
The response of a createServer() request returns the master credential to access the server in EngineAttributes. These credentials are not stored by AWS OpsWorks CM; they are returned only as part of the result of createServer().
Attributes returned in a createServer response for Chef
CHEF_AUTOMATE_PIVOTAL_KEY
: A base64-encoded RSA private key that is generated by AWS OpsWorks for Chef Automate. This private key is required to access the Chef API.
CHEF_STARTER_KIT
: A base64-encoded ZIP file. The ZIP file contains a Chef starter kit, which includes a README, a configuration file, and the required RSA private key. Save this file, unzip it, and then change to the directory where you've unzipped the file contents. From this directory, you can run Knife commands.
Attributes returned in a createServer response for Puppet
PUPPET_STARTER_KIT
: A base64-encoded ZIP file. The ZIP file contains a Puppet starter kit, including a README and a required private key. Save this file, unzip it, and then change to the directory where you've unzipped the file contents.
PUPPET_ADMIN_PASSWORD
: An administrator password that you can use to sign in to the Puppet Enterprise console after the server is online.
Engine attributes that are specific to the server on which you want to run maintenance.
" + "StartMaintenanceRequest$EngineAttributes": "Engine attributes that are specific to the server on which you want to run maintenance.
Attributes accepted in a StartMaintenance request for Chef
CHEF_MAJOR_UPGRADE
: If a Chef Automate server is eligible for upgrade to Chef Automate 2, add this engine attribute to a StartMaintenance
request and set the value to true
to upgrade the server to Chef Automate 2. For more information, see Upgrade an AWS OpsWorks for Chef Automate Server to Chef Automate 2.
This is not currently implemented for DescribeBackups
requests.
To receive a paginated response, use this parameter to specify the maximum number of results to be returned with a single call. If the number of available results exceeds this maximum, the response includes a NextToken
value that you can assign to the NextToken
request parameter to get the next set of results.
This is not currently implemented for DescribeServers
requests.
This is not currently implemented for DescribeServers
requests.
To receive a paginated response, use this parameter to specify the maximum number of results to be returned with a single call. If the number of available results exceeds this maximum, the response includes a NextToken
value that you can assign to the NextToken
request parameter to get the next set of results.
This is not currently implemented for DescribeBackups
requests.
NextToken is a string that is returned in some command responses. It indicates that not all entries have been returned, and that you must run at least one more request to get remaining items. To get remaining results, call DescribeEvents
again, and assign the token from the previous results as the value of the nextToken
parameter. If there are no more results, the response object's nextToken
parameter value is null
. Setting a nextToken
value that was not returned in your previous results causes an InvalidNextTokenException
to occur.
This is not currently implemented for DescribeServers
requests.
This is not currently implemented for DescribeServers
requests.
NextToken is a string that is returned in some command responses. It indicates that not all entries have been returned, and that you must run at least one more request to get remaining items. To get remaining results, call ListTagsForResource
again, and assign the token from the previous results as the value of the nextToken
parameter. If there are no more results, the response object's nextToken
parameter value is null
. Setting a nextToken
value that was not returned in your previous results causes an InvalidNextTokenException
to occur.
A token that you can use as the value of NextToken
in subsequent calls to the API to show more results.
Contains the response to a DescribeServers
request.
For Puppet Server: DescribeServersResponse$Servers$EngineAttributes
contains PUPPET_API_CA_CERT. This is the PEM-encoded CA certificate that is used by the Puppet API over TCP port number 8140. The CA certificate is also used to sign node certificates.
Contains the response to a DescribeServers
request.
For Chef Automate servers: If DescribeServersResponse$Servers$EngineAttributes
includes CHEF_MAJOR_UPGRADE_AVAILABLE, you can upgrade the Chef Automate server to Chef Automate 2. To be eligible for upgrade, a server running Chef Automate 1 must have had at least one successful maintenance run after November 1, 2019.
For Puppet Server: DescribeServersResponse$Servers$EngineAttributes
contains PUPPET_API_CA_CERT. This is the PEM-encoded CA certificate that is used by the Puppet API over TCP port number 8140. The CA certificate is also used to sign node certificates.
Error or informational message that the maximum allowed number of servers or backups has been exceeded.
", "ResourceAlreadyExistsException$Message": "Error or informational message in response to a CreateServer request that a resource cannot be created because it already exists.
", "ResourceNotFoundException$Message": "Error or informational message that can contain more detail about problems locating or accessing a resource.
", - "RestoreServerRequest$InstanceType": " The type of the instance to create. Valid values must be specified in the following format: ^([cm][34]|t2).*
For example, m5.large
. Valid values are m5.large
, r5.xlarge
, and r5.2xlarge
. If you do not specify this parameter, RestoreServer uses the instance type from the specified backup.
The type of instance to restore. Valid values must be specified in the following format: ^([cm][34]|t2).*
For example, m5.large
. Valid values are m5.large
, r5.xlarge
, and r5.2xlarge
. If you do not specify this parameter, RestoreServer uses the instance type from the specified backup.
The name of the server.
", "Server$CloudFormationStackArn": "The ARN of the CloudFormation stack that was used to create the server.
", "Server$Endpoint": " A DNS name that can be used to access the engine. Example: myserver-asdfghjkl.us-east-1.opsworks.io
. You cannot access the server by using the Endpoint
value if the server has a CustomDomain
specified.
The subnet IDs specified in a CreateServer request.
" } }, + "Tag": { + "base": "A map that contains tag keys and tag values to attach to an AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise server. Leading and trailing white spaces are trimmed from both the key and value. A maximum of 50 user-applied tags is allowed for tag-supported AWS OpsWorks-CM resources.
", + "refs": { + "TagList$member": null + } + }, + "TagKey": { + "base": null, + "refs": { + "Tag$Key": "A tag key, such as Stage
or Name
. A tag key cannot be empty. The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
The keys of tags that you want to remove.
" + } + }, + "TagList": { + "base": null, + "refs": { + "CreateBackupRequest$Tags": "A map that contains tag keys and tag values to attach to an AWS OpsWorks-CM server backup.
The key cannot be empty.
The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
The value can be a maximum 255 characters, and contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
Leading and trailing white spaces are trimmed from both the key and value.
A maximum of 50 user-applied tags is allowed for tag-supported AWS OpsWorks-CM resources.
A map that contains tag keys and tag values to attach to an AWS OpsWorks for Chef Automate or AWS OpsWorks for Puppet Enterprise server.
The key cannot be empty.
The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
The value can be a maximum 255 characters, and contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
Leading and trailing white spaces are trimmed from both the key and value.
A maximum of 50 user-applied tags is allowed for any AWS OpsWorks-CM server.
Tags that have been applied to the resource.
", + "TagResourceRequest$Tags": "A map that contains tag keys and tag values to attach to AWS OpsWorks-CM servers or backups.
The key cannot be empty.
The key can be a maximum of 127 characters, and can contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
The value can be a maximum 255 characters, and contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
Leading and trailing white spaces are trimmed from both the key and value.
A maximum of 50 user-applied tags is allowed for any AWS OpsWorks-CM server or backup.
An optional tag value, such as Production
or test-owcm-server
. The value can be a maximum of 255 characters, and contain only Unicode letters, numbers, or separators, or the following special characters: + - = . _ : /
DDD:HH:MM
(weekly start time) or HH:MM
(daily start time).
Time windows always use coordinated universal time (UTC). Valid strings for day of week (DDD
) are: Mon
, Tue
, Wed
, Thr
, Fri
, Sat
, or Sun
.
The time when the event occurred.
" } }, + "UntagResourceRequest": { + "base": null, + "refs": { + } + }, + "UntagResourceResponse": { + "base": null, + "refs": { + } + }, "UpdateServerEngineAttributesRequest": { "base": null, "refs": { diff --git a/models/apis/organizations/2016-11-28/docs-2.json b/models/apis/organizations/2016-11-28/docs-2.json index 16828a2a343..033abb5e527 100644 --- a/models/apis/organizations/2016-11-28/docs-2.json +++ b/models/apis/organizations/2016-11-28/docs-2.json @@ -216,7 +216,7 @@ } }, "ConstraintViolationException": { - "base": "Performing this operation violates a minimum or maximum value limit. Examples include attempting to remove the last service control policy (SCP) from an OU or root, or attaching too many policies to an account, OU, or root. This exception includes a reason that contains additional information about the violated limit.
Some of the reasons in the following list might not be applicable to this specific API or operation:
ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first agree to the AWS Customer Agreement. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first complete phone verification. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CREATION_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of accounts that you can create in one day.
ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number of accounts in an organization. If you need more accounts, contact AWS Support to request an increase in your limit.
Or the number of invitations that you tried to send would cause you to exceed the limit of accounts in your organization. Send fewer invitations or contact AWS Support to request an increase in the number of accounts.
Deleted and closed accounts still count toward your limit.
If you get receive this exception when running a command immediately after creating the organization, wait one hour and try again. If after an hour it continues to fail with this error, contact AWS Support.
HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of handshakes that you can send in one day.
MASTER_ACCOUNT_ADDRESS_DOES_NOT_MATCH_MARKETPLACE: To create an account in this organization, you first must migrate the organization's master account to the marketplace that corresponds to the master account's address. For example, accounts with India addresses must be associated with the AISPL marketplace. All accounts in an organization must be associated with the same marketplace.
MASTER_ACCOUNT_MISSING_CONTACT_INFO: To complete this operation, you must first provide contact a valid address and phone number for the master account. Then try the operation again.
MASTER_ACCOUNT_NOT_GOVCLOUD_ENABLED: To complete this operation, the master account must have an associated account in the AWS GovCloud (US-West) Region. For more information, see AWS Organizations in the AWS GovCloud User Guide.
MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization with this master account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MAX_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to exceed the number of policies of a certain type that can be attached to an entity at one time.
MAX_TAG_LIMIT_EXCEEDED: You have exceeded the number of tags allowed on this resource.
MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation with this member account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MIN_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to detach a policy from an entity, which would cause the entity to have fewer than the minimum number of policies of the required type.
OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is too many levels deep.
ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation that requires the organization to be configured to support all features. An organization that supports only consolidated billing features can't perform this operation.
OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs that you can have in an organization.
POLICY_NUMBER_LIMIT_EXCEEDED. You attempted to exceed the number of policies that you can have in an organization.
Performing this operation violates a minimum or maximum value limit. Examples include attempting to remove the last service control policy (SCP) from an OU or root, or attaching too many policies to an account, OU, or root. This exception includes a reason that contains additional information about the violated limit.
Some of the reasons in the following list might not be applicable to this specific API or operation:
ACCOUNT_CANNOT_LEAVE_WITHOUT_EULA: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first agree to the AWS Customer Agreement. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CANNOT_LEAVE_WITHOUT_PHONE_VERIFICATION: You attempted to remove an account from the organization that doesn't yet have enough information to exist as a standalone account. This account requires you to first complete phone verification. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
ACCOUNT_CREATION_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of accounts that you can create in one day.
ACCOUNT_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the limit on the number of accounts in an organization. If you need more accounts, contact AWS Support to request an increase in your limit.
Or the number of invitations that you tried to send would cause you to exceed the limit of accounts in your organization. Send fewer invitations or contact AWS Support to request an increase in the number of accounts.
Deleted and closed accounts still count toward your limit.
If you get receive this exception when running a command immediately after creating the organization, wait one hour and try again. If after an hour it continues to fail with this error, contact AWS Support.
HANDSHAKE_RATE_LIMIT_EXCEEDED: You attempted to exceed the number of handshakes that you can send in one day.
MASTER_ACCOUNT_ADDRESS_DOES_NOT_MATCH_MARKETPLACE: To create an account in this organization, you first must migrate the organization's master account to the marketplace that corresponds to the master account's address. For example, accounts with India addresses must be associated with the AISPL marketplace. All accounts in an organization must be associated with the same marketplace.
MASTER_ACCOUNT_MISSING_CONTACT_INFO: To complete this operation, you must first provide contact a valid address and phone number for the master account. Then try the operation again.
MASTER_ACCOUNT_NOT_GOVCLOUD_ENABLED: To complete this operation, the master account must have an associated account in the AWS GovCloud (US-West) Region. For more information, see AWS Organizations in the AWS GovCloud User Guide.
MASTER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To create an organization with this master account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MAX_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to exceed the number of policies of a certain type that can be attached to an entity at one time.
MAX_TAG_LIMIT_EXCEEDED: You have exceeded the number of tags allowed on this resource.
MEMBER_ACCOUNT_PAYMENT_INSTRUMENT_REQUIRED: To complete this operation with this member account, you first must associate a valid payment instrument, such as a credit card, with the account. Follow the steps at To leave an organization when all required account information has not yet been provided in the AWS Organizations User Guide.
MIN_POLICY_TYPE_ATTACHMENT_LIMIT_EXCEEDED: You attempted to detach a policy from an entity, which would cause the entity to have fewer than the minimum number of policies of the required type.
OU_DEPTH_LIMIT_EXCEEDED: You attempted to create an OU tree that is too many levels deep.
ORGANIZATION_NOT_IN_ALL_FEATURES_MODE: You attempted to perform an operation that requires the organization to be configured to support all features. An organization that supports only consolidated billing features can't perform this operation.
OU_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of OUs that you can have in an organization.
POLICY_NUMBER_LIMIT_EXCEEDED: You attempted to exceed the number of policies that you can have in an organization.
TAG_POLICY_VIOLATION: Tags associated with the resource must be compliant with the tag policy that’s in effect for the account. For more information, see Tag Policies in the AWS Organizations User Guide.
The unique identifier (ID) of the policy that you want to attach to the target. You can get the ID for the policy by calling the ListPolicies operation.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase letters or digits.
", - "DeletePolicyRequest$PolicyId": "The unique identifier (ID) of the policy that you want to delete. You can get the ID from the ListPolicies or ListPoliciesForTarget operations.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase letters or digits.
", - "DescribePolicyRequest$PolicyId": "The unique identifier (ID) of the policy that you want details about. You can get the ID from the ListPolicies or ListPoliciesForTarget operations.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase letters or digits.
", - "DetachPolicyRequest$PolicyId": "The unique identifier (ID) of the policy you want to detach. You can get the ID from the ListPolicies or ListPoliciesForTarget operations.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase letters or digits.
", - "ListTargetsForPolicyRequest$PolicyId": "The unique identifier (ID) of the policy whose attachments you want to know.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase letters or digits.
", + "AttachPolicyRequest$PolicyId": "The unique identifier (ID) of the policy that you want to attach to the target. You can get the ID for the policy by calling the ListPolicies operation.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase or uppercase letters, digits, or the underscore character (_).
", + "DeletePolicyRequest$PolicyId": "The unique identifier (ID) of the policy that you want to delete. You can get the ID from the ListPolicies or ListPoliciesForTarget operations.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase or uppercase letters, digits, or the underscore character (_).
", + "DescribePolicyRequest$PolicyId": "The unique identifier (ID) of the policy that you want details about. You can get the ID from the ListPolicies or ListPoliciesForTarget operations.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase or uppercase letters, digits, or the underscore character (_).
", + "DetachPolicyRequest$PolicyId": "The unique identifier (ID) of the policy you want to detach. You can get the ID from the ListPolicies or ListPoliciesForTarget operations.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase or uppercase letters, digits, or the underscore character (_).
", + "ListTargetsForPolicyRequest$PolicyId": "The unique identifier (ID) of the policy whose attachments you want to know.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase or uppercase letters, digits, or the underscore character (_).
", "PolicySummary$Id": "The unique identifier (ID) of the policy.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lower-case letters or digits.
", - "UpdatePolicyRequest$PolicyId": "The unique identifier (ID) of the policy that you want to update.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase letters or digits.
" + "UpdatePolicyRequest$PolicyId": "The unique identifier (ID) of the policy that you want to update.
The regex pattern for a policy ID string requires \"p-\" followed by from 8 to 128 lowercase or uppercase letters, digits, or the underscore character (_).
" } }, "PolicyInUseException": { diff --git a/models/apis/personalize-runtime/2018-05-22/api-2.json b/models/apis/personalize-runtime/2018-05-22/api-2.json index d5a41d778e3..35577bb1c4a 100644 --- a/models/apis/personalize-runtime/2018-05-22/api-2.json +++ b/models/apis/personalize-runtime/2018-05-22/api-2.json @@ -47,6 +47,22 @@ "max":256, "pattern":"arn:([a-z\\d-]+):personalize:.*:.*:.+" }, + "AttributeName":{ + "type":"string", + "max":150, + "pattern":"[A-Za-z\\d_]+" + }, + "AttributeValue":{ + "type":"string", + "max":1000, + "sensitive":true + }, + "Context":{ + "type":"map", + "key":{"shape":"AttributeName"}, + "value":{"shape":"AttributeValue"}, + "max":150 + }, "ErrorMessage":{"type":"string"}, "GetPersonalizedRankingRequest":{ "type":"structure", @@ -58,7 +74,8 @@ "members":{ "campaignArn":{"shape":"Arn"}, "inputList":{"shape":"InputList"}, - "userId":{"shape":"UserID"} + "userId":{"shape":"UserID"}, + "context":{"shape":"Context"} } }, "GetPersonalizedRankingResponse":{ @@ -74,7 +91,8 @@ "campaignArn":{"shape":"Arn"}, "itemId":{"shape":"ItemID"}, "userId":{"shape":"UserID"}, - "numResults":{"shape":"NumResults"} + "numResults":{"shape":"NumResults"}, + "context":{"shape":"Context"} } }, "GetRecommendationsResponse":{ diff --git a/models/apis/personalize-runtime/2018-05-22/docs-2.json b/models/apis/personalize-runtime/2018-05-22/docs-2.json index acef16c1d38..678ea18f981 100644 --- a/models/apis/personalize-runtime/2018-05-22/docs-2.json +++ b/models/apis/personalize-runtime/2018-05-22/docs-2.json @@ -13,6 +13,25 @@ "GetRecommendationsRequest$campaignArn": "The Amazon Resource Name (ARN) of the campaign to use for getting recommendations.
" } }, + "AttributeName": { + "base": null, + "refs": { + "Context$key": null + } + }, + "AttributeValue": { + "base": null, + "refs": { + "Context$value": null + } + }, + "Context": { + "base": null, + "refs": { + "GetPersonalizedRankingRequest$context": "The contextual metadata to use when getting recommendations. Contextual metadata includes any interaction information that might be relevant when getting a user's recommendations, such as the user's current location or device type. For more information, see Contextual Metadata.
", + "GetRecommendationsRequest$context": "The contextual metadata to use when getting recommendations. Contextual metadata includes any interaction information that might be relevant when getting a user's recommendations, such as the user's current location or device type. For more information, see Contextual Metadata.
" + } + }, "ErrorMessage": { "base": null, "refs": { @@ -43,7 +62,7 @@ "InputList": { "base": null, "refs": { - "GetPersonalizedRankingRequest$inputList": "A list of items (itemId's) to rank. If an item was not included in the training dataset, the item is appended to the end of the reranked list.
" + "GetPersonalizedRankingRequest$inputList": "A list of items (itemId's) to rank. If an item was not included in the training dataset, the item is appended to the end of the reranked list. The maximum is 500.
" } }, "InvalidInputException": { @@ -62,14 +81,14 @@ "ItemList": { "base": null, "refs": { - "GetPersonalizedRankingResponse$personalizedRanking": "A list of items in order of most likely interest to the user.
", - "GetRecommendationsResponse$itemList": "A list of recommendations.
" + "GetPersonalizedRankingResponse$personalizedRanking": "A list of items in order of most likely interest to the user. The maximum is 500.
", + "GetRecommendationsResponse$itemList": "A list of recommendations sorted in ascending order by prediction score. There can be a maximum of 500 items in the list.
" } }, "NumResults": { "base": null, "refs": { - "GetRecommendationsRequest$numResults": "The number of results to return. The default is 25. The maximum is 100.
" + "GetRecommendationsRequest$numResults": "The number of results to return. The default is 25. The maximum is 500.
" } }, "PredictedItem": { diff --git a/models/apis/pinpoint/2016-12-01/api-2.json b/models/apis/pinpoint/2016-12-01/api-2.json index 78536558c79..bba246ea169 100644 --- a/models/apis/pinpoint/2016-12-01/api-2.json +++ b/models/apis/pinpoint/2016-12-01/api-2.json @@ -2528,8 +2528,40 @@ "output": { "shape": "ListTagsForResourceResponse" }, + "errors": [] + }, + "ListTemplateVersions": { + "name": "ListTemplateVersions", + "http": { + "method": "GET", + "requestUri": "/v1/templates/{template-name}/{template-type}/versions", + "responseCode": 200 + }, + "input": { + "shape": "ListTemplateVersionsRequest" + }, + "output": { + "shape": "ListTemplateVersionsResponse" + }, "errors": [ - + { + "shape": "BadRequestException" + }, + { + "shape": "InternalServerErrorException" + }, + { + "shape": "ForbiddenException" + }, + { + "shape": "NotFoundException" + }, + { + "shape": "MethodNotAllowedException" + }, + { + "shape": "TooManyRequestsException" + } ] }, "ListTemplates": { @@ -2777,9 +2809,7 @@ "input": { "shape": "TagResourceRequest" }, - "errors": [ - - ] + "errors": [] }, "UntagResource": { "name": "UntagResource", @@ -2791,9 +2821,7 @@ "input": { "shape": "UntagResourceRequest" }, - "errors": [ - - ] + "errors": [] }, "UpdateAdmChannel": { "name": "UpdateAdmChannel", @@ -3441,6 +3469,40 @@ } ] }, + "UpdateTemplateActiveVersion": { + "name": "UpdateTemplateActiveVersion", + "http": { + "method": "PUT", + "requestUri": "/v1/templates/{template-name}/{template-type}/active-version", + "responseCode": 200 + }, + "input": { + "shape": "UpdateTemplateActiveVersionRequest" + }, + "output": { + "shape": "UpdateTemplateActiveVersionResponse" + }, + "errors": [ + { + "shape": "BadRequestException" + }, + { + "shape": "InternalServerErrorException" + }, + { + "shape": "ForbiddenException" + }, + { + "shape": "NotFoundException" + }, + { + "shape": "MethodNotAllowedException" + }, + { + "shape": "TooManyRequestsException" + } + ] + }, "UpdateVoiceChannel": { "name": "UpdateVoiceChannel", "http": { @@ -5344,6 +5406,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -5481,6 +5548,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -5562,6 +5634,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -5643,6 +5720,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -5835,6 +5917,9 @@ }, "TemplateName": { "shape": "__string" + }, + "TemplateVersion": { + "shape": "__string" } } }, @@ -5898,6 +5983,9 @@ }, "TextPart": { "shape": "__string" + }, + "Version": { + "shape": "__string" } }, "required": [ @@ -7185,6 +7273,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -7599,6 +7692,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -7875,6 
+7973,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -7956,6 +8059,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -8432,6 +8540,47 @@ ], "payload": "TagsModel" }, + "ListTemplateVersionsRequest": { + "type": "structure", + "members": { + "NextToken": { + "shape": "__string", + "location": "querystring", + "locationName": "next-token" + }, + "PageSize": { + "shape": "__string", + "location": "querystring", + "locationName": "page-size" + }, + "TemplateName": { + "shape": "__string", + "location": "uri", + "locationName": "template-name" + }, + "TemplateType": { + "shape": "__string", + "location": "uri", + "locationName": "template-type" + } + }, + "required": [ + "TemplateName", + "TemplateType" + ] + }, + "ListTemplateVersionsResponse": { + "type": "structure", + "members": { + "TemplateVersionsResponse": { + "shape": "TemplateVersionsResponse" + } + }, + "required": [ + "TemplateVersionsResponse" + ], + "payload": "TemplateVersionsResponse" + }, "ListTemplatesRequest": { "type": "structure", "members": { @@ -8900,13 +9049,16 @@ }, "TemplateType": { "shape": "TemplateType" + }, + "Version": { + "shape": "__string" } }, "required": [ "LastModifiedDate", "CreationDate", - "TemplateName", - "TemplateType" + "TemplateType", + "TemplateName" ] }, "PutEventStreamRequest": { @@ -9237,6 +9389,9 @@ }, "TemplateType": { "shape": "TemplateType" + }, + "Version": { + "shape": "__string" } }, "required": [ @@ -9733,6 +9888,17 @@ "members": { "Name": { "shape": "__string" + }, + "Version": { + "shape": "__string" + } + } + }, + "TemplateActiveVersionRequest": { + "type": "structure", + "members": { + "Version": { + "shape": "__string" } } }, @@ -9780,6 +9946,9 @@ }, "TemplateType": { "shape": "TemplateType" + }, + "Version": { + "shape": "__string" } }, "required": [ @@ -9798,6 +9967,58 @@ "PUSH" ] }, + "TemplateVersionResponse": { + "type": "structure", + "members": { + "CreationDate": { + "shape": "__string" + }, + "DefaultSubstitutions": { + "shape": "__string" + }, + "LastModifiedDate": { + "shape": "__string" + }, + "TemplateDescription": { + "shape": "__string" + }, + "TemplateName": { + "shape": "__string" + }, + "TemplateType": { + "shape": "__string" + }, + "Version": { + "shape": "__string" + } + }, + "required": [ + "LastModifiedDate", + "CreationDate", + "TemplateName", + "TemplateType" + ] + }, + "TemplateVersionsResponse": { + "type": "structure", + "members": { + "Item": { + "shape": "ListOfTemplateVersionResponse" + }, + "Message": { + "shape": "__string" + }, + "NextToken": { + "shape": "__string" + }, + "RequestID": { + "shape": "__string" + } + }, + "required": [ + "Item" + ] + }, "TemplatesResponse": { "type": "structure", "members": { @@ -10174,6 +10395,11 @@ "UpdateEmailTemplateRequest": { "type": "structure", "members": { + "CreateNewVersion": { + "shape": "__boolean", + "location": "querystring", + "locationName": "create-new-version" + }, "EmailTemplateRequest": { "shape": "EmailTemplateRequest" }, @@ -10181,6 +10407,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -10372,6 +10603,11 @@ "UpdatePushTemplateRequest": { "type": 
"structure", "members": { + "CreateNewVersion": { + "shape": "__boolean", + "location": "querystring", + "locationName": "create-new-version" + }, "PushNotificationTemplateRequest": { "shape": "PushNotificationTemplateRequest" }, @@ -10379,6 +10615,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -10468,6 +10709,11 @@ "UpdateSmsTemplateRequest": { "type": "structure", "members": { + "CreateNewVersion": { + "shape": "__boolean", + "location": "querystring", + "locationName": "create-new-version" + }, "SMSTemplateRequest": { "shape": "SMSTemplateRequest" }, @@ -10475,6 +10721,11 @@ "shape": "__string", "location": "uri", "locationName": "template-name" + }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" } }, "required": [ @@ -10495,6 +10746,42 @@ ], "payload": "MessageBody" }, + "UpdateTemplateActiveVersionRequest": { + "type": "structure", + "members": { + "TemplateActiveVersionRequest": { + "shape": "TemplateActiveVersionRequest" + }, + "TemplateName": { + "shape": "__string", + "location": "uri", + "locationName": "template-name" + }, + "TemplateType": { + "shape": "__string", + "location": "uri", + "locationName": "template-type" + } + }, + "required": [ + "TemplateName", + "TemplateType", + "TemplateActiveVersionRequest" + ], + "payload": "TemplateActiveVersionRequest" + }, + "UpdateTemplateActiveVersionResponse": { + "type": "structure", + "members": { + "MessageBody": { + "shape": "MessageBody" + } + }, + "required": [ + "MessageBody" + ], + "payload": "MessageBody" + }, "UpdateVoiceChannelRequest": { "type": "structure", "members": { @@ -10528,11 +10815,21 @@ "UpdateVoiceTemplateRequest": { "type": "structure", "members": { + "CreateNewVersion": { + "shape": "__boolean", + "location": "querystring", + "locationName": "create-new-version" + }, "TemplateName": { "shape": "__string", "location": "uri", "locationName": "template-name" }, + "Version": { + "shape": "__string", + "location": "querystring", + "locationName": "version" + }, "VoiceTemplateRequest": { "shape": "VoiceTemplateRequest" } @@ -10679,6 +10976,9 @@ "TemplateType": { "shape": "TemplateType" }, + "Version": { + "shape": "__string" + }, "VoiceId": { "shape": "__string" } @@ -11000,6 +11300,12 @@ "shape": "TemplateResponse" } }, + "ListOfTemplateVersionResponse": { + "type": "list", + "member": { + "shape": "TemplateVersionResponse" + } + }, "ListOfTreatmentResource": { "type": "list", "member": { @@ -11186,4 +11492,4 @@ "timestampFormat": "unixTimestamp" } } -} \ No newline at end of file +} diff --git a/models/apis/pinpoint/2016-12-01/docs-2.json b/models/apis/pinpoint/2016-12-01/docs-2.json index d1a0ee431d1..0d6db24d147 100644 --- a/models/apis/pinpoint/2016-12-01/docs-2.json +++ b/models/apis/pinpoint/2016-12-01/docs-2.json @@ -4,14 +4,14 @@ "operations" : { "CreateApp" : "Creates an application.
", "CreateCampaign" : "Creates a new campaign for an application or updates the settings of an existing campaign for an application.
", - "CreateEmailTemplate" : "Creates a message template that you can use in messages that are sent through the email channel.
", + "CreateEmailTemplate" : "Creates a message template for messages that are sent through the email channel.
", "CreateExportJob" : "Creates an export job for an application.
", "CreateImportJob" : "Creates an import job for an application.
", "CreateJourney" : "Creates a journey for an application.
", - "CreatePushTemplate" : "Creates a message template that you can use in messages that are sent through a push notification channel.
", + "CreatePushTemplate" : "Creates a message template for messages that are sent through a push notification channel.
", "CreateSegment" : "Creates a new segment for an application or updates the configuration, dimension, and other settings for an existing segment that's associated with an application.
", - "CreateSmsTemplate" : "Creates a message template that you can use in messages that are sent through the SMS channel.
", - "CreateVoiceTemplate" : "Creates a message template that you can use in messages that are sent through the voice channel.
", + "CreateSmsTemplate" : "Creates a message template for messages that are sent through the SMS channel.
", + "CreateVoiceTemplate" : "Creates a message template for messages that are sent through the voice channel.
", "DeleteAdmChannel" : "Disables the ADM channel for an application and deletes any existing settings for the channel.
", "DeleteApnsChannel" : "Disables the APNs channel for an application and deletes any existing settings for the channel.
", "DeleteApnsSandboxChannel" : "Disables the APNs sandbox channel for an application and deletes any existing settings for the channel.
", @@ -21,18 +21,18 @@ "DeleteBaiduChannel" : "Disables the Baidu channel for an application and deletes any existing settings for the channel.
", "DeleteCampaign" : "Deletes a campaign from an application.
", "DeleteEmailChannel" : "Disables the email channel for an application and deletes any existing settings for the channel.
", - "DeleteEmailTemplate" : "Deletes a message template that was designed for use in messages that were sent through the email channel.
", + "DeleteEmailTemplate" : "Deletes a message template for messages that were sent through the email channel.
", "DeleteEndpoint" : "Deletes an endpoint from an application.
", "DeleteEventStream" : "Deletes the event stream for an application.
", "DeleteGcmChannel" : "Disables the GCM channel for an application and deletes any existing settings for the channel.
", "DeleteJourney" : "Deletes a journey from an application.
", - "DeletePushTemplate" : "Deletes a message template that was designed for use in messages that were sent through a push notification channel.
", + "DeletePushTemplate" : "Deletes a message template for messages that were sent through a push notification channel.
", "DeleteSegment" : "Deletes a segment from an application.
", "DeleteSmsChannel" : "Disables the SMS channel for an application and deletes any existing settings for the channel.
", - "DeleteSmsTemplate" : "Deletes a message template that was designed for use in messages that were sent through the SMS channel.
", + "DeleteSmsTemplate" : "Deletes a message template for messages that were sent through the SMS channel.
", "DeleteUserEndpoints" : "Deletes all the endpoints that are associated with a specific user ID.
", "DeleteVoiceChannel" : "Disables the voice channel for an application and deletes any existing settings for the channel.
", - "DeleteVoiceTemplate" : "Deletes a message template that was designed for use in messages that were sent through the voice channel.
", + "DeleteVoiceTemplate" : "Deletes a message template for messages that were sent through the voice channel.
", "GetAdmChannel" : "Retrieves information about the status and settings of the ADM channel for an application.
", "GetApnsChannel" : "Retrieves information about the status and settings of the APNs channel for an application.
", "GetApnsSandboxChannel" : "Retrieves information about the status and settings of the APNs sandbox channel for an application.
", @@ -41,7 +41,7 @@ "GetApp" : "Retrieves information about an application.
", "GetApplicationDateRangeKpi" : "Retrieves (queries) pre-aggregated data for a standard metric that applies to an application.
", "GetApplicationSettings" : "Retrieves information about the settings for an application.
", - "GetApps" : "Retrieves information about all of your applications.
", + "GetApps" : "Retrieves information about all the applications that are associated with your Amazon Pinpoint account.
", "GetBaiduChannel" : "Retrieves information about the status and settings of the Baidu channel for an application.
", "GetCampaign" : "Retrieves information about the status, configuration, and other settings for a campaign.
", "GetCampaignActivities" : "Retrieves information about all the activities for a campaign.
", @@ -51,7 +51,7 @@ "GetCampaigns" : "Retrieves information about the status, configuration, and other settings for all the campaigns that are associated with an application.
", "GetChannels" : "Retrieves information about the history and status of each channel for an application.
", "GetEmailChannel" : "Retrieves information about the status and settings of the email channel for an application.
", - "GetEmailTemplate" : "Retrieves the content and settings for a message template that you can use in messages that are sent through the email channel.
", + "GetEmailTemplate" : "Retrieves the content and settings of a message template for messages that are sent through the email channel.
", "GetEndpoint" : "Retrieves information about the settings and attributes of a specific endpoint for an application.
", "GetEventStream" : "Retrieves information about the event stream settings for an application.
", "GetExportJob" : "Retrieves information about the status and settings of a specific export job for an application.
", @@ -63,20 +63,21 @@ "GetJourneyDateRangeKpi" : "Retrieves (queries) pre-aggregated data for a standard engagement metric that applies to a journey.
", "GetJourneyExecutionActivityMetrics" : "Retrieves (queries) pre-aggregated data for a standard execution metric that applies to a journey activity.
", "GetJourneyExecutionMetrics" : "Retrieves (queries) pre-aggregated data for a standard execution metric that applies to a journey.
", - "GetPushTemplate" : "Retrieves the content and settings for a message template that you can use in messages that are sent through a push notification channel.
", + "GetPushTemplate" : "Retrieves the content and settings of a message template for messages that are sent through a push notification channel.
", "GetSegment" : "Retrieves information about the configuration, dimension, and other settings for a specific segment that's associated with an application.
", "GetSegmentExportJobs" : "Retrieves information about the status and settings of the export jobs for a segment.
", "GetSegmentImportJobs" : "Retrieves information about the status and settings of the import jobs for a segment.
", "GetSegmentVersion" : "Retrieves information about the configuration, dimension, and other settings for a specific version of a segment that's associated with an application.
", - "GetSegmentVersions" : "Retrieves information about the configuration, dimension, and other settings for all versions of a specific segment that's associated with an application.
", + "GetSegmentVersions" : "Retrieves information about the configuration, dimension, and other settings for all the versions of a specific segment that's associated with an application.
", "GetSegments" : "Retrieves information about the configuration, dimension, and other settings for all the segments that are associated with an application.
", "GetSmsChannel" : "Retrieves information about the status and settings of the SMS channel for an application.
", - "GetSmsTemplate" : "Retrieves the content and settings for a message template that you can use in messages that are sent through the SMS channel.
", + "GetSmsTemplate" : "Retrieves the content and settings of a message template for messages that are sent through the SMS channel.
", "GetUserEndpoints" : "Retrieves information about all the endpoints that are associated with a specific user ID.
", "GetVoiceChannel" : "Retrieves information about the status and settings of the voice channel for an application.
", - "GetVoiceTemplate" : "Retrieves the content and settings for a message template that you can use in messages that are sent through the voice channel.
", + "GetVoiceTemplate" : "Retrieves the content and settings of a message template for messages that are sent through the voice channel.
", "ListJourneys" : "Retrieves information about the status, configuration, and other settings for all the journeys that are associated with an application.
", "ListTagsForResource" : "Retrieves all the tags (keys and values) that are associated with an application, campaign, journey, message template, or segment.
", + "ListTemplateVersions" : "Retrieves information about all the versions of a specific message template.
", "ListTemplates" : "Retrieves information about all the message templates that are associated with your Amazon Pinpoint account.
", "PhoneNumberValidate" : "Retrieves information about a phone number.
", "PutEventStream" : "Creates a new event stream for an application or updates the settings of an existing event stream for an application.
", @@ -95,18 +96,19 @@ "UpdateBaiduChannel" : "Enables the Baidu channel for an application or updates the status and settings of the Baidu channel for an application.
", "UpdateCampaign" : "Updates the configuration and other settings for a campaign.
", "UpdateEmailChannel" : "Enables the email channel for an application or updates the status and settings of the email channel for an application.
", - "UpdateEmailTemplate" : "Updates an existing message template that you can use in messages that are sent through the email channel.
", + "UpdateEmailTemplate" : "Updates an existing message template for messages that are sent through the email channel.
", "UpdateEndpoint" : "Creates a new endpoint for an application or updates the settings and attributes of an existing endpoint for an application. You can also use this operation to define custom attributes (Attributes, Metrics, and UserAttributes properties) for an endpoint.
", "UpdateEndpointsBatch" : "Creates a new batch of endpoints for an application or updates the settings and attributes of a batch of existing endpoints for an application. You can also use this operation to define custom attributes (Attributes, Metrics, and UserAttributes properties) for a batch of endpoints.
", "UpdateGcmChannel" : "Enables the GCM channel for an application or updates the status and settings of the GCM channel for an application.
", "UpdateJourney" : "Updates the configuration and other settings for a journey.
", - "UpdateJourneyState" : "Cancels an active journey.
", - "UpdatePushTemplate" : "Updates an existing message template that you can use in messages that are sent through a push notification channel.
", + "UpdateJourneyState" : "Cancels (stops) an active journey.
", + "UpdatePushTemplate" : "Updates an existing message template for messages that are sent through a push notification channel.
", "UpdateSegment" : "Creates a new segment for an application or updates the configuration, dimension, and other settings for an existing segment that's associated with an application.
", "UpdateSmsChannel" : "Enables the SMS channel for an application or updates the status and settings of the SMS channel for an application.
", - "UpdateSmsTemplate" : "Updates an existing message template that you can use in messages that are sent through the SMS channel.
", + "UpdateSmsTemplate" : "Updates an existing message template for messages that are sent through the SMS channel.
", + "UpdateTemplateActiveVersion" : "Changes the status of a specific version of a message template to active.
", "UpdateVoiceChannel" : "Enables the voice channel for an application or updates the status and settings of the voice channel for an application.
", - "UpdateVoiceTemplate" : "Updates an existing message template that you can use in messages that are sent through the voice channel.
" + "UpdateVoiceTemplate" : "Updates an existing message template for messages that are sent through the voice channel.
" }, "shapes" : { "ADMChannelRequest" : { @@ -698,7 +700,7 @@ "refs" : { } }, "JourneyExecutionMetricsResponse" : { - "base" : "Provides the results of a query that retrieved the data for a standard execution metric that applies to a journey.
", + "base" : "Provides the results of a query that retrieved the data for a standard execution metric that applies to a journey, and provides information about that query.
", "refs" : { } }, "JourneyLimits" : { @@ -796,9 +798,9 @@ } }, "MultiConditionalSplitActivity" : { - "base" : "Specifies the settings for a multivariate split activity in a journey. This type of activity sends participants down one of as many as five paths in a journey, based on conditions that you specify.
", + "base" : "Specifies the settings for a multivariate split activity in a journey. This type of activity sends participants down one of as many as five paths (including a default Else path) in a journey, based on conditions that you specify.
", "refs" : { - "Activity$MultiCondition" : "The settings for a multivariate split activity. This type of activity sends participants down one of as many as five paths in a journey, based on conditions that you specify.
" + "Activity$MultiCondition" : "The settings for a multivariate split activity. This type of activity sends participants down one of as many as five paths (including a default Else path) in a journey, based on conditions that you specify.
" } }, "NotFoundException" : { @@ -1055,7 +1057,7 @@ "State" : { "base" : null, "refs" : { - "JourneyResponse$State" : "The current status of the journey. Possible values are:
DRAFT - The journey is being developed and hasn't been published yet.
ACTIVE - The journey has been developed and published. Depending on the journey's schedule, the journey may currently be running or scheduled to start running at a later time. If a journey's status is ACTIVE, you can't add, change, or remove activities from it.
COMPLETED - The journey has been published and has finished running. All participants have entered the journey and no participants are waiting to complete the journey or any activities in the journey.
CANCELLED - The journey has been stopped. If a journey's status is CANCELLED, you can't add, change, or remove activities or segment settings from the journey.
CLOSED - The journey has been published and has started running. It may have also passed its scheduled end time, or passed its scheduled start time and a refresh frequency hasn't been specified for it. If a journey's status is CLOSED, you can't add participants to it, and no existing participants can enter the journey for the first time. However, any existing participants who are currently waiting to start an activity may resume the journey.
", + "JourneyResponse$State" : "The current status of the journey. Possible values are:
DRAFT - The journey is being developed and hasn't been published yet.
ACTIVE - The journey has been developed and published. Depending on the journey's schedule, the journey may currently be running or scheduled to start running at a later time. If a journey's status is ACTIVE, you can't add, change, or remove activities from it.
COMPLETED - The journey has been published and has finished running. All participants have entered the journey and no participants are waiting to complete the journey or any activities in the journey.
CANCELLED - The journey has been stopped. If a journey's status is CANCELLED, you can't add, change, or remove activities or segment settings from the journey.
CLOSED - The journey has been published and has started running. It may have also passed its scheduled end time, or passed its scheduled start time and a refresh frequency hasn't been specified for it. If a journey's status is CLOSED, you can't add participants to it, and no existing participants can enter the journey for the first time. However, any existing participants who are currently waiting to start an activity may continue the journey.
", "JourneyStateRequest$State" : "The status of the journey. Currently, the only supported value is CANCELLED.
If you cancel a journey, Amazon Pinpoint continues to perform activities that are currently in progress, until those activities are complete. Amazon Pinpoint also continues to collect and aggregate analytics data for those activities, until they are complete, and any activities that were complete when you cancelled the journey.
After you cancel a journey, you can't add, change, or remove any activities from the journey. In addition, Amazon Pinpoint stops evaluating the journey and doesn't perform any activities that haven't started.
", "WriteJourneyRequest$State" : "The status of the journey. Valid values are:
DRAFT - Saves the journey and doesn't publish it.
ACTIVE - Saves and publishes the journey. Depending on the journey's schedule, the journey starts running immediately or at the scheduled start time. If a journey's status is ACTIVE, you can't add, change, or remove activities from it.
The CANCELLED, COMPLETED, and CLOSED values are not supported in requests to create or update a journey. To cancel a journey, use the Journey State resource.
" } @@ -1065,7 +1067,7 @@ "refs" : { } }, "Template" : { - "base" : "Specifies the name of the message template to use for the message.
", + "base" : "Specifies the name and version of the message template to use for the message.
", "refs" : { "TemplateConfiguration$EmailTemplate" : "The email template to use for the message.
", "TemplateConfiguration$PushTemplate" : "The push notification template to use for the message.
", @@ -1073,6 +1075,10 @@ "TemplateConfiguration$VoiceTemplate" : "The voice template to use for the message.
" } }, + "TemplateActiveVersionRequest" : { + "base" : "Specifies which version of a message template to use as the active version of the template.
", + "refs" : { } + }, "TemplateConfiguration" : { "base" : "Specifies the message template to use for the message, for each type of channel.
", "refs" : { @@ -1096,10 +1102,20 @@ "EmailTemplateResponse$TemplateType" : "The type of channel that the message template is designed for. For an email template, this value is EMAIL.
", "PushNotificationTemplateResponse$TemplateType" : "The type of channel that the message template is designed for. For a push notification template, this value is PUSH.
", "SMSTemplateResponse$TemplateType" : "The type of channel that the message template is designed for. For an SMS template, this value is SMS.
", - "TemplateResponse$TemplateType" : "The type of channel that the message template is designed for.
", + "TemplateResponse$TemplateType" : "The type of channel that the message template is designed for. Possible values are: EMAIL, PUSH, SMS, and VOICE.
", "VoiceTemplateResponse$TemplateType" : "The type of channel that the message template is designed for. For a voice template, this value is VOICE.
" } }, + "TemplateVersionResponse" : { + "base" : "Provides information about a specific version of a message template.
", + "refs" : { + "ListOfTemplateVersionResponse$member" : null + } + }, + "TemplateVersionsResponse" : { + "base" : "Provides information about all the versions of a specific message template.
", + "refs" : { } + }, "TemplatesResponse" : { "base" : "Provides information about all the message templates that are associated with your Amazon Pinpoint account.
", "refs" : { } @@ -1304,7 +1320,7 @@ "ExportJobResponse$TotalProcessed" : "The total number of endpoint definitions that were processed by the export job.
", "GCMChannelResponse$Version" : "The current version of the GCM channel.
", "GCMMessage$TimeToLive" : "The amount of time, in seconds, that FCM should store and attempt to deliver the push notification, if the service is unable to deliver the notification the first time. If you don't specify this value, FCM defaults to the maximum value, which is 2,419,200 seconds (28 days).
Amazon Pinpoint specifies this value in the FCM time_to_live parameter when it sends the notification message to FCM.
", - "HoldoutActivity$Percentage" : "The percentage of participants who shouldn't continue the journey.
", + "HoldoutActivity$Percentage" : "The percentage of participants who shouldn't continue the journey.
To determine which participants are held out, Amazon Pinpoint applies a probability-based algorithm to the percentage that you specify. Therefore, the actual percentage of participants who are held out may not be equal to the percentage that you specify.
", "ImportJobResponse$CompletedPieces" : "The number of pieces that were processed successfully (completed) by the import job, as of the time of the request.
", "ImportJobResponse$FailedPieces" : "The number of pieces that weren't processed successfully (failed) by the import job, as of the time of the request.
", "ImportJobResponse$TotalFailures" : "The total number of endpoint definitions that weren't processed successfully (failed) by the import job, typically because an error, such as a syntax error, occurred.
", @@ -1316,7 +1332,7 @@ "Message$TimeToLive" : "The number of seconds that the push-notification service should keep the message, if the service is unable to deliver the notification the first time. This value is converted to an expiration value when it's sent to a push-notification service. If this value is 0, the service treats the notification as if it expires immediately and the service doesn't store or try to deliver the notification again.
This value doesn't apply to messages that are sent through the Amazon Device Messaging (ADM) service.
", "MessageResult$StatusCode" : "The downstream service status code for delivering the message.
", "NumberValidateResponse$PhoneTypeCode" : "The phone type, represented by an integer. Valid values are: 0 (mobile), 1 (landline), 2 (VoIP), 3 (invalid), 4 (other), and 5 (prepaid).
", - "RandomSplitEntry$Percentage" : "The percentage of participants to send down the activity path.
", + "RandomSplitEntry$Percentage" : "The percentage of participants to send down the activity path.
To determine which participants are sent down each path, Amazon Pinpoint applies a probability-based algorithm to the percentages that you specify for the paths. Therefore, the actual percentage of participants who are sent down a path may not be equal to the percentage that you specify.
", "SMSChannelResponse$PromotionalMessagesPerSecond" : "The maximum number of promotional messages that you can send through the SMS channel each second.
", "SMSChannelResponse$TransactionalMessagesPerSecond" : "The maximum number of transactional messages that you can send through the SMS channel each second.
", "SMSChannelResponse$Version" : "The current version of the SMS channel.
", @@ -1441,6 +1457,12 @@ "TemplatesResponse$Item" : "An array of responses, one for each message template that's associated with your Amazon Pinpoint account and meets any filter criteria that you specified in the request.
" } }, + "ListOfTemplateVersionResponse" : { + "base" : null, + "refs" : { + "TemplateVersionsResponse$Item" : "An array of responses, one for each version of the message template.
" + } + }, "ListOfTreatmentResource" : { "base" : null, "refs" : { @@ -1469,8 +1491,8 @@ "MapOfActivity" : { "base" : null, "refs" : { - "JourneyResponse$Activities" : "The configuration and other settings for the activities that comprise the journey.
", - "WriteJourneyRequest$Activities" : "The configuration and other settings for the activities that comprise the journey.
" + "JourneyResponse$Activities" : "A map that contains a set of Activity objects, one object for each activity in the journey. For each Activity object, the key is the unique identifier (string) for an activity and the value is the settings for the activity.
", + "WriteJourneyRequest$Activities" : "A map that contains a set of Activity objects, one object for each activity in the journey. For each Activity object, the key is the unique identifier (string) for an activity and the value is the settings for the activity. An activity identifier can contain a maximum of 128 characters. The characters must be alphanumeric characters.
" } }, "MapOfAddressConfiguration" : { @@ -1785,7 +1807,7 @@ "CampaignHook$WebUrl" : "The web URL that Amazon Pinpoint calls to invoke the AWS Lambda function over HTTPS.
", "CampaignResponse$ApplicationId" : "The unique identifier for the application that the campaign applies to.
", "CampaignResponse$Arn" : "The Amazon Resource Name (ARN) of the campaign.
", - "CampaignResponse$CreationDate" : "The date, ISO 8601 format, when the campaign was created.
", + "CampaignResponse$CreationDate" : "The date, in ISO 8601 format, when the campaign was created.
", "CampaignResponse$Description" : "The custom description of the campaign.
", "CampaignResponse$Id" : "The unique identifier for the campaign.
", "CampaignResponse$LastModifiedDate" : "The date, in ISO 8601 format, when the campaign was last modified.
", @@ -1801,8 +1823,8 @@ "ChannelResponse$Id" : "(Deprecated) An identifier for the channel. This property is retained only for backward compatibility.
", "ChannelResponse$LastModifiedBy" : "The user who last modified the channel.
", "ChannelResponse$LastModifiedDate" : "The date and time, in ISO 8601 format, when the channel was last modified.
", - "ConditionalSplitActivity$FalseActivity" : "The unique identifier for the activity to perform if the condition isn't met.
", - "ConditionalSplitActivity$TrueActivity" : "The unique identifier for the activity to perform if the condition is met.
", + "ConditionalSplitActivity$FalseActivity" : "The unique identifier for the activity to perform if the conditions aren't met.
", + "ConditionalSplitActivity$TrueActivity" : "The unique identifier for the activity to perform if the conditions are met.
", "CreateApplicationRequest$Name" : "The display name of the application. This name is displayed as the Project name on the Amazon Pinpoint console.
", "CreateTemplateMessageBody$Arn" : "The Amazon Resource Name (ARN) of the message template that was created.
", "CreateTemplateMessageBody$Message" : "The message that's returned from the API for the request to create the message template.
", @@ -1834,20 +1856,22 @@ "EmailMessage$FromAddress" : "The verified email address to send the email message from. The default value is the FromAddress specified for the email channel.
", "EmailMessageActivity$NextActivity" : "The unique identifier for the next activity to perform, after the message is sent.
", "EmailMessageActivity$TemplateName" : "The name of the email template to use for the message.
", + "EmailMessageActivity$TemplateVersion" : "The unique identifier for the version of the email template to use for the message. If specified, this value must match the identifier for an existing template version. To retrieve a list of versions and version identifiers for a template, use the Template Versions resource.
If you don't specify a value for this property, Amazon Pinpoint uses the active version of the template. The active version is typically the version of a template that's been most recently reviewed and approved for use, depending on your workflow. It isn't necessarily the latest version of a template.
", "EmailTemplateRequest$DefaultSubstitutions" : "A JSON object that specifies the default values to use for message variables in the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable. When you create a message that's based on the template, you can override these defaults with message-specific and address-specific variables and values.
", "EmailTemplateRequest$HtmlPart" : "The message body, in HTML format, to use in email messages that are based on the message template. We recommend using HTML format for email clients that render HTML content. You can include links, formatted text, and more in an HTML message.
", "EmailTemplateRequest$Subject" : "The subject line, or title, to use in email messages that are based on the message template.
", "EmailTemplateRequest$TemplateDescription" : "A custom description of the message template.
", "EmailTemplateRequest$TextPart" : "The message body, in plain text format, to use in email messages that are based on the message template. We recommend using plain text format for email clients that don't render HTML content and clients that are connected to high-latency networks, such as mobile devices.
", "EmailTemplateResponse$Arn" : "The Amazon Resource Name (ARN) of the message template.
", - "EmailTemplateResponse$CreationDate" : "The date when the message template was created.
", + "EmailTemplateResponse$CreationDate" : "The date, in ISO 8601 format, when the message template was created.
", "EmailTemplateResponse$DefaultSubstitutions" : "The JSON object that specifies the default values that are used for message variables in the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable.
", "EmailTemplateResponse$HtmlPart" : "The message body, in HTML format, that's used in email messages that are based on the message template.
", - "EmailTemplateResponse$LastModifiedDate" : "The date when the message template was last modified.
", + "EmailTemplateResponse$LastModifiedDate" : "The date, in ISO 8601 format, when the message template was last modified.
", "EmailTemplateResponse$Subject" : "The subject line, or title, that's used in email messages that are based on the message template.
", "EmailTemplateResponse$TemplateDescription" : "The custom description of the message template.
", "EmailTemplateResponse$TemplateName" : "The name of the message template.
", "EmailTemplateResponse$TextPart" : "The message body, in plain text format, that's used in email messages that are based on the message template.
", + "EmailTemplateResponse$Version" : "The unique identifier, as an integer, for the active version of the message template, or the version of the template that you specified by using the version parameter in your request.
", "EndpointBatchItem$Address" : "The destination address for messages or push notifications that you send to the endpoint. The address varies by channel. For a push-notification channel, use the token provided by the push notification service, such as an Apple Push Notification service (APNs) device token or a Firebase Cloud Messaging (FCM) registration token. For the SMS channel, use a phone number in E.164 format, such as +12065550100. For the email channel, use an email address.
", "EndpointBatchItem$EffectiveDate" : "The date and time, in ISO 8601 format, when the endpoint was created or updated.
", "EndpointBatchItem$EndpointStatus" : "Specifies whether to send messages or push notifications to the endpoint. Valid values are: ACTIVE, messages are sent to the endpoint; and, INACTIVE, messages aren’t sent to the endpoint.
Amazon Pinpoint automatically sets this value to ACTIVE when you create an endpoint or update an existing endpoint. Amazon Pinpoint automatically sets this value to INACTIVE if you update another endpoint that has the same address specified by the Address property.
", @@ -1856,10 +1880,10 @@ "EndpointBatchItem$RequestId" : "The unique identifier for the request to create or update the endpoint.
", "EndpointDemographic$AppVersion" : "The version of the app that's associated with the endpoint.
", "EndpointDemographic$Locale" : "The locale of the endpoint, in the following format: the ISO 639-1 alpha-2 code, followed by an underscore (_), followed by an ISO 3166-1 alpha-2 value.
", - "EndpointDemographic$Make" : "The manufacturer of the endpoint device, such as Apple or Samsung.
", - "EndpointDemographic$Model" : "The model name or number of the endpoint device, such as iPhone.
", + "EndpointDemographic$Make" : "The manufacturer of the endpoint device, such as apple or samsung.
", + "EndpointDemographic$Model" : "The model name or number of the endpoint device, such as iPhone or SM-G900F.
", "EndpointDemographic$ModelVersion" : "The model version of the endpoint device.
", - "EndpointDemographic$Platform" : "The platform of the endpoint device, such as iOS or Android.
", + "EndpointDemographic$Platform" : "The platform of the endpoint device, such as ios.
", "EndpointDemographic$PlatformVersion" : "The platform version of the endpoint device.
", "EndpointDemographic$Timezone" : "The time zone of the endpoint, specified as a tz database name value, such as America/Los_Angeles.
", "EndpointItemResponse$Message" : "The custom message that's returned in the response as a result of processing the endpoint data.
", @@ -1998,7 +2022,7 @@ "MessageResult$UpdatedToken" : "For push notifications that are sent through the GCM channel, specifies whether the endpoint's device registration token was updated as part of delivering the message.
", "MetricDimension$ComparisonOperator" : "The operator to use when comparing metric values. Valid values are: GREATER_THAN, LESS_THAN, GREATER_THAN_OR_EQUAL, LESS_THAN_OR_EQUAL, and EQUAL.
", "MultiConditionalBranch$NextActivity" : "The unique identifier for the next activity to perform, after completing the activity for the path.
", - "MultiConditionalSplitActivity$DefaultActivity" : "The activity to perform by default for any path in the activity.
", + "MultiConditionalSplitActivity$DefaultActivity" : "The unique identifier for the activity to perform for participants who don't meet any of the conditions specified for other paths in the activity.
", "NumberValidateRequest$IsoCountryCode" : "The two-character code, in ISO 3166-1 alpha-2 format, for the country or region where the phone number was originally registered.
", "NumberValidateRequest$PhoneNumber" : "The phone number to retrieve information about. The phone number that you provide should include a valid numeric country code. Otherwise, the operation might result in an error.
", "NumberValidateResponse$Carrier" : "The carrier or service provider that the phone number is currently registered with. In some countries and regions, this value may be the carrier or service provider that the phone number was originally registered with.
", @@ -2022,11 +2046,12 @@ "PushNotificationTemplateRequest$DefaultSubstitutions" : "A JSON object that specifies the default values to use for message variables in the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable. When you create a message that's based on the template, you can override these defaults with message-specific and address-specific variables and values.
", "PushNotificationTemplateRequest$TemplateDescription" : "A custom description of the message template.
", "PushNotificationTemplateResponse$Arn" : "The Amazon Resource Name (ARN) of the message template.
", - "PushNotificationTemplateResponse$CreationDate" : "The date when the message template was created.
", + "PushNotificationTemplateResponse$CreationDate" : "The date, in ISO 8601 format, when the message template was created.
", "PushNotificationTemplateResponse$DefaultSubstitutions" : "The JSON object that specifies the default values that are used for message variables in the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable.
", - "PushNotificationTemplateResponse$LastModifiedDate" : "The date when the message template was last modified.
", + "PushNotificationTemplateResponse$LastModifiedDate" : "The date, in ISO 8601 format, when the message template was last modified.
", "PushNotificationTemplateResponse$TemplateDescription" : "The custom description of the message template.
", "PushNotificationTemplateResponse$TemplateName" : "The name of the message template.
", + "PushNotificationTemplateResponse$Version" : "The unique identifier, as an integer, for the active version of the message template, or the version of the template that you specified by using the version parameter in your request.
", "QuietTime$End" : "The specific time when quiet time ends. This value has to use 24-hour notation and be in HH:MM format, where HH is the hour (with a leading zero, if applicable) and MM is the minutes. For example, use 02:30 to represent 2:30 AM, or 14:30 to represent 2:30 PM.
", "QuietTime$Start" : "The specific time when quiet time begins. This value has to use 24-hour notation and be in HH:MM format, where HH is the hour (with a leading zero, if applicable) and MM is the minutes. For example, use 02:30 to represent 2:30 AM, or 14:30 to represent 2:30 PM.
", "RandomSplitEntry$NextActivity" : "The unique identifier for the next activity to perform, after completing the activity for the path.
", @@ -2053,11 +2078,12 @@ "SMSTemplateRequest$TemplateDescription" : "A custom description of the message template.
", "SMSTemplateResponse$Arn" : "The Amazon Resource Name (ARN) of the message template.
", "SMSTemplateResponse$Body" : "The message body that's used in text messages that are based on the message template.
", - "SMSTemplateResponse$CreationDate" : "The date when the message template was created.
", + "SMSTemplateResponse$CreationDate" : "The date, in ISO 8601 format, when the message template was created.
", "SMSTemplateResponse$DefaultSubstitutions" : "The JSON object that specifies the default values that are used for message variables in the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable.
", - "SMSTemplateResponse$LastModifiedDate" : "The date when the message template was last modified.
", + "SMSTemplateResponse$LastModifiedDate" : "The date, in ISO 8601 format, when the message template was last modified.
", "SMSTemplateResponse$TemplateDescription" : "The custom description of the message template.
", "SMSTemplateResponse$TemplateName" : "The name of the message template.
", + "SMSTemplateResponse$Version" : "The unique identifier, as an integer, for the active version of the message template, or the version of the template that you specified by using the version parameter in your request.
", "Schedule$EndTime" : "The scheduled time, in ISO 8601 format, when the campaign ended or will end.
", "Schedule$StartTime" : "The scheduled time, in ISO 8601 format, when the campaign began or will begin.
", "Schedule$Timezone" : "The starting UTC offset for the campaign schedule, if the value of the IsLocalTime property is true. Valid values are: UTC, UTC+01, UTC+02, UTC+03, UTC+03:30, UTC+04, UTC+04:30, UTC+05,\n UTC+05:30, UTC+05:45, UTC+06, UTC+06:30, UTC+07, UTC+08, UTC+09, UTC+09:30,\n UTC+10, UTC+10:30, UTC+11, UTC+12, UTC+13, UTC-02, UTC-03, UTC-04, UTC-05, UTC-06,\n UTC-07, UTC-08, UTC-09, UTC-10, and UTC-11.
", @@ -2083,12 +2109,25 @@ "SimpleEmailPart$Data" : "The textual data of the message content.
", "StartCondition$Description" : "The custom description of the condition.
", "Template$Name" : "The name of the message template to use for the message. If specified, this value must match the name of an existing message template.
", + "Template$Version" : "The unique identifier for the version of the message template to use for the message. If specified, this value must match the identifier for an existing template version. To retrieve a list of versions and version identifiers for a template, use the Template Versions resource.
If you don't specify a value for this property, Amazon Pinpoint uses the active version of the template. The active version is typically the version of a template that's been most recently reviewed and approved for use, depending on your workflow. It isn't necessarily the latest version of a template.
", + "TemplateActiveVersionRequest$Version" : "The unique identifier for the version of the message template to use as the active version of the template. If specified, this value must match the identifier for an existing template version. To retrieve a list of versions and version identifiers for a template, use the Template Versions resource.
", "TemplateResponse$Arn" : "The Amazon Resource Name (ARN) of the message template.
", - "TemplateResponse$CreationDate" : "The date when the message template was created.
", + "TemplateResponse$CreationDate" : "The date, in ISO 8601 format, when the message template was created.
", "TemplateResponse$DefaultSubstitutions" : "The JSON object that specifies the default values that are used for message variables in the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable.
", - "TemplateResponse$LastModifiedDate" : "The date when the message template was last modified.
", + "TemplateResponse$LastModifiedDate" : "The date, in ISO 8601 format, when the message template was last modified.
", "TemplateResponse$TemplateDescription" : "The custom description of the message template.
", "TemplateResponse$TemplateName" : "The name of the message template.
", + "TemplateResponse$Version" : "The unique identifier, as an integer, for the active version of the message template.
", + "TemplateVersionResponse$CreationDate" : "The date, in ISO 8601 format, when the version of the message template was created.
", + "TemplateVersionResponse$DefaultSubstitutions" : "A JSON object that specifies the default values that are used for message variables in the version of the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable.
", + "TemplateVersionResponse$LastModifiedDate" : "The date, in ISO 8601 format, when the version of the message template was last modified.
", + "TemplateVersionResponse$TemplateDescription" : "The custom description of the version of the message template.
", + "TemplateVersionResponse$TemplateName" : "The name of the message template.
", + "TemplateVersionResponse$TemplateType" : "The type of channel that the message template is designed for. Possible values are: EMAIL, PUSH, SMS, and VOICE.
", + "TemplateVersionResponse$Version" : "The unique identifier for the version of the message template. This value is an integer that Amazon Pinpoint automatically increments and assigns to each new version of a template.
", + "TemplateVersionsResponse$Message" : "The message that's returned from the API for the request to retrieve information about all the versions of the message template.
", + "TemplateVersionsResponse$NextToken" : "The string to use in a subsequent request to get the next page of results in a paginated response. This value is null if there are no additional pages.
", + "TemplateVersionsResponse$RequestID" : "The unique identifier for the request to retrieve information about all the versions of the message template.
", "TemplatesResponse$NextToken" : "The string to use in a subsequent request to get the next page of results in a paginated response. This value is null if there are no additional pages.
", "TreatmentResource$Id" : "The unique identifier for the treatment.
", "TreatmentResource$TreatmentDescription" : "The custom description of the treatment.
", @@ -2110,12 +2149,13 @@ "VoiceTemplateRequest$VoiceId" : "The name of the voice to use when delivering messages that are based on the message template. For a list of supported voices, see the Amazon Polly Developer Guide.
", "VoiceTemplateResponse$Arn" : "The Amazon Resource Name (ARN) of the message template.
", "VoiceTemplateResponse$Body" : "The text of the script that's used in messages that are based on the message template, in plain text format.
", - "VoiceTemplateResponse$CreationDate" : "The date when the message template was created.
", + "VoiceTemplateResponse$CreationDate" : "The date, in ISO 8601 format, when the message template was created.
", "VoiceTemplateResponse$DefaultSubstitutions" : "The JSON object that specifies the default values that are used for message variables in the message template. This object is a set of key-value pairs. Each key defines a message variable in the template. The corresponding value defines the default value for that variable.
", "VoiceTemplateResponse$LanguageCode" : "The code for the language that's used when synthesizing the text of the script in messages that are based on the message template. For a list of supported languages and the code for each one, see the Amazon Polly Developer Guide.
", - "VoiceTemplateResponse$LastModifiedDate" : "The date when the message template was last modified.
", + "VoiceTemplateResponse$LastModifiedDate" : "The date, in ISO 8601 format, when the message template was last modified.
", "VoiceTemplateResponse$TemplateDescription" : "The custom description of the message template.
", "VoiceTemplateResponse$TemplateName" : "The name of the message template.
", + "VoiceTemplateResponse$Version" : "The unique identifier, as an integer, for the active version of the message template, or the version of the template that you specified by using the version parameter in your request.
", "VoiceTemplateResponse$VoiceId" : "The name of the voice that's used when delivering messages that are based on the message template. For a list of supported voices, see the Amazon Polly Developer Guide.
", "WaitActivity$NextActivity" : "The unique identifier for the next activity to perform, after performing the wait activity.
", "WaitTime$WaitFor" : "The amount of time to wait, as a duration in ISO 8601 format, before determining whether the activity's conditions have been met or moving participants to the next activity in the journey.
", @@ -2131,7 +2171,7 @@ "WriteJourneyRequest$LastModifiedDate" : "The date, in ISO 8601 format, when the journey was last modified.
", "WriteJourneyRequest$Name" : "The name of the journey. A journey name can contain a maximum of 150 characters. The characters can be alphanumeric characters or symbols, such as underscores (_) or hyphens (-). A journey name can't contain any spaces.
", "WriteJourneyRequest$RefreshFrequency" : "The frequency with which Amazon Pinpoint evaluates segment and event data for the journey, as a duration in ISO 8601 format.
", - "WriteJourneyRequest$StartActivity" : "The unique identifier for the first activity in the journey.
", + "WriteJourneyRequest$StartActivity" : "The unique identifier for the first activity in the journey. An activity identifier can contain a maximum of 128 characters. The characters must be alphanumeric characters.
", "WriteSegmentRequest$Name" : "The name of the segment.
", "WriteTreatmentResource$TreatmentDescription" : "A custom description of the treatment.
", "WriteTreatmentResource$TreatmentName" : "The custom name of the treatment. A treatment is a variation of a campaign that's used for A/B testing of a campaign.
", diff --git a/models/apis/rds/2014-10-31/api-2.json b/models/apis/rds/2014-10-31/api-2.json index e9d65b6a2a0..3504deff3ff 100644 --- a/models/apis/rds/2014-10-31/api-2.json +++ b/models/apis/rds/2014-10-31/api-2.json @@ -120,6 +120,22 @@ {"shape":"InvalidDBClusterStateFault"} ] }, + "CancelExportTask":{ + "name":"CancelExportTask", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CancelExportTaskMessage"}, + "output":{ + "shape":"ExportTask", + "resultWrapper":"CancelExportTaskResult" + }, + "errors":[ + {"shape":"ExportTaskNotFoundFault"}, + {"shape":"InvalidExportTaskStateFault"} + ] + }, "CopyDBClusterParameterGroup":{ "name":"CopyDBClusterParameterGroup", "http":{ @@ -1188,6 +1204,21 @@ "resultWrapper":"DescribeEventsResult" } }, + "DescribeExportTasks":{ + "name":"DescribeExportTasks", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeExportTasksMessage"}, + "output":{ + "shape":"ExportTasksMessage", + "resultWrapper":"DescribeExportTasksResult" + }, + "errors":[ + {"shape":"ExportTaskNotFoundFault"} + ] + }, "DescribeGlobalClusters":{ "name":"DescribeGlobalClusters", "http":{ @@ -1396,6 +1427,21 @@ {"shape":"DBClusterNotFoundFault"} ] }, + "ModifyCertificates":{ + "name":"ModifyCertificates", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ModifyCertificatesMessage"}, + "output":{ + "shape":"ModifyCertificatesResult", + "resultWrapper":"ModifyCertificatesResult" + }, + "errors":[ + {"shape":"CertificateNotFoundFault"} + ] + }, "ModifyCurrentDBClusterCapacity":{ "name":"ModifyCurrentDBClusterCapacity", "http":{ @@ -2137,6 +2183,29 @@ {"shape":"KMSKeyNotAccessibleFault"} ] }, + "StartExportTask":{ + "name":"StartExportTask", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartExportTaskMessage"}, + "output":{ + "shape":"ExportTask", + "resultWrapper":"StartExportTaskResult" + }, + "errors":[ + {"shape":"DBSnapshotNotFoundFault"}, + {"shape":"DBClusterSnapshotNotFoundFault"}, + {"shape":"ExportTaskAlreadyExistsFault"}, + {"shape":"InvalidS3BucketFault"}, + {"shape":"IamRoleNotFoundFault"}, + {"shape":"IamRoleMissingPermissionsFault"}, + {"shape":"InvalidExportOnlyFault"}, + {"shape":"KMSKeyNotAccessibleFault"}, + {"shape":"InvalidExportSourceStateFault"} + ] + }, "StopActivityStream":{ "name":"StopActivityStream", "http":{ @@ -2436,6 +2505,13 @@ }, "Boolean":{"type":"boolean"}, "BooleanOptional":{"type":"boolean"}, + "CancelExportTaskMessage":{ + "type":"structure", + "required":["ExportTaskIdentifier"], + "members":{ + "ExportTaskIdentifier":{"shape":"String"} + } + }, "Certificate":{ "type":"structure", "members":{ @@ -2444,7 +2520,9 @@ "Thumbprint":{"shape":"String"}, "ValidFrom":{"shape":"TStamp"}, "ValidTill":{"shape":"TStamp"}, - "CertificateArn":{"shape":"String"} + "CertificateArn":{"shape":"String"}, + "CustomerOverride":{"shape":"BooleanOptional"}, + "CustomerOverrideValidTill":{"shape":"TStamp"} }, "wrapper":true }, @@ -4782,6 +4860,16 @@ "Marker":{"shape":"String"} } }, + "DescribeExportTasksMessage":{ + "type":"structure", + "members":{ + "ExportTaskIdentifier":{"shape":"String"}, + "SourceArn":{"shape":"String"}, + "Filters":{"shape":"FilterList"}, + "Marker":{"shape":"String"}, + "MaxRecords":{"shape":"String"} + } + }, "DescribeGlobalClustersMessage":{ "type":"structure", "members":{ @@ -5096,6 +5184,62 @@ "Events":{"shape":"EventList"} } }, + "ExportTask":{ + "type":"structure", + "members":{ + "ExportTaskIdentifier":{"shape":"String"}, + 
"SourceArn":{"shape":"String"}, + "ExportOnly":{"shape":"StringList"}, + "SnapshotTime":{"shape":"TStamp"}, + "TaskStartTime":{"shape":"TStamp"}, + "TaskEndTime":{"shape":"TStamp"}, + "S3Bucket":{"shape":"String"}, + "S3Prefix":{"shape":"String"}, + "IamRoleArn":{"shape":"String"}, + "KmsKeyId":{"shape":"String"}, + "Status":{"shape":"String"}, + "PercentProgress":{"shape":"Integer"}, + "TotalExtractedDataInGB":{"shape":"Integer"}, + "FailureCause":{"shape":"String"}, + "WarningMessage":{"shape":"String"} + } + }, + "ExportTaskAlreadyExistsFault":{ + "type":"structure", + "members":{ + }, + "error":{ + "code":"ExportTaskAlreadyExists", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "ExportTaskNotFoundFault":{ + "type":"structure", + "members":{ + }, + "error":{ + "code":"ExportTaskNotFound", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, + "ExportTasksList":{ + "type":"list", + "member":{ + "shape":"ExportTask", + "locationName":"ExportTask" + } + }, + "ExportTasksMessage":{ + "type":"structure", + "members":{ + "Marker":{"shape":"String"}, + "ExportTasks":{"shape":"ExportTasksList"} + } + }, "FailoverDBClusterMessage":{ "type":"structure", "required":["DBClusterIdentifier"], @@ -5239,6 +5383,28 @@ "locationName":"IPRange" } }, + "IamRoleMissingPermissionsFault":{ + "type":"structure", + "members":{ + }, + "error":{ + "code":"IamRoleMissingPermissions", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "IamRoleNotFoundFault":{ + "type":"structure", + "members":{ + }, + "error":{ + "code":"IamRoleNotFound", + "httpStatusCode":404, + "senderFault":true + }, + "exception":true + }, "ImportInstallationMediaMessage":{ "type":"structure", "required":[ @@ -5511,6 +5677,39 @@ }, "exception":true }, + "InvalidExportOnlyFault":{ + "type":"structure", + "members":{ + }, + "error":{ + "code":"InvalidExportOnly", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "InvalidExportSourceStateFault":{ + "type":"structure", + "members":{ + }, + "error":{ + "code":"InvalidExportSourceState", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, + "InvalidExportTaskStateFault":{ + "type":"structure", + "members":{ + }, + "error":{ + "code":"InvalidExportTaskStateFault", + "httpStatusCode":400, + "senderFault":true + }, + "exception":true + }, "InvalidGlobalClusterStateFault":{ "type":"structure", "members":{ @@ -5625,6 +5824,19 @@ "locationName":"MinimumEngineVersionPerAllowedValue" } }, + "ModifyCertificatesMessage":{ + "type":"structure", + "members":{ + "CertificateIdentifier":{"shape":"String"}, + "RemoveCustomerOverride":{"shape":"BooleanOptional"} + } + }, + "ModifyCertificatesResult":{ + "type":"structure", + "members":{ + "Certificate":{"shape":"Certificate"} + } + }, "ModifyCurrentDBClusterCapacityMessage":{ "type":"structure", "required":["DBClusterIdentifier"], @@ -5752,7 +5964,8 @@ "ProcessorFeatures":{"shape":"ProcessorFeatureList"}, "UseDefaultProcessorFeatures":{"shape":"BooleanOptional"}, "DeletionProtection":{"shape":"BooleanOptional"}, - "MaxAllocatedStorage":{"shape":"IntegerOptional"} + "MaxAllocatedStorage":{"shape":"IntegerOptional"}, + "CertificateRotationRestart":{"shape":"BooleanOptional"} } }, "ModifyDBInstanceResult":{ @@ -7111,6 +7324,25 @@ "DBInstance":{"shape":"DBInstance"} } }, + "StartExportTaskMessage":{ + "type":"structure", + "required":[ + "ExportTaskIdentifier", + "SourceArn", + "S3BucketName", + "IamRoleArn", + "KmsKeyId" + ], + "members":{ + 
"ExportTaskIdentifier":{"shape":"String"}, + "SourceArn":{"shape":"String"}, + "S3BucketName":{"shape":"String"}, + "IamRoleArn":{"shape":"String"}, + "KmsKeyId":{"shape":"String"}, + "S3Prefix":{"shape":"String"}, + "ExportOnly":{"shape":"StringList"} + } + }, "StopActivityStreamRequest":{ "type":"structure", "required":["ResourceArn"], diff --git a/models/apis/rds/2014-10-31/docs-2.json b/models/apis/rds/2014-10-31/docs-2.json index ec4a38eb1ac..5c4cddd5635 100644 --- a/models/apis/rds/2014-10-31/docs-2.json +++ b/models/apis/rds/2014-10-31/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique.
Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use.
This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Note that Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and we list following some related topics from the user guide.
Amazon RDS API Reference
For the alphabetical list of API actions, see API Actions.
For the alphabetical list of data types, see Data Types.
For a list of common query parameters, see Common Parameters.
For descriptions of the error codes, see Common Errors.
Amazon RDS User Guide
For a summary of the Amazon RDS interfaces, see Available RDS Interfaces.
For more information about how to use the Query API, see Using the Query API.
Amazon Relational Database Service (Amazon RDS) is a web service that makes it easier to set up, operate, and scale a relational database in the cloud. It provides cost-efficient, resizeable capacity for an industry-standard relational database and manages common database administration tasks, freeing up developers to focus on what makes their applications and businesses unique.
Amazon RDS gives you access to the capabilities of a MySQL, MariaDB, PostgreSQL, Microsoft SQL Server, Oracle, or Amazon Aurora database server. These capabilities mean that the code, applications, and tools you already use today with your existing databases work with Amazon RDS without modification. Amazon RDS automatically backs up your database and maintains the database software that powers your DB instance. Amazon RDS is flexible: you can scale your DB instance's compute resources and storage capacity to meet your application's demand. As with all Amazon Web Services, there are no up-front investments, and you pay only for the resources you use.
This interface reference for Amazon RDS contains documentation for a programming or command line interface you can use to manage Amazon RDS. Amazon RDS is asynchronous, which means that some interfaces might require techniques such as polling or callback functions to determine when a command has been applied. In this reference, the parameter descriptions indicate whether a command is applied immediately, on the next instance reboot, or during the maintenance window. The reference structure is as follows, and some related topics from the user guide are listed after it.
Amazon RDS API Reference
For the alphabetical list of API actions, see API Actions.
For the alphabetical list of data types, see Data Types.
For a list of common query parameters, see Common Parameters.
For descriptions of the error codes, see Common Errors.
Amazon RDS User Guide
For a summary of the Amazon RDS interfaces, see Available RDS Interfaces.
For more information about how to use the Query API, see Using the Query API.
Associates an Identity and Access Management (IAM) role with an Amazon Aurora DB cluster. For more information, see Authorizing Amazon Aurora MySQL to Access Other AWS Services on Your Behalf in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Associates an AWS Identity and Access Management (IAM) role with a DB instance.
To add a role to a DB instance, the status of the DB instance must be available.
Applies a pending maintenance action to a resource (for example, to a DB instance).
", "AuthorizeDBSecurityGroupIngress": "Enables ingress to a DBSecurityGroup using one of two forms of authorization. First, EC2 or VPC security groups can be added to the DBSecurityGroup if the application using the database is running on EC2 or VPC instances. Second, IP ranges are available if the application accessing your database is running on the Internet. Required parameters for this API are one of CIDR range, EC2SecurityGroupId for VPC, or (EC2SecurityGroupOwnerId and either EC2SecurityGroupName or EC2SecurityGroupId for non-VPC).
You can't authorize ingress from an EC2 security group in one AWS Region to an Amazon RDS DB instance in another. You can't authorize ingress from a VPC security group in one VPC to an Amazon RDS DB instance in another.
For an overview of CIDR ranges, go to the Wikipedia Tutorial.
", "BacktrackDBCluster": "Backtracks a DB cluster to a specific time, without creating a new DB cluster.
For more information on backtracking, see Backtracking an Aurora DB Cluster in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Cancels an export task in progress that is exporting a snapshot to Amazon S3. Any data that has already been written to the S3 bucket isn't removed.
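Added for illustration (not part of the upstream model diff): a minimal Go sketch of the new CancelExportTask operation, assuming the v0.19.0 preview `XxxRequest`/`Send` client pattern; the task identifier is hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := rds.New(cfg)

	// Cancel a running snapshot export; data already written to S3 stays in place.
	req := svc.CancelExportTaskRequest(&rds.CancelExportTaskInput{
		ExportTaskIdentifier: aws.String("my-snapshot-export"), // hypothetical identifier
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err) // e.g. ExportTaskNotFound or InvalidExportTaskStateFault
	}
	fmt.Println(resp) // the response carries the ExportTask fields, including Status
}
```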
", "CopyDBClusterParameterGroup": "Copies the specified DB cluster parameter group.
This action only applies to Aurora DB clusters.
Copies a snapshot of a DB cluster.
To copy a DB cluster snapshot from a shared manual DB cluster snapshot, SourceDBClusterSnapshotIdentifier
must be the Amazon Resource Name (ARN) of the shared DB cluster snapshot.
You can copy an encrypted DB cluster snapshot from another AWS Region. In that case, the AWS Region where you call the CopyDBClusterSnapshot
action is the destination AWS Region for the encrypted DB cluster snapshot to be copied to. To copy an encrypted DB cluster snapshot from another AWS Region, you must provide the following values:
KmsKeyId
- The AWS Key Management Service (AWS KMS) key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region.
PreSignedUrl
- A URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot
action to be called in the source AWS Region where the DB cluster snapshot is copied from. The pre-signed URL must be a valid request for the CopyDBClusterSnapshot
API action that can be executed in the source AWS Region that contains the encrypted DB cluster snapshot to be copied.
The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The KMS key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. This is the same identifier for both the CopyDBClusterSnapshot
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that the DB cluster snapshot will be created in.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster snapshot from the us-west-2 AWS Region, then your SourceDBClusterSnapshotIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
TargetDBClusterSnapshotIdentifier
- The identifier for the new copy of the DB cluster snapshot in the destination AWS Region.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the ARN format for the source AWS Region and is the same value as the SourceDBClusterSnapshotIdentifier
in the pre-signed URL.
To cancel the copy operation once it is in progress, delete the target DB cluster snapshot identified by TargetDBClusterSnapshotIdentifier
while that DB cluster snapshot is in \"copying\" status.
For more information on copying encrypted DB cluster snapshots from one AWS Region to another, see Copying a Snapshot in the Amazon Aurora User Guide.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Copies a snapshot of a DB cluster.
To copy a DB cluster snapshot from a shared manual DB cluster snapshot, SourceDBClusterSnapshotIdentifier
must be the Amazon Resource Name (ARN) of the shared DB cluster snapshot.
You can copy an encrypted DB cluster snapshot from another AWS Region. In that case, the AWS Region where you call the CopyDBClusterSnapshot
action is the destination AWS Region for the encrypted DB cluster snapshot to be copied to. To copy an encrypted DB cluster snapshot from another AWS Region, you must provide the following values:
KmsKeyId
- The AWS Key Management Service (AWS KMS) key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region.
PreSignedUrl
- A URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot
action to be called in the source AWS Region where the DB cluster snapshot is copied from. The pre-signed URL must be a valid request for the CopyDBClusterSnapshot
API action that can be executed in the source AWS Region that contains the encrypted DB cluster snapshot to be copied.
The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The KMS key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. This is the same identifier for both the CopyDBClusterSnapshot
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that the DB cluster snapshot is to be created in.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster snapshot from the us-west-2 AWS Region, then your SourceDBClusterSnapshotIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
TargetDBClusterSnapshotIdentifier
- The identifier for the new copy of the DB cluster snapshot in the destination AWS Region.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the ARN format for the source AWS Region and is the same value as the SourceDBClusterSnapshotIdentifier
in the pre-signed URL.
To cancel the copy operation once it is in progress, delete the target DB cluster snapshot identified by TargetDBClusterSnapshotIdentifier
while that DB cluster snapshot is in \"copying\" status.
For more information on copying encrypted DB cluster snapshots from one AWS Region to another, see Copying a Snapshot in the Amazon Aurora User Guide.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Copies the specified DB parameter group.
", - "CopyDBSnapshot": "Copies the specified DB snapshot. The source DB snapshot must be in the \"available\" state.
You can copy a snapshot from one AWS Region to another. In that case, the AWS Region where you call the CopyDBSnapshot
action is the destination AWS Region for the DB snapshot copy.
For more information about copying snapshots, see Copying a DB Snapshot in the Amazon RDS User Guide.
", + "CopyDBSnapshot": "Copies the specified DB snapshot. The source DB snapshot must be in the \"available\" state.
You can copy a snapshot from one AWS Region to another. In that case, the AWS Region where you call the CopyDBSnapshot
action is the destination AWS Region for the DB snapshot copy.
For more information about copying snapshots, see Copying a DB Snapshot in the Amazon RDS User Guide.
", "CopyOptionGroup": "Copies the specified option group.
", "CreateCustomAvailabilityZone": "Creates a custom Availability Zone (AZ).
A custom AZ is an on-premises AZ that is integrated with a VMware vSphere cluster.
For more information about RDS on VMware, see the RDS on VMware User Guide.
", "CreateDBCluster": "Creates a new Amazon Aurora DB cluster.
You can use the ReplicationSourceIdentifier
parameter to create the DB cluster as a Read Replica of another DB cluster or Amazon RDS MySQL DB instance. For cross-region replication where the DB cluster identified by ReplicationSourceIdentifier
is encrypted, you must also specify the PreSignedUrl
parameter.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Creates a new DB security group. DB security groups control access to a DB instance.
A DB security group controls access to EC2-Classic DB instances that are not in a VPC.
Creates a DBSnapshot. The source DBInstance must be in \"available\" state.
", "CreateDBSubnetGroup": "Creates a new DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the AWS Region.
", - "CreateEventSubscription": "Creates an RDS event notification subscription. This action requires a topic ARN (Amazon Resource Name) created by either the RDS console, the SNS console, or the SNS API. To obtain an ARN with SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the SNS console.
You can specify the type of source (SourceType) you want to be notified of, provide a list of RDS sources (SourceIds) that triggers the events, and provide a list of event categories (EventCategories) for events you want to be notified of. For example, you can specify SourceType = db-instance, SourceIds = mydbinstance1, mydbinstance2 and EventCategories = Availability, Backup.
If you specify both the SourceType and SourceIds, such as SourceType = db-instance and SourceIdentifier = myDBInstance1, you are notified of all the db-instance events for the specified source. If you specify a SourceType but do not specify a SourceIdentifier, you receive notice of the events for that source type for all your RDS sources. If you do not specify either the SourceType nor the SourceIdentifier, you are notified of events generated from all RDS sources belonging to your customer account.
RDS event notification is only available for unencrypted SNS topics. If you specify an encrypted SNS topic, event notifications aren't sent for the topic.
Creates an RDS event notification subscription. This action requires a topic Amazon Resource Name (ARN) created by either the RDS console, the SNS console, or the SNS API. To obtain an ARN with SNS, you must create a topic in Amazon SNS and subscribe to the topic. The ARN is displayed in the SNS console.
You can specify the type of source (SourceType) you want to be notified of, provide a list of RDS sources (SourceIds) that triggers the events, and provide a list of event categories (EventCategories) for events you want to be notified of. For example, you can specify SourceType = db-instance, SourceIds = mydbinstance1, mydbinstance2 and EventCategories = Availability, Backup.
If you specify both the SourceType and SourceIds, such as SourceType = db-instance and SourceIdentifier = myDBInstance1, you are notified of all the db-instance events for the specified source. If you specify a SourceType but do not specify a SourceIdentifier, you receive notice of the events for that source type for all your RDS sources. If you don't specify either the SourceType or the SourceIdentifier, you are notified of events generated from all RDS sources belonging to your customer account.
RDS event notification is only available for unencrypted SNS topics. If you specify an encrypted SNS topic, event notifications aren't sent for the topic.
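As an illustration of the subscription parameters described above — a minimal sketch, assuming the v0.19.0 preview `CreateEventSubscriptionRequest`/`Send` pattern; the subscription name, instance identifiers, and topic ARN are hypothetical.

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := rds.New(cfg)

	// Subscribe an (unencrypted) SNS topic to availability and backup
	// events for two DB instances.
	req := svc.CreateEventSubscriptionRequest(&rds.CreateEventSubscriptionInput{
		SubscriptionName: aws.String("my-instance-events"),
		SnsTopicArn:      aws.String("arn:aws:sns:us-west-2:123456789012:my-rds-topic"),
		SourceType:       aws.String("db-instance"),
		SourceIds:        []string{"mydbinstance1", "mydbinstance2"},
		EventCategories:  []string{"availability", "backup"},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(resp)
}
```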
Creates an Aurora global database spread across multiple regions. The global database contains a single primary cluster with read-write capability, and a read-only secondary cluster that receives data from the primary cluster through high-speed replication performed by the Aurora storage subsystem.
You can create a global database that is initially empty, and then add a primary cluster and a secondary cluster to it. Or you can specify an existing Aurora cluster during the create operation, and this cluster becomes the primary cluster of the global database.
This action only applies to Aurora DB clusters.
Creates a new option group. You can create up to 20 option groups.
", "DeleteCustomAvailabilityZone": "Deletes a custom Availability Zone (AZ).
A custom AZ is an on-premises AZ that is integrated with a VMware vSphere cluster.
For more information about RDS on VMware, see the RDS on VMware User Guide.
", @@ -34,7 +35,7 @@ "DeleteDBClusterEndpoint": "Deletes a custom endpoint and removes it from an Amazon Aurora DB cluster.
This action only applies to Aurora DB clusters.
Deletes a specified DB cluster parameter group. The DB cluster parameter group to be deleted can't be associated with any DB clusters.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Deletes a DB cluster snapshot. If the snapshot is being copied, the copy operation is terminated.
The DB cluster snapshot must be in the available
state to be deleted.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
The DeleteDBInstance action deletes a previously provisioned DB instance. When you delete a DB instance, all automated backups for that instance are deleted and can't be recovered. Manual DB snapshots of the DB instance to be deleted by DeleteDBInstance
are not deleted.
If you request a final DB snapshot the status of the Amazon RDS DB instance is deleting
until the DB snapshot is created. The API action DescribeDBInstance
is used to monitor the status of this operation. The action can't be canceled or reverted once submitted.
Note that when a DB instance is in a failure state and has a status of failed
, incompatible-restore
, or incompatible-network
, you can only delete it when you skip creation of the final snapshot with the SkipFinalSnapshot
parameter.
If the specified DB instance is part of an Amazon Aurora DB cluster, you can't delete the DB instance if both of the following conditions are true:
The DB cluster is a Read Replica of another Amazon Aurora DB cluster.
The DB instance is the only instance in the DB cluster.
To delete a DB instance in this case, first call the PromoteReadReplicaDBCluster
API action to promote the DB cluster so it's no longer a Read Replica. After the promotion completes, then call the DeleteDBInstance
API action to delete the final instance in the DB cluster.
The DeleteDBInstance action deletes a previously provisioned DB instance. When you delete a DB instance, all automated backups for that instance are deleted and can't be recovered. Manual DB snapshots of the DB instance to be deleted by DeleteDBInstance
are not deleted.
If you request a final DB snapshot the status of the Amazon RDS DB instance is deleting
until the DB snapshot is created. The API action DescribeDBInstance
is used to monitor the status of this operation. The action can't be canceled or reverted once submitted.
When a DB instance is in a failure state and has a status of failed
, incompatible-restore
, or incompatible-network
, you can only delete it when you skip creation of the final snapshot with the SkipFinalSnapshot
parameter.
If the specified DB instance is part of an Amazon Aurora DB cluster, you can't delete the DB instance if both of the following conditions are true:
The DB cluster is a Read Replica of another Amazon Aurora DB cluster.
The DB instance is the only instance in the DB cluster.
To delete a DB instance in this case, first call the PromoteReadReplicaDBCluster
API action to promote the DB cluster so it's no longer a Read Replica. After the promotion completes, then call the DeleteDBInstance
API action to delete the final instance in the DB cluster.
Deletes automated backups based on the source instance's DbiResourceId
value or the restorable instance's resource ID.
Deletes a specified DB parameter group. The DB parameter group to be deleted can't be associated with any DB instances.
", "DeleteDBProxy": "This is prerelease documentation for the RDS Database Proxy feature in preview release. It is subject to change.
Deletes an existing proxy.
", @@ -55,10 +56,10 @@ "DescribeDBClusterParameters": "Returns the detailed parameter list for a particular DB cluster parameter group.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Returns a list of DB cluster snapshot attribute names and values for a manual DB cluster snapshot.
When sharing snapshots with other AWS accounts, DescribeDBClusterSnapshotAttributes
returns the restore
attribute and a list of IDs for the AWS accounts that are authorized to copy or restore the manual DB cluster snapshot. If all
is included in the list of values for the restore
attribute, then the manual DB cluster snapshot is public and can be copied or restored by all AWS accounts.
To add or remove access for an AWS account to copy or restore a manual DB cluster snapshot, or to make the manual DB cluster snapshot public or private, use the ModifyDBClusterSnapshotAttribute
API action.
This action only applies to Aurora DB clusters.
Returns information about DB cluster snapshots. This API action supports pagination.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Returns information about provisioned Aurora DB clusters. This API supports pagination.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Returns information about provisioned Aurora DB clusters. This API supports pagination.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This operation can also return information for Amazon Neptune DB instances and Amazon DocumentDB instances.
Returns a list of the available DB engines.
", "DescribeDBInstanceAutomatedBackups": "Displays backups for both current and deleted instances. For example, use this operation to find details about automated backups for previously deleted instances. Current instances with retention periods greater than zero (0) are returned for both the DescribeDBInstanceAutomatedBackups
and DescribeDBInstances
operations.
All parameters are optional.
", - "DescribeDBInstances": "Returns information about provisioned RDS instances. This API supports pagination.
", + "DescribeDBInstances": "Returns information about provisioned RDS instances. This API supports pagination.
This operation can also return information for Amazon Neptune DB instances and Amazon DocumentDB instances.
Returns a list of DB log files for the DB instance.
", "DescribeDBParameterGroups": " Returns a list of DBParameterGroup
descriptions. If a DBParameterGroupName
is specified, the list will contain only the description of the specified DB parameter group.
Returns the detailed parameter list for a particular DB parameter group.
", @@ -74,6 +75,7 @@ "DescribeEventCategories": "Displays a list of categories for all event source types, or, if specified, for a specified source type. You can see a list of the event categories and source types in the Events topic in the Amazon RDS User Guide.
", "DescribeEventSubscriptions": "Lists all the subscription descriptions for a customer account. The description for a subscription includes SubscriptionName, SNSTopicARN, CustomerID, SourceType, SourceID, CreationTime, and Status.
If you specify a SubscriptionName, lists the description for that subscription.
", "DescribeEvents": "Returns events related to DB instances, DB security groups, DB snapshots, and DB parameter groups for the past 14 days. Events specific to a particular DB instance, DB security group, database snapshot, or DB parameter group can be obtained by providing the name as a parameter. By default, the past hour of events are returned.
", + "DescribeExportTasks": "Returns information about a snapshot export to Amazon S3. This API operation supports pagination.
", "DescribeGlobalClusters": "Returns information about Aurora global database clusters. This API supports pagination.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Describes the available installation media for a DB engine that requires an on-premises customer provided license, such as Microsoft SQL Server.
", "DescribeOptionGroupOptions": "Describes all available options.
", @@ -88,6 +90,7 @@ "FailoverDBCluster": "Forces a failover for a DB cluster.
A failover for a DB cluster promotes one of the Aurora Replicas (read-only instances) in the DB cluster to be the primary instance (the cluster writer).
Amazon Aurora will automatically fail over to an Aurora Replica, if one exists, when the primary instance fails. You can force a failover when you want to simulate a failure of a primary instance for testing. Because each instance in a DB cluster has its own endpoint address, you will need to clean up and re-establish any existing connections that use those endpoint addresses when the failover is complete.
For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Imports the installation media for a DB engine that requires an on-premises customer provided license, such as SQL Server.
", "ListTagsForResource": "Lists all tags on an Amazon RDS resource.
For an overview on tagging an Amazon RDS resource, see Tagging Amazon RDS Resources in the Amazon RDS User Guide.
", + "ModifyCertificates": "Override the system-default Secure Sockets Layer/Transport Layer Security (SSL/TLS) certificate for Amazon RDS for new DB instances temporarily, or remove the override.
By using this operation, you can specify an RDS-approved SSL/TLS certificate for new DB instances that is different from the default certificate provided by RDS. You can also use this operation to remove the override, so that new DB instances use the default certificate provided by RDS.
You might need to override the default certificate in the following situations:
You already migrated your applications to support the latest certificate authority (CA) certificate, but the new CA certificate is not yet the RDS default CA certificate for the specified AWS Region.
RDS has already moved to a new default CA certificate for the specified AWS Region, but you are still in the process of supporting the new CA certificate. In this case, you temporarily need additional time to finish your application changes.
For more information about rotating your SSL/TLS certificate for RDS DB engines, see Rotating Your SSL/TLS Certificate in the Amazon RDS User Guide.
For more information about rotating your SSL/TLS certificate for Aurora DB engines, see Rotating Your SSL/TLS Certificate in the Amazon Aurora User Guide.
", "ModifyCurrentDBClusterCapacity": "Set the capacity of an Aurora Serverless DB cluster to a specific value.
Aurora Serverless scales seamlessly based on the workload on the DB cluster. In some cases, the capacity might not scale fast enough to meet a sudden change in workload, such as a large number of new transactions. Call ModifyCurrentDBClusterCapacity
to set the capacity explicitly.
After this call sets the DB cluster capacity, Aurora Serverless can automatically scale the DB cluster based on the cooldown period for scaling up and the cooldown period for scaling down.
For more information about Aurora Serverless, see Using Amazon Aurora Serverless in the Amazon Aurora User Guide.
If you call ModifyCurrentDBClusterCapacity
with the default TimeoutAction
, connections that prevent Aurora Serverless from finding a scaling point might be dropped. For more information about scaling points, see Autoscaling for Aurora Serverless in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Modify a setting for an Amazon Aurora DB cluster. You can change one or more database configuration parameters by specifying these parameters and the new values in the request. For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Modifies the properties of an endpoint in an Amazon Aurora DB cluster.
This action only applies to Aurora DB clusters.
Updates a manual DB snapshot, which can be encrypted or not encrypted, with a new engine version.
Amazon RDS supports upgrading DB snapshots for MySQL, Oracle, and PostgreSQL.
", "ModifyDBSnapshotAttribute": "Adds an attribute and values to, or removes an attribute and values from, a manual DB snapshot.
To share a manual DB snapshot with other AWS accounts, specify restore
as the AttributeName
and use the ValuesToAdd
parameter to add a list of IDs of the AWS accounts that are authorized to restore the manual DB snapshot. Uses the value all
to make the manual DB snapshot public, which means it can be copied or restored by all AWS accounts. Do not add the all
value for any manual DB snapshots that contain private information that you don't want available to all AWS accounts. If the manual DB snapshot is encrypted, it can be shared, but only by specifying a list of authorized AWS account IDs for the ValuesToAdd
parameter. You can't use all
as a value for that parameter in this case.
To view which AWS accounts have access to copy or restore a manual DB snapshot, or whether a manual DB snapshot public or private, use the DescribeDBSnapshotAttributes
API action.
Modifies an existing DB subnet group. DB subnet groups must contain at least one subnet in at least two AZs in the AWS Region.
", - "ModifyEventSubscription": "Modifies an existing RDS event notification subscription. Note that you can't modify the source identifiers using this call; to change source identifiers for a subscription, use the AddSourceIdentifierToSubscription
and RemoveSourceIdentifierFromSubscription
calls.
You can see a list of the event categories for a given SourceType in the Events topic in the Amazon RDS User Guide or by using the DescribeEventCategories action.
", + "ModifyEventSubscription": "Modifies an existing RDS event notification subscription. You can't modify the source identifiers using this call. To change source identifiers for a subscription, use the AddSourceIdentifierToSubscription
and RemoveSourceIdentifierFromSubscription
calls.
You can see a list of the event categories for a given SourceType in the Events topic in the Amazon RDS User Guide or by using the DescribeEventCategories action.
", "ModifyGlobalCluster": "Modify a setting for an Amazon Aurora global cluster. You can change one or more database configuration parameters by specifying these parameters and the new values in the request. For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Modifies an existing option group.
", "PromoteReadReplica": "Promotes a Read Replica DB instance to a standalone DB instance.
Backup duration is a function of the amount of changes to the database since the previous backup. If you plan to promote a Read Replica to a standalone instance, we recommend that you enable backups and complete at least one backup prior to promotion. In addition, a Read Replica cannot be promoted to a standalone instance when it is in the backing-up
status. If you have enabled backups on your Read Replica, configure the automated backup window so that daily backups do not interfere with Read Replica promotion.
This command doesn't apply to Aurora MySQL and Aurora PostgreSQL.
Starts a database activity stream to monitor activity on the database. For more information, see Database Activity Streams in the Amazon Aurora User Guide.
", "StartDBCluster": "Starts an Amazon Aurora DB cluster that was stopped using the AWS console, the stop-db-cluster AWS CLI command, or the StopDBCluster action.
For more information, see Stopping and Starting an Aurora Cluster in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Starts an Amazon RDS DB instance that was stopped using the AWS console, the stop-db-instance AWS CLI command, or the StopDBInstance action.
For more information, see Starting an Amazon RDS DB instance That Was Previously Stopped in the Amazon RDS User Guide.
This command doesn't apply to Aurora MySQL and Aurora PostgreSQL. For Aurora DB clusters, use StartDBCluster
instead.
Starts an export of a snapshot to Amazon S3. The provided IAM role must have access to the S3 bucket.
", "StopActivityStream": "Stops a database activity stream that was started using the AWS console, the start-activity-stream
AWS CLI command, or the StartActivityStream
action.
For more information, see Database Activity Streams in the Amazon Aurora User Guide.
", "StopDBCluster": "Stops an Amazon Aurora DB cluster. When you stop a DB cluster, Aurora retains the DB cluster's metadata, including its endpoints and DB parameter groups. Aurora also retains the transaction logs so you can do a point-in-time restore if necessary.
For more information, see Stopping and Starting an Aurora Cluster in the Amazon Aurora User Guide.
This action only applies to Aurora DB clusters.
Stops an Amazon RDS DB instance. When you stop a DB instance, Amazon RDS retains the DB instance's metadata, including its endpoint, DB parameter group, and option group membership. Amazon RDS also retains the transaction logs so you can do a point-in-time restore if necessary.
For more information, see Stopping an Amazon RDS DB Instance Temporarily in the Amazon RDS User Guide.
This command doesn't apply to Aurora MySQL and Aurora PostgreSQL. For Aurora clusters, use StopDBCluster
instead.
Describes a quota for an AWS account.
The following are account quotas:
AllocatedStorage
- The total allocated storage per account, in GiB. The used value is the total allocated storage in the account, in GiB.
AuthorizationsPerDBSecurityGroup
- The number of ingress rules per DB security group. The used value is the highest number of ingress rules in a DB security group in the account. Other DB security groups in the account might have a lower number of ingress rules.
CustomEndpointsPerDBCluster
- The number of custom endpoints per DB cluster. The used value is the highest number of custom endpoints in a DB clusters in the account. Other DB clusters in the account might have a lower number of custom endpoints.
DBClusterParameterGroups
- The number of DB cluster parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB cluster parameter groups in the account.
DBClusterRoles
- The number of associated AWS Identity and Access Management (IAM) roles per DB cluster. The used value is the highest number of associated IAM roles for a DB cluster in the account. Other DB clusters in the account might have a lower number of associated IAM roles.
DBClusters
- The number of DB clusters per account. The used value is the count of DB clusters in the account.
DBInstanceRoles
- The number of associated IAM roles per DB instance. The used value is the highest number of associated IAM roles for a DB instance in the account. Other DB instances in the account might have a lower number of associated IAM roles.
DBInstances
- The number of DB instances per account. The used value is the count of the DB instances in the account.
DBParameterGroups
- The number of DB parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB parameter groups in the account.
DBSecurityGroups
- The number of DB security groups (not VPC security groups) per account, excluding the default security group. The used value is the count of nondefault DB security groups in the account.
DBSubnetGroups
- The number of DB subnet groups per account. The used value is the count of the DB subnet groups in the account.
EventSubscriptions
- The number of event subscriptions per account. The used value is the count of the event subscriptions in the account.
ManualSnapshots
- The number of manual DB snapshots per account. The used value is the count of the manual DB snapshots in the account.
OptionGroups
- The number of DB option groups per account, excluding default option groups. The used value is the count of nondefault DB option groups in the account.
ReadReplicasPerMaster
- The number of Read Replicas per DB instance. The used value is the highest number of Read Replicas for a DB instance in the account. Other DB instances in the account might have a lower number of Read Replicas.
ReservedDBInstances
- The number of reserved DB instances per account. The used value is the count of the active reserved DB instances in the account.
SubnetsPerDBSubnetGroup
- The number of subnets per DB subnet group. The used value is highest number of subnets for a DB subnet group in the account. Other DB subnet groups in the account might have a lower number of subnets.
For more information, see Limits in the Amazon RDS User Guide and Limits in the Amazon Aurora User Guide.
", + "base": "Describes a quota for an AWS account.
The following are account quotas:
AllocatedStorage
- The total allocated storage per account, in GiB. The used value is the total allocated storage in the account, in GiB.
AuthorizationsPerDBSecurityGroup
- The number of ingress rules per DB security group. The used value is the highest number of ingress rules in a DB security group in the account. Other DB security groups in the account might have a lower number of ingress rules.
CustomEndpointsPerDBCluster
- The number of custom endpoints per DB cluster. The used value is the highest number of custom endpoints in a DB clusters in the account. Other DB clusters in the account might have a lower number of custom endpoints.
DBClusterParameterGroups
- The number of DB cluster parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB cluster parameter groups in the account.
DBClusterRoles
- The number of associated AWS Identity and Access Management (IAM) roles per DB cluster. The used value is the highest number of associated IAM roles for a DB cluster in the account. Other DB clusters in the account might have a lower number of associated IAM roles.
DBClusters
- The number of DB clusters per account. The used value is the count of DB clusters in the account.
DBInstanceRoles
- The number of associated IAM roles per DB instance. The used value is the highest number of associated IAM roles for a DB instance in the account. Other DB instances in the account might have a lower number of associated IAM roles.
DBInstances
- The number of DB instances per account. The used value is the count of the DB instances in the account.
Amazon RDS DB instances, Amazon Aurora DB instances, Amazon Neptune instances, and Amazon DocumentDB instances apply to this quota.
DBParameterGroups
- The number of DB parameter groups per account, excluding default parameter groups. The used value is the count of nondefault DB parameter groups in the account.
DBSecurityGroups
- The number of DB security groups (not VPC security groups) per account, excluding the default security group. The used value is the count of nondefault DB security groups in the account.
DBSubnetGroups
- The number of DB subnet groups per account. The used value is the count of the DB subnet groups in the account.
EventSubscriptions
- The number of event subscriptions per account. The used value is the count of the event subscriptions in the account.
ManualSnapshots
- The number of manual DB snapshots per account. The used value is the count of the manual DB snapshots in the account.
OptionGroups
- The number of DB option groups per account, excluding default option groups. The used value is the count of nondefault DB option groups in the account.
ReadReplicasPerMaster
- The number of Read Replicas per DB instance. The used value is the highest number of Read Replicas for a DB instance in the account. Other DB instances in the account might have a lower number of Read Replicas.
ReservedDBInstances
- The number of reserved DB instances per account. The used value is the count of the active reserved DB instances in the account.
SubnetsPerDBSubnetGroup
- The number of subnets per DB subnet group. The used value is highest number of subnets for a DB subnet group in the account. Other DB subnet groups in the account might have a lower number of subnets.
For more information, see Quotas for Amazon RDS in the Amazon RDS User Guide and Quotas for Amazon Aurora in the Amazon Aurora User Guide.
", "refs": { "AccountQuotaList$member": null } @@ -320,7 +324,7 @@ "DBSnapshot$Encrypted": "Specifies whether the DB snapshot is encrypted.
", "DBSnapshot$IAMDatabaseAuthenticationEnabled": "True if mapping of AWS Identity and Access Management (IAM) accounts to database accounts is enabled, and otherwise false.
", "DeleteDBClusterMessage$SkipFinalSnapshot": "A value that indicates whether to skip the creation of a final DB cluster snapshot before the DB cluster is deleted. If skip is specified, no DB cluster snapshot is created. If skip isn't specified, a DB cluster snapshot is created before the DB cluster is deleted. By default, skip isn't specified, and the DB cluster snapshot is created. By default, this parameter is disabled.
You must specify a FinalDBSnapshotIdentifier
parameter if SkipFinalSnapshot
is disabled.
A value that indicates whether to skip the creation of a final DB snapshot before the DB instance is deleted. If skip is specified, no DB snapshot is created. If skip isn't specified, a DB snapshot is created before the DB instance is deleted. By default, skip isn't specified, and the DB snapshot is created.
Note that when a DB instance is in a failure state and has a status of 'failed', 'incompatible-restore', or 'incompatible-network', it can only be deleted when skip is specified.
Specify skip when deleting a Read Replica.
The FinalDBSnapshotIdentifier parameter must be specified if skip isn't specified.
A value that indicates whether to skip the creation of a final DB snapshot before the DB instance is deleted. If skip is specified, no DB snapshot is created. If skip isn't specified, a DB snapshot is created before the DB instance is deleted. By default, skip isn't specified, and the DB snapshot is created.
When a DB instance is in a failure state and has a status of 'failed', 'incompatible-restore', or 'incompatible-network', it can only be deleted when skip is specified.
Specify skip when deleting a Read Replica.
The FinalDBSnapshotIdentifier parameter must be specified if skip isn't specified.
A value that indicates whether to include shared manual DB cluster snapshots from other AWS accounts that this AWS account has been given permission to copy or restore. By default, these snapshots are not included.
You can give an AWS account permission to restore a manual DB cluster snapshot from another AWS account by the ModifyDBClusterSnapshotAttribute
API action.
A value that indicates whether to include manual DB cluster snapshots that are public and can be copied or restored by any AWS account. By default, the public snapshots are not included.
You can share a manual DB cluster snapshot as public by using the ModifyDBClusterSnapshotAttribute API action.
", "DescribeDBClustersMessage$IncludeShared": "Optional Boolean parameter that specifies whether the output includes information about clusters shared from other AWS accounts.
", @@ -374,6 +378,7 @@ "refs": { "BacktrackDBClusterMessage$Force": "A value that indicates whether to force the DB cluster to backtrack when binary logging is enabled. Otherwise, an error occurs when binary logging is enabled.
", "BacktrackDBClusterMessage$UseEarliestTimeOnPointInTimeUnavailable": "A value that indicates whether to backtrack the DB cluster to the earliest possible backtrack time when BacktrackTo is set to a timestamp earlier than the earliest backtrack time. When this parameter is disabled and BacktrackTo is set to a timestamp earlier than the earliest backtrack time, an error occurs.
", + "Certificate$CustomerOverride": "Whether there is an override for the default certificate identifier.
", "CopyDBClusterSnapshotMessage$CopyTags": "A value that indicates whether to copy all tags from the source DB cluster snapshot to the target DB cluster snapshot. By default, tags are not copied.
", "CopyDBSnapshotMessage$CopyTags": "A value that indicates whether to copy all tags from the source DB snapshot to the target DB snapshot. By default, tags are not copied.
", "CreateDBClusterMessage$StorageEncrypted": "A value that indicates whether the DB cluster is encrypted.
", @@ -416,6 +421,7 @@ "DescribeReservedDBInstancesOfferingsMessage$MultiAZ": "A value that indicates whether to show only those reservations that support Multi-AZ.
", "GlobalCluster$StorageEncrypted": "The storage encryption setting for the global database cluster.
", "GlobalCluster$DeletionProtection": "The deletion protection setting for the new global database cluster.
", + "ModifyCertificatesMessage$RemoveCustomerOverride": "A value that indicates whether to remove the override for the default certificate. If the override is removed, the default certificate is the system default.
", "ModifyDBClusterMessage$EnableIAMDatabaseAuthentication": "A value that indicates whether to enable mapping of AWS Identity and Access Management (IAM) accounts to database accounts. By default, mapping is disabled.
For more information, see IAM Database Authentication in the Amazon Aurora User Guide.
", "ModifyDBClusterMessage$DeletionProtection": "A value that indicates whether the DB cluster has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled.
", "ModifyDBClusterMessage$EnableHttpEndpoint": "A value that indicates whether to enable the HTTP endpoint for an Aurora Serverless DB cluster. By default, the HTTP endpoint is disabled.
When enabled, the HTTP endpoint provides a connectionless web service API for running SQL queries on the Aurora Serverless DB cluster. You can also query your database from inside the RDS console with the query editor.
For more information, see Using the Data API for Aurora Serverless in the Amazon Aurora User Guide.
", @@ -428,6 +434,7 @@ "ModifyDBInstanceMessage$EnablePerformanceInsights": "A value that indicates whether to enable Performance Insights for the DB instance.
For more information, see Using Amazon Performance Insights in the Amazon Relational Database Service User Guide.
", "ModifyDBInstanceMessage$UseDefaultProcessorFeatures": "A value that indicates whether the DB instance class of the DB instance uses its default processor features.
", "ModifyDBInstanceMessage$DeletionProtection": "A value that indicates whether the DB instance has deletion protection enabled. The database can't be deleted when deletion protection is enabled. By default, deletion protection is disabled. For more information, see Deleting a DB Instance.
", + "ModifyDBInstanceMessage$CertificateRotationRestart": "A value that indicates whether the DB instance is restarted when you rotate your SSL/TLS certificate.
By default, the DB instance is restarted when you rotate your SSL/TLS certificate. The certificate is not updated until the DB instance is restarted.
Set this parameter only if you are not using SSL/TLS to connect to the DB instance.
If you are using SSL/TLS to connect to the DB instance, follow the appropriate instructions for your DB engine to rotate your SSL/TLS certificate:
For more information about rotating your SSL/TLS certificate for RDS DB engines, see Rotating Your SSL/TLS Certificate. in the Amazon RDS User Guide.
For more information about rotating your SSL/TLS certificate for Aurora DB engines, see Rotating Your SSL/TLS Certificate in the Amazon Aurora User Guide.
Whether Transport Layer Security (TLS) encryption is required for connections to the proxy. By enabling this setting, you can enforce encrypted TLS connections to the proxy, even if the associated database doesn't use TLS.
", "ModifyDBProxyRequest$DebugLogging": "Whether the proxy includes detailed information about SQL statements in its logs. This information helps you to debug issues involving SQL behavior or the performance and scalability of the proxy connections. The debug information includes the text of SQL statements that you submit through the proxy. Thus, only enable this setting when needed for debugging, and only when you have security measures in place to safeguard any sensitive information that appears in the logs.
", "ModifyEventSubscriptionMessage$Enabled": "A value that indicates whether to activate the subscription.
", @@ -476,10 +483,16 @@ "StopActivityStreamRequest$ApplyImmediately": "Specifies whether or not the database activity stream is to stop as soon as possible, regardless of the maintenance window for the database.
" } }, + "CancelExportTaskMessage": { + "base": null, + "refs": { + } + }, "Certificate": { "base": "A CA certificate for an AWS account.
", "refs": { - "CertificateList$member": null + "CertificateList$member": null, + "ModifyCertificatesResult$Certificate": null } }, "CertificateList": { @@ -1729,6 +1742,11 @@ "refs": { } }, + "DescribeExportTasksMessage": { + "base": null, + "refs": { + } + }, "DescribeGlobalClustersMessage": { "base": null, "refs": { @@ -1954,6 +1972,33 @@ "refs": { } }, + "ExportTask": { + "base": "Contains the details of a snapshot export to Amazon S3.
This data type is used as a response element in the DescribeExportTasks
action.
You can't start an export task that's already running.
", + "refs": { + } + }, + "ExportTaskNotFoundFault": { + "base": "The export task doesn't exist.
", + "refs": { + } + }, + "ExportTasksList": { + "base": null, + "refs": { + "ExportTasksMessage$ExportTasks": "Information about an export of a snapshot to Amazon S3.
" + } + }, + "ExportTasksMessage": { + "base": null, + "refs": { + } + }, "FailoverDBClusterMessage": { "base": "", "refs": { @@ -2004,6 +2049,7 @@ "DescribeEventCategoriesMessage$Filters": "This parameter isn't currently supported.
", "DescribeEventSubscriptionsMessage$Filters": "This parameter isn't currently supported.
", "DescribeEventsMessage$Filters": "This parameter isn't currently supported.
", + "DescribeExportTasksMessage$Filters": "Filters specify one or more snapshot exports to describe. The filters are specified as name-value pairs that define what to include in the output.
Supported filters include the following:
export-task-identifier
- An identifier for the snapshot export task.
s3-bucket
- The Amazon S3 bucket the snapshot is exported to.
source-arn
- The Amazon Resource Name (ARN) of the snapshot exported to Amazon S3
status
- The status of the export task.
A filter that specifies one or more global DB clusters to describe.
Supported filters:
db-cluster-id
- Accepts DB cluster identifiers and DB cluster Amazon Resource Names (ARNs). The results list will only include information about the DB clusters identified by these ARNs.
A filter that specifies one or more installation media to describe. Supported filters include the following:
custom-availability-zone-id
- Accepts custom Availability Zone (AZ) identifiers. The results list includes information about only the custom AZs identified by these identifiers.
engine
- Accepts database engines. The results list includes information about only the database engines identified by these identifiers.
For more information about the valid engines for installation media, see ImportInstallationMedia.
This parameter isn't currently supported.
", @@ -2089,6 +2135,16 @@ "DBSecurityGroup$IPRanges": " Contains a list of IPRange
elements.
The IAM role requires additional permissions to export to an Amazon S3 bucket.
", + "refs": { + } + }, + "IamRoleNotFoundFault": { + "base": "The IAM role is missing for exporting to an Amazon S3 bucket.
", + "refs": { + } + }, "ImportInstallationMediaMessage": { "base": null, "refs": { @@ -2168,6 +2224,8 @@ "DBSnapshot$PercentProgress": "The percentage of the estimated data that has been transferred.
", "DownloadDBLogFilePortionMessage$NumberOfLines": "The number of lines to download. If the number of lines specified results in a file over 1 MB in size, the file is truncated at 1 MB in size.
If the NumberOfLines parameter is specified, then the block of lines returned can be from the beginning or the end of the log file, depending on the value of the Marker parameter.
If neither Marker or NumberOfLines are specified, the entire log file is returned up to a maximum of 10000 lines, starting with the most recent log entries first.
If NumberOfLines is specified and Marker isn't specified, then the most recent lines from the end of the log file are returned.
If Marker is specified as \"0\", then the specified number of lines from the beginning of the log file are returned.
You can download the log file in blocks of lines by specifying the size of the block using the NumberOfLines parameter, and by specifying a value of \"0\" for the Marker parameter in your first request. Include the Marker value returned in the response as the Marker value for the next request, continuing until the AdditionalDataPending response element returns false.
Specifies the port that the database engine is listening on.
", + "ExportTask$PercentProgress": "The progress of the snapshot export task as a percentage.
", + "ExportTask$TotalExtractedDataInGB": "The total amount of data exported, in gigabytes.
", "Range$From": "The minimum value in the range.
", "Range$To": "The maximum value in the range.
", "ReservedDBInstance$Duration": "The duration of the reservation in seconds.
", @@ -2361,6 +2419,21 @@ "refs": { } }, + "InvalidExportOnlyFault": { + "base": "The export is invalid for exporting to an Amazon S3 bucket.
", + "refs": { + } + }, + "InvalidExportSourceStateFault": { + "base": "The state of the export snapshot is invalid for exporting to an Amazon S3 bucket.
", + "refs": { + } + }, + "InvalidExportTaskStateFault": { + "base": "You can't cancel an export task that has completed.
", + "refs": { + } + }, "InvalidGlobalClusterStateFault": { "base": "", "refs": { @@ -2471,6 +2544,16 @@ "OptionGroupOptionSetting$MinimumEngineVersionPerAllowedValue": "The minimum DB engine version required for the corresponding allowed value for this option setting.
" } }, + "ModifyCertificatesMessage": { + "base": null, + "refs": { + } + }, + "ModifyCertificatesResult": { + "base": null, + "refs": { + } + }, "ModifyCurrentDBClusterCapacityMessage": { "base": null, "refs": { @@ -2780,7 +2863,7 @@ "DBParameterGroupDetails$Parameters": " A list of Parameter
values.
Contains a list of engine default parameters.
", "ModifyDBClusterParameterGroupMessage$Parameters": "A list of parameters in the DB cluster parameter group to modify.
", - "ModifyDBParameterGroupMessage$Parameters": "An array of parameter names, values, and the apply method for the parameter update. At least one parameter name, value, and apply method must be supplied; subsequent arguments are optional. A maximum of 20 parameters can be modified in a single request.
Valid Values (for the application method): immediate | pending-reboot
You can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters, and changes are applied when you reboot the DB instance without failover.
An array of parameter names, values, and the apply method for the parameter update. At least one parameter name, value, and apply method must be supplied; later arguments are optional. A maximum of 20 parameters can be modified in a single request.
Valid Values (for the application method): immediate | pending-reboot
You can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters, and changes are applied when you reboot the DB instance without failover.
A list of parameter names in the DB cluster parameter group to reset to the default values. You can't use this parameter if the ResetAllParameters
parameter is enabled.
To reset the entire DB parameter group, specify the DBParameterGroup
name and ResetAllParameters
parameters. To reset specific parameters, provide a list of the following: ParameterName
and ApplyMethod
. A maximum of 20 parameters can be modified in a single request.
MySQL
Valid Values (for Apply method): immediate
| pending-reboot
You can use the immediate value with dynamic parameters only. You can use the pending-reboot
value for both dynamic and static parameters, and changes are applied when DB instance reboots.
MariaDB
Valid Values (for Apply method): immediate
| pending-reboot
You can use the immediate value with dynamic parameters only. You can use the pending-reboot
value for both dynamic and static parameters, and changes are applied when DB instance reboots.
Oracle
Valid Values (for Apply method): pending-reboot
The list of identifiers of the event sources for which events are returned. If not specified, then all sources are included in the response. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it can't end with a hyphen or contain two consecutive hyphens.
Constraints:
If SourceIds are supplied, SourceType must also be provided.
If the source type is a DB instance, then a DBInstanceIdentifier
must be supplied.
If the source type is a DB security group, a DBSecurityGroupName
must be supplied.
If the source type is a DB parameter group, a DBParameterGroupName
must be supplied.
If the source type is a DB snapshot, a DBSnapshotIdentifier
must be supplied.
The list of identifiers of the event sources for which events are returned. If not specified, then all sources are included in the response. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens. It can't end with a hyphen or contain two consecutive hyphens.
Constraints:
If SourceIds are supplied, SourceType must also be provided.
If the source type is a DB instance, then a DBInstanceIdentifier
must be supplied.
If the source type is a DB security group, a DBSecurityGroupName
must be supplied.
If the source type is a DB parameter group, a DBParameterGroupName
must be supplied.
If the source type is a DB snapshot, a DBSnapshotIdentifier
must be supplied.
A list of source IDs for the RDS event notification subscription.
" } }, @@ -3243,6 +3326,11 @@ "refs": { } }, + "StartExportTaskMessage": { + "base": null, + "refs": { + } + }, "StopActivityStreamRequest": { "base": null, "refs": { @@ -3311,6 +3399,7 @@ "AvailableProcessorFeature$DefaultValue": "The default value for the processor feature of the DB instance class.
", "AvailableProcessorFeature$AllowedValues": "The allowed values for the processor feature of the DB instance class.
", "BacktrackDBClusterMessage$DBClusterIdentifier": "The DB cluster identifier of the DB cluster to be backtracked. This parameter is stored as a lowercase string.
Constraints:
Must contain from 1 to 63 alphanumeric characters or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Example: my-cluster1
The identifier of the snapshot export task to cancel.
", "Certificate$CertificateIdentifier": "The unique key that identifies a certificate.
", "Certificate$CertificateType": "The type of the certificate.
", "Certificate$Thumbprint": "The thumbprint of the certificate.
", @@ -3325,8 +3414,8 @@ "CopyDBClusterParameterGroupMessage$TargetDBClusterParameterGroupDescription": "A description for the copied DB cluster parameter group.
", "CopyDBClusterSnapshotMessage$SourceDBClusterSnapshotIdentifier": "The identifier of the DB cluster snapshot to copy. This parameter isn't case-sensitive.
You can't copy an encrypted, shared DB cluster snapshot from one AWS Region to another.
Constraints:
Must specify a valid system snapshot in the \"available\" state.
If the source snapshot is in the same AWS Region as the copy, specify a valid DB snapshot identifier.
If the source snapshot is in a different AWS Region than the copy, specify a valid DB cluster snapshot ARN. For more information, go to Copying Snapshots Across AWS Regions in the Amazon Aurora User Guide.
Example: my-cluster-snapshot1
The identifier of the new DB cluster snapshot to create from the source DB cluster snapshot. This parameter isn't case-sensitive.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Example: my-cluster-snapshot2
The AWS AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you copy an encrypted DB cluster snapshot from your AWS account, you can specify a value for KmsKeyId
to encrypt the copy with a new KMS encryption key. If you don't specify a value for KmsKeyId
, then the copy of the DB cluster snapshot is encrypted with the same KMS key as the source DB cluster snapshot.
If you copy an encrypted DB cluster snapshot that is shared from another AWS account, then you must specify a value for KmsKeyId
.
To copy an encrypted DB cluster snapshot to another AWS Region, you must set KmsKeyId
to the KMS key ID you want to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.
If you copy an unencrypted DB cluster snapshot and specify a value for the KmsKeyId
parameter, an error is returned.
The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot
API action in the AWS Region that contains the source DB cluster snapshot to copy. The PreSignedUrl
parameter must be used when copying an encrypted DB cluster snapshot from another AWS Region. Don't specify PreSignedUrl
when you are copying an encrypted DB cluster snapshot in the same AWS Region.
The pre-signed URL must be a valid request for the CopyDBClusterSnapshot
API action that can be executed in the source AWS Region that contains the encrypted DB cluster snapshot to be copied. The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. This is the same identifier for both the CopyDBClusterSnapshot
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that the DB cluster snapshot will be created in.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster snapshot from the us-west-2 AWS Region, then your SourceDBClusterSnapshotIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
The AWS KMS key ID for an encrypted DB cluster snapshot. The KMS key ID is the Amazon Resource Name (ARN), KMS key identifier, or the KMS key alias for the KMS encryption key.
If you copy an encrypted DB cluster snapshot from your AWS account, you can specify a value for KmsKeyId
to encrypt the copy with a new KMS encryption key. If you don't specify a value for KmsKeyId
, then the copy of the DB cluster snapshot is encrypted with the same KMS key as the source DB cluster snapshot.
If you copy an encrypted DB cluster snapshot that is shared from another AWS account, then you must specify a value for KmsKeyId
.
To copy an encrypted DB cluster snapshot to another AWS Region, you must set KmsKeyId
to the KMS key ID you want to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. KMS encryption keys are specific to the AWS Region that they are created in, and you can't use encryption keys from one AWS Region in another AWS Region.
If you copy an unencrypted DB cluster snapshot and specify a value for the KmsKeyId
parameter, an error is returned.
The URL that contains a Signature Version 4 signed request for the CopyDBClusterSnapshot
API action in the AWS Region that contains the source DB cluster snapshot to copy. The PreSignedUrl
parameter must be used when copying an encrypted DB cluster snapshot from another AWS Region. Don't specify PreSignedUrl
when you are copying an encrypted DB cluster snapshot in the same AWS Region.
The pre-signed URL must be a valid request for the CopyDBClusterSnapshot
API action that can be executed in the source AWS Region that contains the encrypted DB cluster snapshot to be copied. The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the copy of the DB cluster snapshot in the destination AWS Region. This is the same identifier for both the CopyDBClusterSnapshot
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that the DB cluster snapshot is to be created in.
SourceDBClusterSnapshotIdentifier
- The DB cluster snapshot identifier for the encrypted DB cluster snapshot to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster snapshot from the us-west-2 AWS Region, then your SourceDBClusterSnapshotIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
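As a sketch of the SourceRegion shortcut just described (rather than constructing PreSignedUrl by hand), assuming the same pre-1.0 client pattern and that the SDK performs the presigning when SourceRegion is set; the snapshot ARN, target identifier, and key alias are placeholders, and the client is configured in the destination Region:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/rds"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	cfg.Region = "us-east-1" // destination Region; the copy is created here
	client := rds.New(cfg)

	resp, err := client.CopyDBClusterSnapshotRequest(&rds.CopyDBClusterSnapshotInput{
		// Source snapshot ARN, in the format shown above (placeholder account).
		SourceDBClusterSnapshotIdentifier: aws.String("arn:aws:rds:us-west-2:123456789012:cluster-snapshot:aurora-cluster1-snapshot-20161115"),
		TargetDBClusterSnapshotIdentifier: aws.String("my-cluster-snapshot-copy"),
		// KMS key that is valid in the destination Region (placeholder alias).
		KmsKeyId: aws.String("alias/my-destination-key"),
		// Setting SourceRegion lets the SDK autogenerate the pre-signed URL
		// instead of requiring PreSignedUrl to be supplied manually.
		SourceRegion: aws.String("us-west-2"),
	}).Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(aws.StringValue(resp.DBClusterSnapshot.Status))
}
```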
The identifier or ARN for the source DB parameter group. For information about creating an ARN, see Constructing an ARN for Amazon RDS in the Amazon RDS User Guide.
Constraints:
Must specify a valid DB parameter group.
Must specify a valid DB parameter group identifier, for example my-db-param-group
, or a valid ARN.
The identifier for the copied DB parameter group.
Constraints:
Can't be null, empty, or blank
Must contain from 1 to 255 letters, numbers, or hyphens
First character must be a letter
Can't end with a hyphen or contain two consecutive hyphens
Example: my-db-parameter-group
A description for the copied DB parameter group.
", @@ -3346,7 +3435,7 @@ "CreateDBClusterEndpointMessage$DBClusterEndpointIdentifier": "The identifier to use for the new endpoint. This parameter is stored as a lowercase string.
", "CreateDBClusterEndpointMessage$EndpointType": "The type of the endpoint. One of: READER
, WRITER
, ANY
.
A value that indicates that the DB cluster should be associated with the specified CharacterSet.
", - "CreateDBClusterMessage$DatabaseName": "The name for your database of up to 64 alpha-numeric characters. If you do not provide a name, Amazon RDS will not create a database in the DB cluster you are creating.
", + "CreateDBClusterMessage$DatabaseName": "The name for your database of up to 64 alphanumeric characters. If you do not provide a name, Amazon RDS doesn't create a database in the DB cluster you are creating.
", "CreateDBClusterMessage$DBClusterIdentifier": "The DB cluster identifier. This parameter is stored as a lowercase string.
Constraints:
Must contain from 1 to 63 letters, numbers, or hyphens.
First character must be a letter.
Can't end with a hyphen or contain two consecutive hyphens.
Example: my-cluster1
The name of the DB cluster parameter group to associate with this DB cluster. If you do not specify a value, then the default DB cluster parameter group for the specified DB engine and version is used.
Constraints:
If supplied, must match the name of an existing DB cluster parameter group.
A DB subnet group to associate with this DB cluster.
Constraints: Must match the name of an existing DBSubnetGroup. Must not be default.
Example: mySubnetgroup
The Amazon Resource Name (ARN) of the source DB instance or DB cluster if this DB cluster is created as a Read Replica.
", "CreateDBClusterMessage$KmsKeyId": "The AWS KMS key identifier for an encrypted DB cluster.
The KMS key identifier is the Amazon Resource Name (ARN) for the KMS encryption key. If you are creating a DB cluster with the same AWS account that owns the KMS encryption key used to encrypt the new DB cluster, then you can use the KMS key alias instead of the ARN for the KMS encryption key.
If an encryption key isn't specified in KmsKeyId
:
If ReplicationSourceIdentifier
identifies an encrypted source, then Amazon RDS will use the encryption key used to encrypt the source. Otherwise, Amazon RDS will use your default encryption key.
If the StorageEncrypted
parameter is enabled and ReplicationSourceIdentifier
isn't specified, then Amazon RDS will use your default encryption key.
AWS KMS creates the default encryption key for your AWS account. Your AWS account has a different default encryption key for each AWS Region.
If you create a Read Replica of an encrypted DB cluster in another AWS Region, you must set KmsKeyId
to a KMS key ID that is valid in the destination AWS Region. This key is used to encrypt the Read Replica in that AWS Region.
A URL that contains a Signature Version 4 signed request for the CreateDBCluster
action to be called in the source AWS Region where the DB cluster is replicated from. You only need to specify PreSignedUrl
when you are performing cross-region replication from an encrypted DB cluster.
The pre-signed URL must be a valid request for the CreateDBCluster
API action that can be executed in the source AWS Region that contains the encrypted DB cluster to be copied.
The pre-signed URL request must contain the following parameter values:
KmsKeyId
- The AWS KMS key identifier for the key to use to encrypt the copy of the DB cluster in the destination AWS Region. This should refer to the same KMS key for both the CreateDBCluster
action that is called in the destination AWS Region, and the action contained in the pre-signed URL.
DestinationRegion
- The name of the AWS Region that the Aurora Read Replica will be created in.
ReplicationSourceIdentifier
- The DB cluster identifier for the encrypted DB cluster to be copied. This identifier must be in the Amazon Resource Name (ARN) format for the source AWS Region. For example, if you are copying an encrypted DB cluster from the us-west-2 AWS Region, then your ReplicationSourceIdentifier
looks like the following example: arn:aws:rds:us-west-2:123456789012:cluster:aurora-cluster1
.
To learn how to generate a Signature Version 4 signed request, see Authenticating Requests: Using Query Parameters (AWS Signature Version 4) and Signature Version 4 Signing Process.
If you are using an AWS SDK tool or the AWS CLI, you can specify SourceRegion
(or --source-region
for the AWS CLI) instead of specifying PreSignedUrl
manually. Specifying SourceRegion
autogenerates a pre-signed URL that is a valid request for the operation that can be executed in the source AWS Region.
The DB engine mode of the DB cluster, either provisioned
, serverless
, parallelquery
, global
, or multimaster
.
The DB engine mode of the DB cluster, either provisioned
, serverless
, parallelquery
, global
, or multimaster
.
Limitations and requirements apply to some DB engine modes. For more information, see the following sections in the Amazon Aurora User Guide:
The global cluster ID of an Aurora cluster that becomes the primary cluster in the new global database cluster.
", "CreateDBClusterParameterGroupMessage$DBClusterParameterGroupName": "The name of the DB cluster parameter group.
Constraints:
Must match the name of an existing DB cluster parameter group.
This value is stored as a lowercase string.
The DB cluster parameter group family name. A DB cluster parameter group can be associated with one and only one DB cluster parameter group family, and can be applied only to a DB cluster running a database engine and engine version compatible with that DB cluster parameter group family.
Aurora MySQL
Example: aurora5.6
, aurora-mysql5.7
Aurora PostgreSQL
Example: aurora-postgresql9.6
The daily time range during which automated backups are created if automated backups are enabled, using the BackupRetentionPeriod
parameter. For more information, see The Backup Window in the Amazon RDS User Guide.
Amazon Aurora
Not applicable. The daily time range for creating automated backups is managed by the DB cluster.
The default is a 30-minute window selected at random from an 8-hour block of time for each AWS Region. To see the time blocks available, see Adjusting the Preferred DB Instance Maintenance Window in the Amazon RDS User Guide.
Constraints:
Must be in the format hh24:mi-hh24:mi
.
Must be in Universal Coordinated Time (UTC).
Must not conflict with the preferred maintenance window.
Must be at least 30 minutes.
The version number of the database engine to use.
For a list of valid engine versions, use the DescribeDBEngineVersions
action.
The following are the database engines and links to information about the major and minor versions that are available with Amazon RDS. Not every database engine is available for every AWS Region.
Amazon Aurora
Not applicable. The version number of the database engine to be used by the DB instance is managed by the DB cluster.
MariaDB
See MariaDB on Amazon RDS Versions in the Amazon RDS User Guide.
Microsoft SQL Server
See Version and Feature Support on Amazon RDS in the Amazon RDS User Guide.
MySQL
See MySQL on Amazon RDS Versions in the Amazon RDS User Guide.
Oracle
See Oracle Database Engine Release Notes in the Amazon RDS User Guide.
PostgreSQL
See Supported PostgreSQL Database Versions in the Amazon RDS User Guide.
", "CreateDBInstanceMessage$LicenseModel": "License model information for this DB instance.
Valid values: license-included
| bring-your-own-license
| general-public-license
Indicates that the DB instance should be associated with the specified option group.
Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group, and that option group can't be removed from a DB instance once it is associated with a DB instance
", + "CreateDBInstanceMessage$OptionGroupName": "Indicates that the DB instance should be associated with the specified option group.
Permanent options, such as the TDE option for Oracle Advanced Security TDE, can't be removed from an option group. Also, that option group can't be removed from a DB instance once it is associated with a DB instance.
", "CreateDBInstanceMessage$CharacterSetName": "For supported engines, indicates that the DB instance should be associated with the specified CharacterSet.
Amazon Aurora
Not applicable. The character set is managed by the DB cluster. For more information, see CreateDBCluster
.
The identifier of the DB cluster that the instance will belong to.
", "CreateDBInstanceMessage$StorageType": "Specifies the storage type to be associated with the DB instance.
Valid values: standard | gp2 | io1
If you specify io1
, you must also include a value for the Iops
parameter.
Default: io1
if the Iops
parameter is specified, otherwise gp2
Contains a user-supplied DB cluster identifier. This identifier is the unique key that identifies a DB cluster.
", "DBClusterBacktrack$BacktrackIdentifier": "Contains the backtrack identifier.
", "DBClusterBacktrack$Status": "The status of the backtrack. This property returns one of the following values:
applying
- The backtrack is currently being applied to or rolled back from the DB cluster.
completed
- The backtrack has successfully been applied to or rolled back from the DB cluster.
failed
- An error occurred while the backtrack was applied to or rolled back from the DB cluster.
pending
- The backtrack is currently pending application to or rollback from the DB cluster.
A pagination token that can be used in a subsequent DescribeDBClusterBacktracks
request.
A pagination token that can be used in a later DescribeDBClusterBacktracks
request.
A user-supplied DB cluster identifier. This identifier is the unique key that identifies a DB cluster.
", "DBClusterCapacityInfo$TimeoutAction": "The timeout action of a call to ModifyCurrentDBClusterCapacity
, either ForceApplyCapacityChange
or RollbackCapacityChange
.
The identifier associated with the endpoint. This parameter is stored as a lowercase string.
", @@ -3473,7 +3562,7 @@ "DBClusterEndpointMessage$Marker": " An optional pagination token provided by a previous DescribeDBClusterEndpoints
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
Specifies the instance identifier for this member of the DB cluster.
", "DBClusterMember$DBClusterParameterGroupStatus": "Specifies the status of the DB cluster parameter group for this member of the DB cluster.
", - "DBClusterMessage$Marker": "A pagination token that can be used in a subsequent DescribeDBClusters request.
", + "DBClusterMessage$Marker": "A pagination token that can be used in a later DescribeDBClusters request.
", "DBClusterOptionGroupStatus$DBClusterOptionGroupName": "Specifies the name of the DB cluster option group.
", "DBClusterOptionGroupStatus$Status": "Specifies the status of the DB cluster option group.
", "DBClusterParameterGroup$DBClusterParameterGroupName": "Provides the name of the DB cluster parameter group.
", @@ -3573,7 +3662,7 @@ "DBProxy$Endpoint": "The endpoint that you can use to connect to the proxy. You include the endpoint value in the connection string for a database client application.
", "DBProxyTarget$TargetArn": "The Amazon Resource Name (ARN) for the RDS DB instance or Aurora DB cluster.
", "DBProxyTarget$Endpoint": "The writer endpoint for the RDS DB instance or Aurora DB cluster.
", - "DBProxyTarget$TrackedClusterId": "The DB cluster identifier when the target represents an Aurora DB cluster. This field is blank when the target represents an
", + "DBProxyTarget$TrackedClusterId": "The DB cluster identifier when the target represents an Aurora DB cluster. This field is blank when the target represents an RDS DB instance.
", "DBProxyTarget$RdsResourceId": "The identifier representing the target. It can be the instance identifier for an RDS DB instance, or the cluster identifier for an Aurora DB cluster.
", "DBProxyTargetGroup$DBProxyName": "The identifier for the RDS proxy associated with this target group.
", "DBProxyTargetGroup$TargetGroupName": "The identifier for the target group. This name must be unique for all target groups owned by your AWS account in the specified AWS Region.
", @@ -3671,7 +3760,7 @@ "DescribeDBLogFilesMessage$DBInstanceIdentifier": "The customer-assigned name of the DB instance that contains the log files you want to list.
Constraints:
Must match the identifier of an existing DBInstance.
Filters the available log files for log file names that contain the specified string.
", "DescribeDBLogFilesMessage$Marker": "The pagination token provided in the previous request. If this parameter is specified the response includes only records beyond the marker, up to MaxRecords.
", - "DescribeDBLogFilesResponse$Marker": "A pagination token that can be used in a subsequent DescribeDBLogFiles request.
", + "DescribeDBLogFilesResponse$Marker": "A pagination token that can be used in a later DescribeDBLogFiles request.
", "DescribeDBParameterGroupsMessage$DBParameterGroupName": "The name of a specific DB parameter group to return details for.
Constraints:
If supplied, must match the name of an existing DBClusterParameterGroup.
An optional pagination token provided by a previous DescribeDBParameterGroups
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
The name of a specific DB parameter group to return details for.
Constraints:
If supplied, must match the name of an existing DBParameterGroup.
An optional pagination token provided by a previous DescribeOrderableDBInstanceOptions request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
The identifier of the event source for which events are returned. If not specified, then all sources are included in the response.
Constraints:
If SourceIdentifier is supplied, SourceType must also be provided.
If the source type is DBInstance
, then a DBInstanceIdentifier
must be supplied.
If the source type is DBSecurityGroup
, a DBSecurityGroupName
must be supplied.
If the source type is DBParameterGroup
, a DBParameterGroupName
must be supplied.
If the source type is DBSnapshot
, a DBSnapshotIdentifier
must be supplied.
Can't end with a hyphen or contain two consecutive hyphens.
An optional pagination token provided by a previous DescribeEvents request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
The identifier of the snapshot export task to be described.
", + "DescribeExportTasksMessage$SourceArn": "The Amazon Resource Name (ARN) of the snapshot exported to Amazon S3.
", + "DescribeExportTasksMessage$Marker": " An optional pagination token provided by a previous DescribeExportTasks
request. If you specify this parameter, the response includes only records beyond the marker, up to the value specified by the MaxRecords
parameter.
The maximum number of records to include in the response. If more records exist than the specified value, a pagination token called a marker is included in the response. You can use the marker in a later DescribeExportTasks
request to retrieve the remaining results.
Default: 100
Constraints: Minimum 20, maximum 100.
", "DescribeGlobalClustersMessage$GlobalClusterIdentifier": "The user-supplied DB cluster identifier. If this parameter is specified, information from only the specific DB cluster is returned. This parameter isn't case-sensitive.
Constraints:
If supplied, must match an existing DBClusterIdentifier.
An optional pagination token provided by a previous DescribeGlobalClusters
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
The installation medium ID.
", @@ -3747,7 +3840,7 @@ "DomainMembership$FQDN": "The fully qualified domain name of the Active Directory Domain.
", "DomainMembership$IAMRoleName": "The name of the IAM role to be used when making API calls to the Directory Service.
", "DownloadDBLogFilePortionDetails$LogFileData": "Entries from the specified log file.
", - "DownloadDBLogFilePortionDetails$Marker": "A pagination token that can be used in a subsequent DownloadDBLogFilePortion request.
", + "DownloadDBLogFilePortionDetails$Marker": "A pagination token that can be used in a later DownloadDBLogFilePortion request.
", "DownloadDBLogFilePortionMessage$DBInstanceIdentifier": "The customer-assigned name of the DB instance that contains the log files you want to list.
Constraints:
Must match the identifier of an existing DBInstance.
The name of the log file to be downloaded.
", "DownloadDBLogFilePortionMessage$Marker": "The pagination token provided in the previous request or \"0\". If the Marker parameter is specified the response includes only records beyond the marker until the end of the file or up to NumberOfLines.
", @@ -3774,6 +3867,16 @@ "EventSubscription$EventSubscriptionArn": "The Amazon Resource Name (ARN) for the event subscription.
", "EventSubscriptionsMessage$Marker": " An optional pagination token provided by a previous DescribeOrderableDBInstanceOptions request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
An optional pagination token provided by a previous Events request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords
.
A unique identifier for the snapshot export task. This ID isn't an identifier for the Amazon S3 bucket where the snapshot is exported to.
", + "ExportTask$SourceArn": "The Amazon Resource Name (ARN) of the snapshot exported to Amazon S3.
", + "ExportTask$S3Bucket": "The Amazon S3 bucket that the snapshot is exported to.
", + "ExportTask$S3Prefix": "The Amazon S3 bucket prefix that is the file name and path of the exported snapshot.
", + "ExportTask$IamRoleArn": "The name of the IAM role that is used to write to Amazon S3 when exporting a snapshot.
", + "ExportTask$KmsKeyId": "The ID of the AWS KMS key that is used to encrypt the snapshot when it's exported to Amazon S3. The KMS key ID is the Amazon Resource Name (ARN), the KMS key identifier, or the KMS key alias for the KMS encryption key. The IAM role used for the snapshot export must have encryption and decryption permissions to use this KMS key.
", + "ExportTask$Status": "The progress status of the export task.
", + "ExportTask$FailureCause": "The reason the export failed, if it failed.
", + "ExportTask$WarningMessage": "A warning about the snapshot export task.
", + "ExportTasksMessage$Marker": "A pagination token that can be used in a later DescribeExportTasks
request. A marker is used for pagination to identify the location to begin output for the next response of DescribeExportTasks
.
A DB cluster identifier to force a failover for. This parameter isn't case-sensitive.
Constraints:
Must match the identifier of an existing DBCluster.
The name of the instance to promote to the primary instance.
You must specify the instance identifier for an Aurora Replica in the DB cluster. For example, mydbcluster-replica1
.
The allowed value for an option setting.
", "MinimumEngineVersionPerAllowedValue$MinimumEngineVersion": "The minimum DB engine version required for the allowed value.
", + "ModifyCertificatesMessage$CertificateIdentifier": "The new default certificate identifier to override the current one with.
To determine the valid values, use the describe-certificates
AWS CLI command or the DescribeCertificates
API operation.
The DB cluster identifier for the cluster being modified. This parameter isn't case-sensitive.
Constraints:
Must match the identifier of an existing DB cluster.
The action to take when the timeout is reached, either ForceApplyCapacityChange
or RollbackCapacityChange
.
ForceApplyCapacityChange
, the default, sets the capacity to the specified value as soon as possible.
RollbackCapacityChange
ignores the capacity change if a scaling point isn't found in the timeout period.
The identifier of the endpoint to modify. This parameter is stored as a lowercase string.
", @@ -4078,6 +4182,12 @@ "StartActivityStreamResponse$KinesisStreamName": "The name of the Amazon Kinesis data stream to be used for the database activity stream.
", "StartDBClusterMessage$DBClusterIdentifier": "The DB cluster identifier of the Amazon Aurora DB cluster to be started. This parameter is stored as a lowercase string.
", "StartDBInstanceMessage$DBInstanceIdentifier": "The user-supplied instance identifier.
", + "StartExportTaskMessage$ExportTaskIdentifier": "A unique identifier for the snapshot export task. This ID isn't an identifier for the Amazon S3 bucket where the snapshot is to be exported to.
", + "StartExportTaskMessage$SourceArn": "The Amazon Resource Name (ARN) of the snapshot to export to Amazon S3.
", + "StartExportTaskMessage$S3BucketName": "The name of the Amazon S3 bucket to export the snapshot to.
", + "StartExportTaskMessage$IamRoleArn": "The name of the IAM role to use for writing to the Amazon S3 bucket when exporting a snapshot.
", + "StartExportTaskMessage$KmsKeyId": "The ID of the AWS KMS key to use to encrypt the snapshot exported to Amazon S3. The KMS key ID is the Amazon Resource Name (ARN), the KMS key identifier, or the KMS key alias for the KMS encryption key. The IAM role used for the snapshot export must have encryption and decryption permissions to use this KMS key.
", + "StartExportTaskMessage$S3Prefix": "The Amazon S3 bucket prefix to use as the file name and path of the exported snapshot.
", "StopActivityStreamRequest$ResourceArn": "The Amazon Resource Name (ARN) of the DB cluster for the database activity stream. For example, arn:aws:rds:us-east-1:12345667890:cluster:das-cluster
.
The AWS KMS key identifier used for encrypting messages in the database activity stream.
", "StopActivityStreamResponse$KinesisStreamName": "The name of the Amazon Kinesis data stream used for the database activity stream.
", @@ -4127,11 +4237,13 @@ "DBProxy$VpcSubnetIds": "The EC2 subnet IDs for the proxy.
", "DeregisterDBProxyTargetsRequest$DBInstanceIdentifiers": "One or more DB instance identifiers.
", "DeregisterDBProxyTargetsRequest$DBClusterIdentifiers": "One or more DB cluster identifiers.
", + "ExportTask$ExportOnly": "The data exported from the snapshot. Valid values are the following:
database
- Export all the data of the snapshot.
database.table [table-name]
- Export a table of the snapshot.
database.schema [schema-name]
- Export a database schema of the snapshot. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
database.schema.table [table-name]
- Export a table of the database schema. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
List of DB instance identifiers that are part of the custom endpoint group.
", "ModifyDBClusterEndpointMessage$ExcludedMembers": "List of DB instance identifiers that aren't part of the custom endpoint group. All other eligible instances are reachable through the custom endpoint. Only relevant if the list of static members is empty.
", "ModifyDBProxyRequest$SecurityGroups": "The new list of security groups for the DBProxy
.
One or more DB instance identifiers.
", - "RegisterDBProxyTargetsRequest$DBClusterIdentifiers": "One or more DB cluster identifiers.
" + "RegisterDBProxyTargetsRequest$DBClusterIdentifiers": "One or more DB cluster identifiers.
", + "StartExportTaskMessage$ExportOnly": "The data to be exported from the snapshot. If this parameter is not provided, all the snapshot data is exported. Valid values are the following:
database
- Export all the data of the snapshot.
database.table [table-name]
- Export a table of the snapshot.
database.schema [schema-name]
- Export a database schema of the snapshot. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
database.schema.table [table-name]
- Export a table of the database schema. This value isn't valid for RDS for MySQL, RDS for MariaDB, or Aurora MySQL.
The timestamp of the time to backtrack the DB cluster to, specified in ISO 8601 format. For more information about ISO 8601, see the ISO8601 Wikipedia page.
If the specified time isn't a consistent time for the DB cluster, Aurora automatically chooses the nearest possible consistent time for the DB cluster.
Constraints:
Must contain a valid ISO 8601 timestamp.
Can't contain a timestamp set in the future.
Example: 2017-07-08T18:00Z
The starting date from which the certificate is valid.
", "Certificate$ValidTill": "The final date that the certificate continues to be valid.
", + "Certificate$CustomerOverrideValidTill": "If there is an override for the default certificate identifier, when the override expires.
", "DBCluster$EarliestRestorableTime": "The earliest time to which a database can be restored with point-in-time restore.
", "DBCluster$LatestRestorableTime": "Specifies the latest time to which a database can be restored with point-in-time restore.
", "DBCluster$ClusterCreateTime": "Specifies the time when the DB cluster was created, in Universal Coordinated Time (UTC).
", @@ -4218,6 +4331,9 @@ "DescribeEventsMessage$StartTime": "The beginning of the time interval to retrieve events for, specified in ISO 8601 format. For more information about ISO 8601, go to the ISO8601 Wikipedia page.
Example: 2009-07-08T18:00Z
", "DescribeEventsMessage$EndTime": "The end of the time interval for which to retrieve events, specified in ISO 8601 format. For more information about ISO 8601, go to the ISO8601 Wikipedia page.
Example: 2009-07-08T18:00Z
", "Event$Date": "Specifies the date and time of the event.
", + "ExportTask$SnapshotTime": "The time that the snapshot was created.
", + "ExportTask$TaskStartTime": "The time that the snapshot export task started.
", + "ExportTask$TaskEndTime": "The time that the snapshot export task completed.
", "PendingMaintenanceAction$AutoAppliedAfterDate": "The date of the maintenance window when the action is applied. The maintenance action is applied to the resource during its first maintenance window after this date.
", "PendingMaintenanceAction$ForcedApplyDate": "The date when the maintenance action is automatically applied. The maintenance action is applied to the resource on this date regardless of the maintenance window for the resource.
", "PendingMaintenanceAction$CurrentApplyDate": "The effective date when the pending maintenance action is applied to the resource. This date takes into account opt-in requests received from the ApplyPendingMaintenanceAction
API, the AutoAppliedAfterDate
, and the ForcedApplyDate
. This value is blank if an opt-in request has not been received and nothing has been specified as AutoAppliedAfterDate
or ForcedApplyDate
.
Adds an inbound (ingress) rule to an Amazon Redshift security group. Depending on whether the application accessing your cluster is running on the Internet or an Amazon EC2 instance, you can authorize inbound access to either a Classless Interdomain Routing (CIDR)/Internet Protocol (IP) range or to an Amazon EC2 security group. You can add as many as 20 ingress rules to an Amazon Redshift security group.
If you authorize access to an Amazon EC2 security group, specify EC2SecurityGroupName and EC2SecurityGroupOwnerId. The Amazon EC2 security group and Amazon Redshift cluster must be in the same AWS Region.
If you authorize access to a CIDR/IP address range, specify CIDRIP. For an overview of CIDR blocks, see the Wikipedia article on Classless Inter-Domain Routing.
You must also associate the security group with a cluster so that clients running on these IP addresses or the EC2 instance are authorized to connect to the cluster. For information about managing security groups, go to Working with Security Groups in the Amazon Redshift Cluster Management Guide.
", "AuthorizeSnapshotAccess": "Authorizes the specified AWS customer account to restore the specified snapshot.
For more information about working with snapshots, go to Amazon Redshift Snapshots in the Amazon Redshift Cluster Management Guide.
", "BatchDeleteClusterSnapshots": "Deletes a set of cluster snapshots.
", - "BatchModifyClusterSnapshots": "Modifies the settings for a list of snapshots.
", - "CancelResize": "Cancels a resize operation.
", + "BatchModifyClusterSnapshots": "Modifies the settings for a set of cluster snapshots.
", + "CancelResize": "Cancels a resize operation for a cluster.
", "CopyClusterSnapshot": "Copies the specified automated cluster snapshot to a new manual cluster snapshot. The source must be an automated snapshot and it must be in the available state.
When you delete a cluster, Amazon Redshift deletes any automated snapshots of the cluster. Also, when the retention period of the snapshot expires, Amazon Redshift automatically deletes it. If you want to keep an automated snapshot for a longer period, you can make a manual copy of the snapshot. Manual snapshots are retained until you delete them.
For more information about working with snapshots, go to Amazon Redshift Snapshots in the Amazon Redshift Cluster Management Guide.
", - "CreateCluster": "Creates a new cluster.
To create a cluster in Virtual Private Cloud (VPC), you must provide a cluster subnet group name. The cluster subnet group identifies the subnets of your VPC that Amazon Redshift uses when creating the cluster. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
", + "CreateCluster": "Creates a new cluster with the specified parameters.
To create a cluster in Virtual Private Cloud (VPC), you must provide a cluster subnet group name. The cluster subnet group identifies the subnets of your VPC that Amazon Redshift uses when creating the cluster. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
", "CreateClusterParameterGroup": "Creates an Amazon Redshift parameter group.
Creating parameter groups is independent of creating clusters. You can associate a cluster with a parameter group when you create the cluster. You can also associate an existing cluster with a parameter group after the cluster is created by using ModifyCluster.
Parameters in the parameter group define specific behavior that applies to the databases you create on the cluster. For more information about parameters and parameter groups, go to Amazon Redshift Parameter Groups in the Amazon Redshift Cluster Management Guide.
", "CreateClusterSecurityGroup": "Creates a new Amazon Redshift security group. You use security groups to control access to non-VPC clusters.
For information about managing security groups, go to Amazon Redshift Cluster Security Groups in the Amazon Redshift Cluster Management Guide.
", "CreateClusterSnapshot": "Creates a manual snapshot of the specified cluster. The cluster must be in the available
state.
For more information about working with snapshots, go to Amazon Redshift Snapshots in the Amazon Redshift Cluster Management Guide.
", @@ -19,9 +19,9 @@ "CreateHsmConfiguration": "Creates an HSM configuration that contains the information required by an Amazon Redshift cluster to store and use database encryption keys in a Hardware Security Module (HSM). After creating the HSM configuration, you can specify it as a parameter when creating a cluster. The cluster will then store its encryption keys in the HSM.
In addition to creating an HSM configuration, you must also create an HSM client certificate. For more information, go to Hardware Security Modules in the Amazon Redshift Cluster Management Guide.
", "CreateScheduledAction": "Creates a scheduled action. A scheduled action contains a schedule and an Amazon Redshift API action. For example, you can create a schedule of when to run the ResizeCluster
API operation.
Creates a snapshot copy grant that permits Amazon Redshift to use a customer master key (CMK) from AWS Key Management Service (AWS KMS) to encrypt copied snapshots in a destination region.
For more information about managing snapshot copy grants, go to Amazon Redshift Database Encryption in the Amazon Redshift Cluster Management Guide.
", - "CreateSnapshotSchedule": "Creates a new snapshot schedule.
", - "CreateTags": "Adds one or more tags to a specified resource.
A resource can have up to 50 tags. If you try to create more than 50 tags for a resource, you will receive an error and the attempt will fail.
If you specify a key that already exists for the resource, the value for that key will be updated with the new value.
", - "DeleteCluster": "Deletes a previously provisioned cluster. A successful response from the web service indicates that the request was received correctly. Use DescribeClusters to monitor the status of the deletion. The delete operation cannot be canceled or reverted once submitted. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
If you want to shut down the cluster and retain it for future use, set SkipFinalClusterSnapshot to false
and specify a name for FinalClusterSnapshotIdentifier. You can later restore this snapshot to resume using the cluster. If a final cluster snapshot is requested, the status of the cluster will be \"final-snapshot\" while the snapshot is being taken, then it's \"deleting\" once Amazon Redshift begins deleting the cluster.
For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
", + "CreateSnapshotSchedule": "Creates a snapshot schedule with the rate of every 12 hours.
", + "CreateTags": "Adds tags to a cluster.
A resource can have up to 50 tags. If you try to create more than 50 tags for a resource, you will receive an error and the attempt will fail.
If you specify a key that already exists for the resource, the value for that key will be updated with the new value.
", + "DeleteCluster": "Deletes a previously provisioned cluster without its final snapshot being created. A successful response from the web service indicates that the request was received correctly. Use DescribeClusters to monitor the status of the deletion. The delete operation cannot be canceled or reverted once submitted. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
If you want to shut down the cluster and retain it for future use, set SkipFinalClusterSnapshot to false
and specify a name for FinalClusterSnapshotIdentifier. You can later restore this snapshot to resume using the cluster. If a final cluster snapshot is requested, the status of the cluster will be \"final-snapshot\" while the snapshot is being taken, then it's \"deleting\" once Amazon Redshift begins deleting the cluster.
For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
", "DeleteClusterParameterGroup": "Deletes a specified Amazon Redshift parameter group.
You cannot delete a parameter group if it is associated with a cluster.
Deletes an Amazon Redshift security group.
You cannot delete a security group that is associated with any clusters. You cannot delete the default security group.
For information about managing security groups, go to Amazon Redshift Cluster Security Groups in the Amazon Redshift Cluster Management Guide.
", "DeleteClusterSnapshot": "Deletes the specified manual snapshot. The snapshot must be in the available
state, with no other users authorized to access the snapshot.
Unlike automated snapshots, manual snapshots are retained even after you delete your cluster. Amazon Redshift does not delete your manual snapshots. You must delete manual snapshot explicitly to avoid getting charged. If other accounts are authorized to access the snapshot, you must revoke all of the authorizations before you can delete the snapshot.
", @@ -32,7 +32,7 @@ "DeleteScheduledAction": "Deletes a scheduled action.
", "DeleteSnapshotCopyGrant": "Deletes the specified snapshot copy grant.
", "DeleteSnapshotSchedule": "Deletes a snapshot schedule.
", - "DeleteTags": "Deletes a tag or tags from a resource. You must provide the ARN of the resource from which you want to delete the tag or tags.
", + "DeleteTags": "Deletes tags from a resource. You must provide the ARN of the resource from which you want to delete the tag or tags.
", "DescribeAccountAttributes": "Returns a list of attributes attached to an account
", "DescribeClusterDbRevisions": "Returns an array of ClusterDbRevision
objects.
Returns a list of Amazon Redshift parameter groups, including parameter groups you created and the default parameter group. For each parameter group, the response includes the parameter group name, description, and parameter group family name. You can optionally specify a name to retrieve the description of a specific parameter group.
For more information about parameters and parameter groups, go to Amazon Redshift Parameter Groups in the Amazon Redshift Cluster Management Guide.
If you specify both tag keys and tag values in the same request, Amazon Redshift returns all parameter groups that match any combination of the specified keys and values. For example, if you have owner
and environment
for tag keys, and admin
and test
for tag values, all parameter groups that have any combination of those values are returned.
If both tag keys and values are omitted from the request, parameter groups are returned regardless of whether they have tag keys or values associated with them.
", @@ -58,7 +58,7 @@ "DescribeScheduledActions": "Describes properties of scheduled actions.
", "DescribeSnapshotCopyGrants": "Returns a list of snapshot copy grants owned by the AWS account in the destination region.
For more information about managing snapshot copy grants, go to Amazon Redshift Database Encryption in the Amazon Redshift Cluster Management Guide.
", "DescribeSnapshotSchedules": "Returns a list of snapshot schedules.
", - "DescribeStorage": "Returns the total amount of snapshot usage and provisioned storage in megabytes.
", + "DescribeStorage": "Returns account level backups storage size and provisional storage.
", "DescribeTableRestoreStatus": "Lists the status of one or more table restore requests made using the RestoreTableFromClusterSnapshot API action. If you don't specify a value for the TableRestoreRequestId
parameter, then DescribeTableRestoreStatus
returns the status of all table restore requests ordered by the date and time of the request in ascending order. Otherwise DescribeTableRestoreStatus
returns the status of the table specified by TableRestoreRequestId
.
Returns a list of tags. You can return tags from a specific resource by specifying an ARN, or you can return all tags for a given type of resource, such as clusters, snapshots, and so on.
The following are limitations for DescribeTags
:
You cannot specify an ARN and a resource-type value together in the same request.
You cannot use the MaxRecords
and Marker
parameters together with the ARN parameter.
The MaxRecords
parameter can be a range from 10 to 50 results to return in a request.
If you specify both tag keys and tag values in the same request, Amazon Redshift returns all resources that match any combination of the specified keys and values. For example, if you have owner
and environment
for tag keys, and admin
and test
for tag values, all resources that have any combination of those values are returned.
If both tag keys and values are omitted from the request, resources are returned regardless of whether they have tag keys or values associated with them.
", "DisableLogging": "Stops logging information, such as queries and connection attempts, for the specified Amazon Redshift cluster.
", @@ -67,22 +67,22 @@ "EnableSnapshotCopy": "Enables the automatic copy of snapshots from one region to another region for a specified cluster.
", "GetClusterCredentials": "Returns a database user name and temporary password with temporary authorization to log on to an Amazon Redshift database. The action returns the database user name prefixed with IAM:
if AutoCreate
is False
or IAMA:
if AutoCreate
is True
. You can optionally specify one or more database user groups that the user will join at log on. By default, the temporary credentials expire in 900 seconds. You can optionally specify a duration between 900 seconds (15 minutes) and 3600 seconds (60 minutes). For more information, see Using IAM Authentication to Generate Database User Credentials in the Amazon Redshift Cluster Management Guide.
The AWS Identity and Access Management (IAM)user or role that executes GetClusterCredentials must have an IAM policy attached that allows access to all necessary actions and resources. For more information about permissions, see Resource Policies for GetClusterCredentials in the Amazon Redshift Cluster Management Guide.
If the DbGroups
parameter is specified, the IAM policy must allow the redshift:JoinGroup
action with access to the listed dbgroups
.
In addition, if the AutoCreate
parameter is set to True
, then the policy must include the redshift:CreateClusterUser
privilege.
If the DbName
parameter is specified, the IAM policy must allow access to the resource dbname
for the specified database name.
Returns an array of DC2 ReservedNodeOfferings that matches the payment type, term, and usage price of the given DC1 reserved node.
", - "ModifyCluster": "Modifies the settings for a cluster. For example, you can add another security or parameter group, update the preferred maintenance window, or change the master user password. Resetting a cluster password or modifying the security groups associated with a cluster do not need a reboot. However, modifying a parameter group requires a reboot for parameters to take effect. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
You can also change node type and the number of nodes to scale up or down the cluster. When resizing a cluster, you must specify both the number of nodes and the node type even if one of the parameters does not change.
", + "ModifyCluster": "Modifies the settings for a cluster.
You can also change node type and the number of nodes to scale up or down the cluster. When resizing a cluster, you must specify both the number of nodes and the node type even if one of the parameters does not change.
You can add another security or parameter group, or change the master user password. Resetting a cluster password or modifying the security groups associated with a cluster do not need a reboot. However, modifying a parameter group requires a reboot for parameters to take effect. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
", "ModifyClusterDbRevision": "Modifies the database revision of a cluster. The database revision is a unique revision of the database running in a cluster.
", "ModifyClusterIamRoles": "Modifies the list of AWS Identity and Access Management (IAM) roles that can be used by the cluster to access other AWS services.
A cluster can have up to 10 IAM roles associated at any time.
", - "ModifyClusterMaintenance": "Modifies the maintenance settings of a cluster. For example, you can defer a maintenance window. You can also update or cancel a deferment.
", + "ModifyClusterMaintenance": "Modifies the maintenance settings of a cluster.
", "ModifyClusterParameterGroup": "Modifies the parameters of a parameter group.
For more information about parameters and parameter groups, go to Amazon Redshift Parameter Groups in the Amazon Redshift Cluster Management Guide.
", - "ModifyClusterSnapshot": "Modifies the settings for a snapshot.
", + "ModifyClusterSnapshot": "Modifies the settings for a snapshot.
For example, you can modify the manual retention period setting for a cluster snapshot.
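A short sketch of that retention-period change, with placeholder identifiers and the same hedges as the earlier examples:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/redshift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := redshift.New(cfg)

	req := svc.ModifyClusterSnapshotRequest(&redshift.ModifyClusterSnapshotInput{
		SnapshotIdentifier:            aws.String("my-snapshot-id"),
		ManualSnapshotRetentionPeriod: aws.Int64(10),  // days; -1 retains indefinitely
		Force:                         aws.Bool(true), // required if the snapshot falls outside the new period
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```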
", "ModifyClusterSnapshotSchedule": "Modifies a snapshot schedule for a cluster.
", "ModifyClusterSubnetGroup": "Modifies a cluster subnet group to include the specified list of VPC subnets. The operation replaces the existing list of subnets with the new list of subnets.
", "ModifyEventSubscription": "Modifies an existing Amazon Redshift event notification subscription.
", - "ModifyScheduledAction": "Modify a scheduled action.
", + "ModifyScheduledAction": "Modifies a scheduled action.
", "ModifySnapshotCopyRetentionPeriod": "Modifies the number of days to retain snapshots in the destination AWS Region after they are copied from the source AWS Region. By default, this operation only changes the retention period of copied automated snapshots. The retention periods for both new and existing copied automated snapshots are updated with the new retention period. You can set the manual option to change only the retention periods of copied manual snapshots. If you set this option, only newly copied manual snapshots have the new retention period.
", "ModifySnapshotSchedule": "Modifies a snapshot schedule. Any schedule associated with a cluster is modified asynchronously.
", "PurchaseReservedNodeOffering": "Allows you to purchase reserved nodes. Amazon Redshift offers a predefined set of reserved node offerings. You can purchase one or more of the offerings. You can call the DescribeReservedNodeOfferings API to obtain the available reserved node offerings. You can call this API by providing a specific reserved node offering and the number of nodes you want to reserve.
For more information about reserved node offerings, go to Purchasing Reserved Nodes in the Amazon Redshift Cluster Management Guide.
", "RebootCluster": "Reboots a cluster. This action is taken as soon as possible. It results in a momentary outage to the cluster, during which the cluster status is set to rebooting
. A cluster event is created when the reboot is completed. Any pending cluster modifications (see ModifyCluster) are applied at this reboot. For more information about managing clusters, go to Amazon Redshift Clusters in the Amazon Redshift Cluster Management Guide.
Sets one or more parameters of the specified parameter group to their default values and sets the source values of the parameters to \"engine-default\". To reset the entire parameter group, specify the ResetAllParameters parameter. For parameter changes to take effect, you must reboot any associated clusters.
", - "ResizeCluster": "Changes the size of the cluster. You can change the cluster's type, or change the number or type of nodes. The default behavior is to use the elastic resize method. With an elastic resize, your cluster is available for read and write operations more quickly than with the classic resize method.
Elastic resize operations have the following restrictions:
You can only resize clusters of the following types:
dc2.large
dc2.8xlarge
ds2.xlarge
ds2.8xlarge
The type of nodes that you add must match the node type for the cluster.
Changes the size of the cluster. You can change the cluster's type, or change the number or type of nodes. The default behavior is to use the elastic resize method. With an elastic resize, your cluster is available for read and write operations more quickly than with the classic resize method.
Elastic resize operations have the following restrictions:
You can only resize clusters of the following types:
dc2.large
dc2.8xlarge
ds2.xlarge
ds2.8xlarge
ra3.16xlarge
The type of nodes that you add must match the node type for the cluster.
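The elastic resize described above is the default behavior; a minimal sketch under the same assumptions as the earlier examples (placeholder identifiers, v2 request/Send pattern):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/redshift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := redshift.New(cfg)

	// Classic is left unset (false), so the elastic resize method is used.
	req := svc.ResizeClusterRequest(&redshift.ResizeClusterInput{
		ClusterIdentifier: aws.String("examplecluster"),
		NodeType:          aws.String("dc2.large"), // must match the cluster's node type
		NumberOfNodes:     aws.Int64(6),
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```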
Creates a new cluster from a snapshot. By default, Amazon Redshift creates the resulting cluster with the same configuration as the original cluster from which the snapshot was created, except that the new cluster is created with the default cluster security and parameter groups. After Amazon Redshift creates the cluster, you can use the ModifyCluster API to associate a different security group and different parameter group with the restored cluster. If you are using a DS node type, you can also choose to change to another DS node type of the same size during restore.
If you restore a cluster into a VPC, you must provide a cluster subnet group where you want the cluster restored.
For more information about working with snapshots, go to Amazon Redshift Snapshots in the Amazon Redshift Cluster Management Guide.
", "RestoreTableFromClusterSnapshot": "Creates a new table from a table in an Amazon Redshift cluster snapshot. You must create the new table within the Amazon Redshift cluster that the snapshot was taken from.
You cannot use RestoreTableFromClusterSnapshot
to restore a table with the same name as an existing table in an Amazon Redshift cluster. That is, you cannot overwrite an existing table in a cluster with a restored table. If you want to replace your original table with a new, restored table, then rename or drop your original table before you call RestoreTableFromClusterSnapshot
. When you have renamed your original table, then you can pass the original name of the table as the NewTableName
parameter value in the call to RestoreTableFromClusterSnapshot
. This way, you can replace the original table with the table created from the snapshot.
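The rename-then-restore flow above can be sketched as follows; all database, table, and snapshot names are hypothetical, and the original table is assumed to have already been renamed (for example to orders_old) with SQL.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/redshift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := redshift.New(cfg)

	req := svc.RestoreTableFromClusterSnapshotRequest(&redshift.RestoreTableFromClusterSnapshotInput{
		ClusterIdentifier:  aws.String("examplecluster"),
		SnapshotIdentifier: aws.String("my-snapshot-id"),
		SourceDatabaseName: aws.String("dev"),
		SourceTableName:    aws.String("orders"), // table name as it exists in the snapshot
		NewTableName:       aws.String("orders"), // original name, freed by renaming the live table
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```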
Revokes an ingress rule in an Amazon Redshift security group for a previously authorized IP range or Amazon EC2 security group. To add an ingress rule, see AuthorizeClusterSecurityGroupIngress. For information about managing security groups, go to Amazon Redshift Cluster Security Groups in the Amazon Redshift Cluster Management Guide.
", @@ -292,7 +292,7 @@ "ModifyClusterMessage$AllowVersionUpgrade": "If true
, major version upgrades will be applied automatically to the cluster during the maintenance window.
Default: false
If true
, the cluster can be accessed from a public network. Only clusters in VPCs can be set to be publicly available.
An option that specifies whether to create the cluster with enhanced VPC routing enabled. To create a cluster that uses enhanced VPC routing, the cluster must be in a VPC. For more information, see Enhanced VPC Routing in the Amazon Redshift Cluster Management Guide.
If this option is true
, enhanced VPC routing is enabled.
Default: false
", - "ModifyClusterMessage$Encrypted": "Indicates whether the cluster is encrypted. If the cluster is encrypted and you provide a value for the KmsKeyId
parameter, we will encrypt the cluster with the provided KmsKeyId
. If you don't provide a KmsKeyId
, we will encrypt with the default key. In the China region we will use legacy encryption if you specify that the cluster is encrypted.
Indicates whether the cluster is encrypted. If the value is encrypted (true) and you provide a value for the KmsKeyId
parameter, we encrypt the cluster with the provided KmsKeyId
. If you don't provide a KmsKeyId
, we encrypt with the default key. In the China region we use legacy encryption if you specify that the cluster is encrypted.
If the value is not encrypted (false), then the cluster is decrypted.
", "ModifyClusterSnapshotScheduleMessage$DisassociateSchedule": "A boolean to indicate whether to remove the assoiciation between the cluster and the schedule.
", "ModifyEventSubscriptionMessage$Enabled": "A Boolean value indicating if the subscription is enabled. true
indicates the subscription is enabled
A modified enable flag of the scheduled action. If true, the scheduled action is active. If false, the scheduled action is disabled.
", @@ -1367,7 +1367,7 @@ "GetClusterCredentialsMessage$DurationSeconds": "The number of seconds until the returned temporary password expires.
Constraint: minimum 900, maximum 3600.
Default: 900
", "GetReservedNodeExchangeOfferingsInputMessage$MaxRecords": "An integer setting the maximum number of ReservedNodeOfferings to retrieve.
", "ModifyClusterMaintenanceMessage$DeferMaintenanceDuration": "An integer indicating the duration of the maintenance window in days. If you specify a duration, you can't specify an end time. The duration must be 45 days or less.
", - "ModifyClusterMessage$NumberOfNodes": "The new number of nodes of the cluster. If you specify a new number of nodes, you must also specify the node type parameter.
When you submit your request to resize a cluster, Amazon Redshift sets access permissions for the cluster to read-only. After Amazon Redshift provisions a new cluster according to your resize requirements, there will be a temporary outage while the old cluster is deleted and your connection is switched to the new cluster. When the new connection is complete, the original access permissions for the cluster are restored. You can use DescribeResize to track the progress of the resize request.
Valid Values: Integer greater than 0
.
The new number of nodes of the cluster. If you specify a new number of nodes, you must also specify the node type parameter.
For more information about resizing clusters, go to Resizing Clusters in Amazon Redshift in the Amazon Redshift Cluster Management Guide.
Valid Values: Integer greater than 0
.
The number of days that automated snapshots are retained. If the value is 0, automated snapshots are disabled. Even if automated snapshots are disabled, you can still create manual snapshots when you want with CreateClusterSnapshot.
If you decrease the automated snapshot retention period from its current value, existing automated snapshots that fall outside of the new retention period will be immediately deleted.
Default: Uses existing setting.
Constraints: Must be a value from 0 to 35.
", "ModifyClusterMessage$ManualSnapshotRetentionPeriod": "The default for number of days that a newly created manual snapshot is retained. If the value is -1, the manual snapshot is retained indefinitely. This value doesn't retroactively change the retention periods of existing manual snapshots.
The value must be either -1 or an integer between 1 and 3,653.
The default value is -1.
", "ModifyClusterSnapshotMessage$ManualSnapshotRetentionPeriod": "The number of days that a manual snapshot is retained. If the value is -1, the manual snapshot is retained indefinitely.
If the manual snapshot falls outside of the new retention period, you can specify the force option to immediately delete the snapshot.
The value must be either -1 or an integer between 1 and 3,653.
", @@ -2347,11 +2347,11 @@ "CreateClusterMessage$DBName": "The name of the first database to be created when the cluster is created.
To create additional databases after the cluster is created, connect to the cluster with a SQL client and use SQL commands to create a database. For more information, go to Create a Database in the Amazon Redshift Database Developer Guide.
Default: dev
Constraints:
Must contain 1 to 64 alphanumeric characters.
Must contain only lowercase letters.
Cannot be a word that is reserved by the service. A list of reserved words can be found in Reserved Words in the Amazon Redshift Database Developer Guide.
A unique identifier for the cluster. You use this identifier to refer to the cluster for any subsequent cluster operations such as deleting or modifying. The identifier also appears in the Amazon Redshift console.
Constraints:
Must contain from 1 to 63 alphanumeric characters or hyphens.
Alphabetic characters must be lowercase.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Must be unique for all clusters within an AWS account.
Example: myexamplecluster
The type of the cluster. When cluster type is specified as
single-node
, the NumberOfNodes parameter is not required.
multi-node
, the NumberOfNodes parameter is required.
Valid Values: multi-node
| single-node
Default: multi-node
The node type to be provisioned for the cluster. For information about node types, go to Working with Clusters in the Amazon Redshift Cluster Management Guide.
Valid Values: ds2.xlarge
| ds2.8xlarge
| ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
The node type to be provisioned for the cluster. For information about node types, go to Working with Clusters in the Amazon Redshift Cluster Management Guide.
Valid Values: ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
| ra3.16xlarge
The user name associated with the master user account for the cluster that is being created.
Constraints:
Must be 1 - 128 alphanumeric characters. The user name can't be PUBLIC
.
First character must be a letter.
Cannot be a reserved word. A list of reserved words can be found in Reserved Words in the Amazon Redshift Database Developer Guide.
The password associated with the master user account for the cluster that is being created.
Constraints:
Must be between 8 and 64 characters in length.
Must contain at least one uppercase letter.
Must contain at least one lowercase letter.
Must contain one number.
Can be any printable ASCII character (ASCII code 33 to 126) except ' (single quote), \" (double quote), \\, /, @, or space.
The name of a cluster subnet group to be associated with this cluster.
If this parameter is not provided, the resulting cluster is deployed outside of a virtual private cloud (VPC).
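Pulling the constraints above together, a minimal CreateCluster sketch might look like the following; every name and the password are illustrative placeholders chosen to satisfy the stated constraints.

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/redshift"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := redshift.New(cfg)

	req := svc.CreateClusterRequest(&redshift.CreateClusterInput{
		ClusterIdentifier:      aws.String("myexamplecluster"),
		NodeType:               aws.String("dc2.large"),
		ClusterType:            aws.String("multi-node"), // NumberOfNodes is required for multi-node
		NumberOfNodes:          aws.Int64(2),
		MasterUsername:         aws.String("adminuser"),
		MasterUserPassword:     aws.String("ExamplePassw0rd"), // 8-64 chars, upper, lower, number
		DBName:                 aws.String("dev"),
		ClusterSubnetGroupName: aws.String("my-subnet-group"), // omit to deploy outside a VPC
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```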
", - "CreateClusterMessage$AvailabilityZone": "The EC2 Availability Zone (AZ) in which you want Amazon Redshift to provision the cluster. For example, if you have several EC2 instances running in a specific Availability Zone, then you might want the cluster to be provisioned in the same zone in order to decrease network latency.
Default: A random, system-chosen Availability Zone in the region that is specified by the endpoint.
Example: us-east-1d
Constraint: The specified Availability Zone must be in the same region as the current endpoint.
", + "CreateClusterMessage$AvailabilityZone": "The EC2 Availability Zone (AZ) in which you want Amazon Redshift to provision the cluster. For example, if you have several EC2 instances running in a specific Availability Zone, then you might want the cluster to be provisioned in the same zone in order to decrease network latency.
Default: A random, system-chosen Availability Zone in the region that is specified by the endpoint.
Example: us-east-2d
Constraint: The specified Availability Zone must be in the same region as the current endpoint.
", "CreateClusterMessage$PreferredMaintenanceWindow": "The weekly time range (in UTC) during which automated cluster maintenance can occur.
Format: ddd:hh24:mi-ddd:hh24:mi
Default: A 30-minute window selected at random from an 8-hour block of time per region, occurring on a random day of the week. For more information about the time blocks for each region, see Maintenance Windows in Amazon Redshift Cluster Management Guide.
Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
Constraints: Minimum 30-minute window.
", "CreateClusterMessage$ClusterParameterGroupName": "The name of the parameter group to be associated with this cluster.
Default: The default Amazon Redshift cluster parameter group. For information about the default parameter group, go to Working with Amazon Redshift Parameter Groups
Constraints:
Must be 1 to 255 alphanumeric characters or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
The version of the Amazon Redshift engine software that you want to deploy on the cluster.
The version selected runs on all the nodes in the cluster.
Constraints: Only version 1.0 is currently available.
Example: 1.0
A description for the subnet group.
", "CreateEventSubscriptionMessage$SubscriptionName": "The name of the event subscription to be created.
Constraints:
Cannot be null, empty, or blank.
Must contain from 1 to 255 alphanumeric characters or hyphens.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
The Amazon Resource Name (ARN) of the Amazon SNS topic used to transmit the event notifications. The ARN is created by Amazon SNS when you create a topic and subscribe to it.
", - "CreateEventSubscriptionMessage$SourceType": "The type of source that will be generating the events. For example, if you want to be notified of events generated by a cluster, you would set this parameter to cluster. If this value is not specified, events are returned for all Amazon Redshift objects in your AWS account. You must specify a source type in order to specify source IDs.
Valid values: cluster, cluster-parameter-group, cluster-security-group, and cluster-snapshot.
", + "CreateEventSubscriptionMessage$SourceType": "The type of source that will be generating the events. For example, if you want to be notified of events generated by a cluster, you would set this parameter to cluster. If this value is not specified, events are returned for all Amazon Redshift objects in your AWS account. You must specify a source type in order to specify source IDs.
Valid values: cluster, cluster-parameter-group, cluster-security-group, cluster-snapshot, and scheduled-action.
", "CreateEventSubscriptionMessage$Severity": "Specifies the Amazon Redshift event severity to be published by the event notification subscription.
Values: ERROR, INFO
", "CreateHsmClientCertificateMessage$HsmClientCertificateIdentifier": "The identifier to be assigned to the new HSM client certificate that the cluster will use to connect to the HSM to use the database encryption keys.
", "CreateHsmConfigurationMessage$HsmConfigurationIdentifier": "The identifier to be assigned to the new Amazon Redshift HSM configuration.
", @@ -2390,7 +2390,7 @@ "CreateSnapshotCopyGrantMessage$KmsKeyId": "The unique identifier of the customer master key (CMK) to which to grant Amazon Redshift permission. If no key is specified, the default key is used.
", "CreateSnapshotScheduleMessage$ScheduleIdentifier": "A unique identifier for a snapshot schedule. Only alphanumeric characters are allowed for the identifier.
", "CreateSnapshotScheduleMessage$ScheduleDescription": "The description of the snapshot schedule.
", - "CreateTagsMessage$ResourceName": "The Amazon Resource Name (ARN) to which you want to add the tag or tags. For example, arn:aws:redshift:us-east-1:123456789:cluster:t1
.
The Amazon Resource Name (ARN) to which you want to add the tag or tags. For example, arn:aws:redshift:us-east-2:123456789:cluster:t1
.
Describes the status of the cluster. While the transfer is in progress the status is transferringdata
.
The name of the cluster parameter group family to which the engine default parameters apply.
", @@ -2409,7 +2409,7 @@ "DeleteScheduledActionMessage$ScheduledActionName": "The name of the scheduled action to delete.
", "DeleteSnapshotCopyGrantMessage$SnapshotCopyGrantName": "The name of the snapshot copy grant to delete.
", "DeleteSnapshotScheduleMessage$ScheduleIdentifier": "A unique identifier of the snapshot schedule to delete.
", - "DeleteTagsMessage$ResourceName": "The Amazon Resource Name (ARN) from which you want to remove the tag or tags. For example, arn:aws:redshift:us-east-1:123456789:cluster:t1
.
The Amazon Resource Name (ARN) from which you want to remove the tag or tags. For example, arn:aws:redshift:us-east-2:123456789:cluster:t1
.
A unique identifier for a cluster whose ClusterDbRevisions
you are requesting. This parameter is case sensitive. All clusters defined for an account are returned by default.
An optional parameter that specifies the starting point for returning a set of response records. When the results of a DescribeClusterDbRevisions
request exceed the value specified in MaxRecords
, Amazon Redshift returns a value in the marker
field of the response. You can retrieve the next set of response records by providing the returned marker
value in the marker
parameter and retrying the request.
Constraints: You can specify either the ClusterIdentifier
parameter, or the marker
parameter, but not both.
The name of a specific parameter group for which to return details. By default, details about all parameter groups and the default parameter group are returned.
", @@ -2435,7 +2435,7 @@ "DescribeClustersMessage$Marker": "An optional parameter that specifies the starting point to return a set of response records. When the results of a DescribeClusters request exceed the value specified in MaxRecords
, AWS returns a value in the Marker
field of the response. You can retrieve the next set of response records by providing the returned marker value in the Marker
parameter and retrying the request.
Constraints: You can specify either the ClusterIdentifier parameter or the Marker parameter, but not both.
", "DescribeDefaultClusterParametersMessage$ParameterGroupFamily": "The name of the cluster parameter group family.
", "DescribeDefaultClusterParametersMessage$Marker": "An optional parameter that specifies the starting point to return a set of response records. When the results of a DescribeDefaultClusterParameters request exceed the value specified in MaxRecords
, AWS returns a value in the Marker
field of the response. You can retrieve the next set of response records by providing the returned marker value in the Marker
parameter and retrying the request.
The source type, such as cluster or parameter group, to which the described event categories apply.
Valid values: cluster, cluster-snapshot, cluster-parameter-group, and cluster-security-group.
", + "DescribeEventCategoriesMessage$SourceType": "The source type, such as cluster or parameter group, to which the described event categories apply.
Valid values: cluster, cluster-snapshot, cluster-parameter-group, cluster-security-group, and scheduled-action.
", "DescribeEventSubscriptionsMessage$SubscriptionName": "The name of the Amazon Redshift event notification subscription to be described.
", "DescribeEventSubscriptionsMessage$Marker": "An optional parameter that specifies the starting point to return a set of response records. When the results of a DescribeEventSubscriptions request exceed the value specified in MaxRecords
, AWS returns a value in the Marker
field of the response. You can retrieve the next set of response records by providing the returned marker value in the Marker
parameter and retrying the request.
The identifier of the event source for which events will be returned. If this parameter is not specified, then all sources are included in the response.
Constraints:
If SourceIdentifier is supplied, SourceType must also be provided.
Specify a cluster identifier when SourceType is cluster
.
Specify a cluster security group name when SourceType is cluster-security-group
.
Specify a cluster parameter group name when SourceType is cluster-parameter-group
.
Specify a cluster snapshot identifier when SourceType is cluster-snapshot
.
The Amazon Redshift cluster that the table is being restored to.
", "DescribeTableRestoreStatusMessage$TableRestoreRequestId": "The identifier of the table restore request to return status for. If you don't specify a TableRestoreRequestId
value, then DescribeTableRestoreStatus
returns the status of all in-progress table restore requests.
An optional pagination token provided by a previous DescribeTableRestoreStatus
request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by the MaxRecords
parameter.
The Amazon Resource Name (ARN) for which you want to describe the tag or tags. For example, arn:aws:redshift:us-east-1:123456789:cluster:t1
.
The Amazon Resource Name (ARN) for which you want to describe the tag or tags. For example, arn:aws:redshift:us-east-2:123456789:cluster:t1
.
The type of resource with which you want to view tags. Valid resource types are:
Cluster
CIDR/IP
EC2 security group
Snapshot
Cluster security group
Subnet group
HSM connection
HSM certificate
Parameter group
Snapshot copy grant
For more information about Amazon Redshift resource types and constructing ARNs, go to Specifying Policy Elements: Actions, Effects, Resources, and Principals in the Amazon Redshift Cluster Management Guide.
", "DescribeTagsMessage$Marker": "A value that indicates the starting point for the next set of response records in a subsequent request. If a value is returned in a response, you can retrieve the next set of records by providing this returned marker value in the marker
parameter and retrying the command. If the marker
field is empty, all response records have been retrieved for the request.
The identifier of the cluster on which logging is to be stopped.
Example: examplecluster
The name of the Amazon Redshift event notification subscription.
", "EventSubscription$SnsTopicArn": "The Amazon Resource Name (ARN) of the Amazon SNS topic used by the event notification subscription.
", "EventSubscription$Status": "The status of the Amazon Redshift event notification subscription.
Constraints:
Can be one of the following: active | no-permission | topic-not-exist
The status \"no-permission\" indicates that Amazon Redshift no longer has permission to post to the Amazon SNS topic. The status \"topic-not-exist\" indicates that the topic was deleted after the subscription was created.
The source type of the events returned the Amazon Redshift event notification, such as cluster, or cluster-snapshot.
", + "EventSubscription$SourceType": "The source type of the events returned by the Amazon Redshift event notification, such as cluster, cluster-snapshot, cluster-parameter-group, cluster-security-group, or scheduled-action.
", "EventSubscription$Severity": "The event severity specified in the Amazon Redshift event notification subscription.
Values: ERROR, INFO
", "EventSubscriptionsMessage$Marker": "A value that indicates the starting point for the next set of response records in a subsequent request. If a value is returned in a response, you can retrieve the next set of records by providing this returned marker value in the Marker
parameter and retrying the command. If the Marker
field is empty, all response records have been retrieved for the request.
A value that indicates the starting point for the next set of response records in a subsequent request. If a value is returned in a response, you can retrieve the next set of records by providing this returned marker value in the Marker
parameter and retrying the command. If the Marker
field is empty, all response records have been retrieved for the request.
A unique identifier for the deferred maintenance window.
", "ModifyClusterMessage$ClusterIdentifier": "The unique identifier of the cluster to be modified.
Example: examplecluster
The new cluster type.
When you submit your cluster resize request, your existing cluster goes into a read-only mode. After Amazon Redshift provisions a new cluster based on your resize requirements, there will be outage for a period while the old cluster is deleted and your connection is switched to the new cluster. You can use DescribeResize to track the progress of the resize request.
Valid Values: multi-node | single-node
The new node type of the cluster. If you specify a new node type, you must also specify the number of nodes parameter.
When you submit your request to resize a cluster, Amazon Redshift sets access permissions for the cluster to read-only. After Amazon Redshift provisions a new cluster according to your resize requirements, there will be a temporary outage while the old cluster is deleted and your connection is switched to the new cluster. When the new connection is complete, the original access permissions for the cluster are restored. You can use DescribeResize to track the progress of the resize request.
Valid Values: ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
The new node type of the cluster. If you specify a new node type, you must also specify the number of nodes parameter.
For more information about resizing clusters, go to Resizing Clusters in Amazon Redshift in the Amazon Redshift Cluster Management Guide.
Valid Values: ds2.xlarge
| ds2.8xlarge
| dc1.large
| dc1.8xlarge
| dc2.large
| dc2.8xlarge
| ra3.16xlarge
The new password for the cluster master user. This change is asynchronously applied as soon as possible. Between the time of the request and the completion of the request, the MasterUserPassword
element exists in the PendingModifiedValues
element of the operation response.
Operations never return the password, so this operation provides a way to regain access to the master user account for a cluster if the password is lost.
Default: Uses existing setting.
Constraints:
Must be between 8 and 64 characters in length.
Must contain at least one uppercase letter.
Must contain at least one lowercase letter.
Must contain one number.
Can be any printable ASCII character (ASCII code 33 to 126) except ' (single quote), \" (double quote), \\, /, @, or space.
The name of the cluster parameter group to apply to this cluster. This change is applied only after the cluster is rebooted. To reboot a cluster use RebootCluster.
Default: Uses existing setting.
Constraints: The cluster parameter group must be in the same parameter group family that matches the cluster version.
", "ModifyClusterMessage$PreferredMaintenanceWindow": "The weekly time range (in UTC) during which system maintenance can occur, if necessary. If system maintenance is necessary during the window, it may result in an outage.
This maintenance window change is made immediately. If the new maintenance window indicates the current time, there must be at least 120 minutes between the current time and end of the window in order to ensure that pending changes are applied.
Default: Uses existing setting.
Format: ddd:hh24:mi-ddd:hh24:mi, for example wed:07:30-wed:08:00
.
Valid Days: Mon | Tue | Wed | Thu | Fri | Sat | Sun
Constraints: Must be at least 30 minutes.
", @@ -2556,7 +2556,7 @@ "ModifyClusterSubnetGroupMessage$Description": "A text description of the subnet group to be modified.
", "ModifyEventSubscriptionMessage$SubscriptionName": "The name of the modified Amazon Redshift event notification subscription.
", "ModifyEventSubscriptionMessage$SnsTopicArn": "The Amazon Resource Name (ARN) of the SNS topic to be used by the event notification subscription.
", - "ModifyEventSubscriptionMessage$SourceType": "The type of source that will be generating the events. For example, if you want to be notified of events generated by a cluster, you would set this parameter to cluster. If this value is not specified, events are returned for all Amazon Redshift objects in your AWS account. You must specify a source type in order to specify source IDs.
Valid values: cluster, cluster-parameter-group, cluster-security-group, and cluster-snapshot.
", + "ModifyEventSubscriptionMessage$SourceType": "The type of source that will be generating the events. For example, if you want to be notified of events generated by a cluster, you would set this parameter to cluster. If this value is not specified, events are returned for all Amazon Redshift objects in your AWS account. You must specify a source type in order to specify source IDs.
Valid values: cluster, cluster-parameter-group, cluster-security-group, cluster-snapshot, and scheduled-action.
", "ModifyEventSubscriptionMessage$Severity": "Specifies the Amazon Redshift event severity to be published by the event notification subscription.
Values: ERROR, INFO
", "ModifyScheduledActionMessage$ScheduledActionName": "The name of the scheduled action to modify.
", "ModifyScheduledActionMessage$Schedule": "A modified schedule in either at( )
or cron( )
format. For more information about this parameter, see ScheduledAction.
The identifier of the cluster that will be created from restoring the snapshot.
Constraints:
Must contain from 1 to 63 alphanumeric characters or hyphens.
Alphabetic characters must be lowercase.
First character must be a letter.
Cannot end with a hyphen or contain two consecutive hyphens.
Must be unique for all clusters within an AWS account.
The name of the snapshot from which to create the new cluster. This parameter isn't case sensitive.
Example: my-snapshot-id
The name of the cluster the source snapshot was created from. This parameter is required if your IAM user has a policy containing a snapshot resource element that specifies anything other than * for the cluster name.
", - "RestoreFromClusterSnapshotMessage$AvailabilityZone": "The Amazon EC2 Availability Zone in which to restore the cluster.
Default: A random, system-chosen Availability Zone.
Example: us-east-1a
The Amazon EC2 Availability Zone in which to restore the cluster.
Default: A random, system-chosen Availability Zone.
Example: us-east-2a
The name of the subnet group where you want the cluster restored.
A snapshot of a cluster in a VPC can be restored only in a VPC. Therefore, you must provide the subnet group name where you want the cluster restored.
", "RestoreFromClusterSnapshotMessage$OwnerAccount": "The AWS customer account used to create or copy the snapshot. Required if you are restoring a snapshot you do not own, optional if you own the snapshot.
", "RestoreFromClusterSnapshotMessage$HsmClientCertificateIdentifier": "Specifies the name of the HSM client certificate the Amazon Redshift cluster uses to retrieve the data encryption keys stored in an HSM.
", @@ -2699,7 +2699,7 @@ "Tag$Value": "The value for the resource tag.
", "TagKeyList$member": null, "TagValueList$member": null, - "TaggedResource$ResourceName": "The Amazon Resource Name (ARN) with which the tag is associated, for example: arn:aws:redshift:us-east-1:123456789:cluster:t1
.
The Amazon Resource Name (ARN) with which the tag is associated, for example: arn:aws:redshift:us-east-2:123456789:cluster:t1
.
The type of resource with which the tag is associated. Valid resource types are:
Cluster
CIDR/IP
EC2 security group
Snapshot
Cluster security group
Subnet group
HSM connection
HSM certificate
Parameter group
For more information about Amazon Redshift resource types and constructing ARNs, go to Constructing an Amazon Redshift Amazon Resource Name (ARN) in the Amazon Redshift Cluster Management Guide.
", "TaggedResourceListMessage$Marker": "A value that indicates the starting point for the next set of response records in a subsequent request. If a value is returned in a response, you can retrieve the next set of records by providing this returned marker value in the Marker
parameter and retrying the command. If the Marker
field is empty, all response records have been retrieved for the request.
The starting point to return a set of response tracklist records. You can retrieve the next set of response records by providing the returned marker value in the Marker
parameter and retrying the request.
The tag key is noncompliant with the effective tag policy.
", - "ComplianceDetails$KeysWithNoncompliantValues": "The tag value is noncompliant with the effective tag policy.
", + "ComplianceDetails$NoncompliantKeys": "These tag keys on the resource are noncompliant with the effective tag policy.
", + "ComplianceDetails$KeysWithNoncompliantValues": "These are keys defined in the effective policy that are on the resource with either incorrect case treatment or noncompliant values.
", "GetTagKeysOutput$TagKeys": "A list of all tag keys in the AWS account.
" } }, diff --git a/models/apis/s3/2006-03-01/docs-2.json b/models/apis/s3/2006-03-01/docs-2.json index 473ffcab864..129e9b2f6b7 100644 --- a/models/apis/s3/2006-03-01/docs-2.json +++ b/models/apis/s3/2006-03-01/docs-2.json @@ -4,9 +4,9 @@ "operations": { "AbortMultipartUpload": "This operation aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID. The storage consumed by any previously uploaded parts will be freed. However, if any part uploads are currently in progress, those part uploads might or might not succeed. As a result, it might be necessary to abort a given multipart upload multiple times in order to completely free all storage consumed by all parts.
To verify that all parts have been removed, so you don't get charged for the part storage, you should call the ListParts operation and ensure that the parts list is empty.
For information about permissions required to use the multipart upload API, see Multipart Upload API and Permissions.
The following operations are related to AbortMultipartUpload
:
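The abort-then-verify sequence described above translates to two calls; a sketch under the same v2 request/Send assumptions, with a placeholder bucket, key, and upload ID:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)
	ctx := context.Background()

	abortReq := svc.AbortMultipartUploadRequest(&s3.AbortMultipartUploadInput{
		Bucket:   aws.String("my-bucket"),
		Key:      aws.String("my-object"),
		UploadId: aws.String("example-upload-id"),
	})
	if _, err := abortReq.Send(ctx); err != nil {
		log.Fatal(err)
	}

	// Verify that no parts remain, so no further part storage is billed.
	listReq := svc.ListPartsRequest(&s3.ListPartsInput{
		Bucket:   aws.String("my-bucket"),
		Key:      aws.String("my-object"),
		UploadId: aws.String("example-upload-id"),
	})
	resp, err := listReq.Send(ctx)
	if err != nil {
		// A NoSuchUpload error here also means the upload is fully removed.
		log.Fatal(err)
	}
	fmt.Println("remaining parts:", len(resp.Parts))
}
```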
Completes a multipart upload by assembling previously uploaded parts.
You first initiate the multipart upload and then upload all parts using the UploadPart operation. After successfully uploading all relevant parts of an upload, you call this operation to complete the upload. Upon receiving this request, Amazon S3 concatenates all the parts in ascending order by part number to create a new object. In the Complete Multipart Upload request, you must provide the parts list. You must ensure that the parts list is complete. This operation concatenates the parts that you provide in the list. For each part in the list, you must provide the part number and the ETag
value, returned after that part was uploaded.
Processing of a Complete Multipart Upload request could take several minutes to complete. After Amazon S3 begins processing the request, it sends an HTTP response header that specifies a 200 OK response. While processing is in progress, Amazon S3 periodically sends white space characters to keep the connection from timing out. Because a request could fail after the initial 200 OK response has been sent, it is important that you check the response body to determine whether the request succeeded.
Note that if CompleteMultipartUpload
fails, applications should be prepared to retry the failed requests. For more information, see Amazon S3 Error Best Practices.
For more information about multipart uploads, see Uploading Objects Using Multipart Upload.
For information about permissions required to use the multipart upload API, see Multipart Upload API and Permissions.
CompleteMultipartUpload
has the following special errors:
Error code: EntityTooSmall
Description: Your proposed upload is smaller than the minimum allowed object size. Each part must be at least 5 MB in size, except the last part.
400 Bad Request
Error code: InvalidPart
Description: One or more of the specified parts could not be found. The part might not have been uploaded, or the specified entity tag might not have matched the part's entity tag.
400 Bad Request
Error code: InvalidPartOrder
Description: The list of parts was not in ascending order. The parts list must be specified in order by part number.
400 Bad Request
Error code: NoSuchUpload
Description: The specified multipart upload does not exist. The upload ID might be invalid, or the multipart upload might have been aborted or completed.
404 Not Found
The following operations are related to DeleteBucketMetricsConfiguration
:
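As the description above stresses, the parts list must be complete and in ascending part-number order, with the ETag returned by each UploadPart. A sketch with two hypothetical parts:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.CompleteMultipartUploadRequest(&s3.CompleteMultipartUploadInput{
		Bucket:   aws.String("my-bucket"),
		Key:      aws.String("my-object"),
		UploadId: aws.String("example-upload-id"),
		MultipartUpload: &s3.CompletedMultipartUpload{
			// Ascending order by part number; ETags come from the UploadPart responses.
			Parts: []s3.CompletedPart{
				{ETag: aws.String("\"etag-part-1\""), PartNumber: aws.Int64(1)},
				{ETag: aws.String("\"etag-part-2\""), PartNumber: aws.Int64(2)},
			},
		},
	})
	// The request can fail after the initial 200 OK, so be prepared to retry.
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```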
Creates a copy of an object that is already stored in Amazon S3.
You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic operation using this API. However, for copying an object greater than 5 GB, you must use the multipart upload Upload Part - Copy API. For more information, see Copy Object Using the REST Multipart Upload API.
When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.
Amazon S3 transfer acceleration does not support cross-region copies. If you request a cross-region copy using a transfer acceleration endpoint, you get a 400 Bad Request
error. For more information about transfer acceleration, see Transfer Acceleration.
All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.
To only copy an object under certain conditions, such as whether the Etag
matches or whether the object was modified before or after a specified date, use the request parameters x-amz-copy-source-if-match
, x-amz-copy-source-if-none-match
, x-amz-copy-source-if-unmodified-since
, or x-amz-copy-source-if-modified-since
.
All headers with the x-amz-
prefix, including x-amz-copy-source
, must be signed.
You can use this operation to change the storage class of an object that is already stored in Amazon S3 using the StorageClass
parameter. For more information, see Storage Classes.
The source object that you are copying can be encrypted or unencrypted. If the source object is encrypted, it can be encrypted by server-side encryption using AWS managed encryption keys or by using a customer-provided encryption key. When copying an object, you can request that Amazon S3 encrypt the target object by using either the AWS managed encryption keys or by using your own encryption key. You can do this regardless of the form of server-side encryption that was used to encrypt the source, or even if the source object was not encrypted. For more information about server-side encryption, see Using Server-Side Encryption.
A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy operation starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK
response. This means that a 200 OK
response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.
If the copy is successful, you receive a response with information about the copied object.
If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.
Consider the following when using request headers:
Consideration 1 – If both the x-amz-copy-source-if-match
and x-amz-copy-source-if-unmodified-since
headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:
x-amz-copy-source-if-match
condition evaluates to true
x-amz-copy-source-if-unmodified-since
condition evaluates to false
Consideration 2 – If both of the x-amz-copy-source-if-none-match
and x-amz-copy-source-if-modified-since
headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed
response code:
x-amz-copy-source-if-none-match
condition evaluates to false
x-amz-copy-source-if-modified-since
condition evaluates to true
The copy request charge is based on the storage class and Region you specify for the destination object. For pricing information, see Amazon S3 Pricing.
Following are other considerations when using CopyObject
:
By default, x-amz-copy-source
identifies the current version of an object to copy. (If the current version is a delete marker, Amazon S3 behaves as if the object was deleted.) To copy a different version, use the versionId
subresource.
If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id
response header in the response.
If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.
If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see .
When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:
Specify a canned ACL with the x-amz-acl
request header. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
To encrypt the target object, you must provide the appropriate encryption-related request headers. The one you use depends on whether you want to use AWS managed encryption keys or provide your own encryption key.
To encrypt the target object using server-side encryption with an AWS managed encryption key, provide the following request headers, as appropriate.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
but don't provide x-amz-server-side- encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed customer master key (CMK) in AWS KMS to protect the data.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in KMS.
To encrypt the target object using server-side encryption with an encryption key that you provide, use the following headers.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
If the source object is encrypted using server-side encryption with customer-provided encryption keys, you must use the following headers.
x-amz-copy-source-server-side-encryption-customer-algorithm
x-amz-copy-source-server-side-encryption-customer-key
x-amz-copy-source-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in Amazon KMS.
You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:
Specify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly — To explicitly grant access permissions to specific AWS accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:
x-amz-grant-read
x-amz-grant-write
x-amz-grant-read-acp
x-amz-grant-write-acp
x-amz-grant-full-control
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants the AWS accounts identified by email addresses permissions to read object data and its metadata:
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
The following operations are related to CopyObject
:
For more information, see Copying Objects.
", + "CopyObject": "Creates a copy of an object that is already stored in Amazon S3.
You can store individual objects of up to 5 TB in Amazon S3. You create a copy of your object up to 5 GB in size in a single atomic operation using this API. However, for copying an object greater than 5 GB, you must use the multipart upload Upload Part - Copy API. For more information, see Copy Object Using the REST Multipart Upload API.
When copying an object, you can preserve all metadata (default) or specify new metadata. However, the ACL is not preserved and is set to private for the user making the request. To override the default ACL setting, specify a new ACL when generating a copy request. For more information, see Using ACLs.
Amazon S3 transfer acceleration does not support cross-region copies. If you request a cross-region copy using a transfer acceleration endpoint, you get a 400 Bad Request
error. For more information about transfer acceleration, see Transfer Acceleration.
All copy requests must be authenticated. Additionally, you must have read access to the source object and write access to the destination bucket. For more information, see REST Authentication. Both the Region that you want to copy the object from and the Region that you want to copy the object to must be enabled for your account.
To only copy an object under certain conditions, such as whether the Etag
matches or whether the object was modified before or after a specified date, use the request parameters x-amz-copy-source-if-match
, x-amz-copy-source-if-none-match
, x-amz-copy-source-if-unmodified-since
, or x-amz-copy-source-if-modified-since
.
All headers with the x-amz-
prefix, including x-amz-copy-source
, must be signed.
You can use this operation to change the storage class of an object that is already stored in Amazon S3 using the StorageClass
parameter. For more information, see Storage Classes.
The source object that you are copying can be encrypted or unencrypted. If the source object is encrypted, it can be encrypted by server-side encryption using AWS managed encryption keys or by using a customer-provided encryption key. When copying an object, you can request that Amazon S3 encrypt the target object by using either the AWS managed encryption keys or by using your own encryption key. You can do this regardless of the form of server-side encryption that was used to encrypt the source, or even if the source object was not encrypted. For more information about server-side encryption, see Using Server-Side Encryption.
A copy request might return an error when Amazon S3 receives the copy request or while Amazon S3 is copying the files. If the error occurs before the copy operation starts, you receive a standard Amazon S3 error. If the error occurs during the copy operation, the error response is embedded in the 200 OK
response. This means that a 200 OK
response can contain either a success or an error. Design your application to parse the contents of the response and handle it appropriately.
If the copy is successful, you receive a response with information about the copied object.
If the request is an HTTP 1.1 request, the response is chunk encoded. If it were not, it would not contain the content-length, and you would need to read the entire body.
Consider the following when using request headers:
Consideration 1 – If both the x-amz-copy-source-if-match
and x-amz-copy-source-if-unmodified-since
headers are present in the request and evaluate as follows, Amazon S3 returns 200 OK and copies the data:
x-amz-copy-source-if-match
condition evaluates to true
x-amz-copy-source-if-unmodified-since
condition evaluates to false
Consideration 2 – If both of the x-amz-copy-source-if-none-match
and x-amz-copy-source-if-modified-since
headers are present in the request and evaluate as follows, Amazon S3 returns the 412 Precondition Failed
response code:
x-amz-copy-source-if-none-match
condition evaluates to false
x-amz-copy-source-if-modified-since
condition evaluates to true
The copy request charge is based on the storage class and Region you specify for the destination object. For pricing information, see Amazon S3 Pricing.
Following are other considerations when using CopyObject
:
By default, x-amz-copy-source
identifies the current version of an object to copy. (If the current version is a delete marker, Amazon S3 behaves as if the object was deleted.) To copy a different version, use the versionId
subresource.
If you enable versioning on the target bucket, Amazon S3 generates a unique version ID for the object being copied. This version ID is different from the version ID of the source object. Amazon S3 returns the version ID of the copied object in the x-amz-version-id
response header in the response.
If you do not enable versioning or suspend it on the target bucket, the version ID that Amazon S3 generates is always null.
If the source object's storage class is GLACIER, you must restore a copy of this object before you can use it as a source object for the copy operation. For more information, see .
When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:
Specify a canned ACL with the x-amz-acl
request header. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
To encrypt the target object, you must provide the appropriate encryption-related request headers. The one you use depends on whether you want to use AWS managed encryption keys or provide your own encryption key.
To encrypt the target object using server-side encryption with an AWS managed encryption key, provide the following request headers, as appropriate.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
, but don't provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data. If you want to use a customer managed AWS KMS CMK, you must provide the x-amz-server-side-encryption-aws-kms-key-id
of the symmetric customer managed CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in KMS.
To encrypt the target object using server-side encryption with an encryption key that you provide, use the following headers.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
If the source object is encrypted using server-side encryption with customer-provided encryption keys, you must use the following headers.
x-amz-copy-source-server-side-encryption-customer-algorithm
x-amz-copy-source-server-side-encryption-customer-key
x-amz-copy-source-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in Amazon KMS.
You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:
Specify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly — To explicitly grant access permissions to specific AWS accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:
x-amz-grant-read
x-amz-grant-write
x-amz-grant-read-acp
x-amz-grant-write-acp
x-amz-grant-full-control
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants the AWS accounts identified by email addresses permissions to read object data and its metadata:
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
The following operations are related to CopyObject
:
For more information, see Copying Objects.
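To tie the discussion above together, here is a minimal, illustrative sketch of a copy call using this SDK's preview request pattern (external.LoadDefaultAWSConfig, s3.New, then CopyObjectRequest/Send). The bucket and key names are placeholders, and the VersionId check reflects the versioning behavior described above:

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	// CopySource is "source-bucket/source-key"; Bucket and Key name the target.
	req := svc.CopyObjectRequest(&s3.CopyObjectInput{
		Bucket:     aws.String("target-bucket"),
		Key:        aws.String("target-key"),
		CopySource: aws.String("source-bucket/source-key"),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// Set only when versioning is enabled on the target bucket; otherwise
	// the version ID Amazon S3 generates is always null.
	if resp.VersionId != nil {
		fmt.Println("copied object version:", *resp.VersionId)
	}
}
```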
", "CreateBucket": "Creates a new bucket. To create a bucket, you must register with Amazon S3 and have a valid AWS Access Key ID to authenticate requests. Anonymous requests are never allowed to create buckets. By creating the bucket, you become the bucket owner.
Not every string is an acceptable bucket name. For information on bucket naming restrictions, see Working with Amazon S3 Buckets.
By default, the bucket is created in the US East (N. Virginia) Region. You can optionally specify a Region in the request body. You might choose a Region to optimize latency, minimize costs, or address regulatory requirements. For example, if you reside in Europe, you will probably find it advantageous to create buckets in the EU (Ireland) Region. For more information, see How to Select a Region for Your Buckets.
If you send your create bucket request to the s3.amazonaws.com
endpoint, the request goes to the us-east-1 Region. Accordingly, the signature calculations in Signature Version 4 must use us-east-1 as the Region, even if the location constraint in the request specifies another Region where the bucket is to be created. If you create a bucket in a Region other than US East (N. Virginia), your application must be able to handle a 307 redirect. For more information, see Virtual Hosting of Buckets.
When creating a bucket using this operation, you can optionally specify the accounts or groups that should be granted specific permissions on the bucket. There are two ways to grant the appropriate permissions using the request headers.
Specify a canned ACL using the x-amz-acl
request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly using the x-amz-grant-read
, x-amz-grant-write
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. These headers map to the set of permissions Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants the AWS accounts identified by email addresses permissions to read object data and its metadata:
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
The following operations are related to CreateBucket
:
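A minimal sketch of the Region and canned-ACL options described above, using the same preview client pattern; the bucket name is a placeholder, and the enum constants follow the generated preview types, so exact names may vary between preview releases:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	// Without CreateBucketConfiguration the bucket is created in us-east-1.
	req := svc.CreateBucketRequest(&s3.CreateBucketInput{
		Bucket: aws.String("my-example-bucket"),
		ACL:    s3.BucketCannedACLPrivate, // optional canned ACL
		CreateBucketConfiguration: &s3.CreateBucketConfiguration{
			LocationConstraint: s3.BucketLocationConstraintEuWest1,
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```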
This operation initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.
For more information about multipart uploads, see Multipart Upload Overview.
If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort operation and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.
For information about the permissions required to use the multipart upload API, see Multipart Upload API and Permissions.
For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (AWS Signature Version 4).
After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.
You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use AWS Key Management Service (AWS KMS) customer master keys (CMKs) or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload
.
To perform a multipart upload with encryption using an AWS KMS CMK, the requester must have permission to the kms:Encrypt
, kms:Decrypt
, kms:ReEncrypt*
, kms:GenerateDataKey*
, and kms:DescribeKey
actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload.
If your AWS Identity and Access Management (IAM) user or role is in the same AWS account as the AWS KMS CMK, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.
For more information, see Protecting Data Using Server-Side Encryption.
When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:
Specify a canned ACL with the x-amz-acl
request header. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use AWS managed encryption keys or provide your own encryption key.
Use encryption keys managed by Amazon S3 or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS) – If you want AWS to manage the keys used to encrypt data, specify the following headers in the request.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
, but don't provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:
Specify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly — To explicitly grant access permissions to specific AWS accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:
x-amz-grant-read
x-amz-grant-write
x-amz-grant-read-acp
x-amz-grant-write-acp
x-amz-grant-full-control
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants the AWS accounts identified by email addresses permissions to read object data and its metadata:
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
The following operations are related to CreateMultipartUpload
:
This operation initiates a multipart upload and returns an upload ID. This upload ID is used to associate all of the parts in the specific multipart upload. You specify this upload ID in each of your subsequent upload part requests (see UploadPart). You also include this upload ID in the final request to either complete or abort the multipart upload request.
For more information about multipart uploads, see Multipart Upload Overview.
If you have configured a lifecycle rule to abort incomplete multipart uploads, the upload must complete within the number of days specified in the bucket lifecycle configuration. Otherwise, the incomplete multipart upload becomes eligible for an abort operation and Amazon S3 aborts the multipart upload. For more information, see Aborting Incomplete Multipart Uploads Using a Bucket Lifecycle Policy.
For information about the permissions required to use the multipart upload API, see Multipart Upload API and Permissions.
For request signing, multipart upload is just a series of regular requests. You initiate a multipart upload, send one or more requests to upload parts, and then complete the multipart upload process. You sign each request individually. There is nothing special about signing multipart upload requests. For more information about signing, see Authenticating Requests (AWS Signature Version 4).
After you initiate a multipart upload and upload one or more parts, to stop being charged for storing the uploaded parts, you must either complete or abort the multipart upload. Amazon S3 frees up the space used to store the parts and stops charging you for storing them only after you either complete or abort a multipart upload.
You can optionally request server-side encryption. For server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. You can provide your own encryption key, or use AWS Key Management Service (AWS KMS) customer master keys (CMKs) or Amazon S3-managed encryption keys. If you choose to provide your own encryption key, the request headers you provide in UploadPart and UploadPartCopy requests must match the headers you used in the request to initiate the upload by using CreateMultipartUpload
.
To perform a multipart upload with encryption using an AWS KMS CMK, the requester must have permission to the kms:Encrypt
, kms:Decrypt
, kms:ReEncrypt*
, kms:GenerateDataKey*
, and kms:DescribeKey
actions on the key. These permissions are required because Amazon S3 must decrypt and read data from the encrypted file parts before it completes the multipart upload.
If your AWS Identity and Access Management (IAM) user or role is in the same AWS account as the AWS KMS CMK, then you must have these permissions on the key policy. If your IAM user or role belongs to a different account than the key, then you must have the permissions on both the key policy and your IAM user or role.
For more information, see Protecting Data Using Server-Side Encryption.
When copying an object, you can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:
Specify a canned ACL with the x-amz-acl
request header. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use AWS managed encryption keys or provide your own encryption key.
Use encryption keys managed by Amazon S3 or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS) – If you want AWS to manage the keys used to encrypt data, specify the following headers in the request.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
, but don't provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the access control list (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:
Specify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly — To explicitly grant access permissions to specific AWS accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:
x-amz-grant-read
x-amz-grant-write
x-amz-grant-read-acp
x-amz-grant-write-acp
x-amz-grant-full-control
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants the AWS accounts identified by email addresses permissions to read object data and its metadata:
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
The following operations are related to CreateMultipartUpload
:
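As a sketch of initiating an encrypted multipart upload with the preview client, assuming a placeholder bucket, key, and KMS key ARN (the returned UploadId is what every subsequent part request must reference):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	// Initiate with SSE-KMS; the requester needs the kms:* permissions listed above.
	req := svc.CreateMultipartUploadRequest(&s3.CreateMultipartUploadInput{
		Bucket:               aws.String("my-example-bucket"),
		Key:                  aws.String("large-object"),
		ServerSideEncryption: s3.ServerSideEncryptionAwsKms,
		SSEKMSKeyId:          aws.String("arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-ID"),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// Each UploadPart, and the final complete/abort call, must carry this ID.
	fmt.Println("upload ID:", *resp.UploadId)
}
```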
Deletes the bucket. All objects (including all object versions and delete markers) in the bucket must be deleted before the bucket itself can be deleted.
Related Resources
Deletes an analytics configuration for the bucket (specified by the analytics configuration ID).
To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.
The following operations are related to DeleteBucketAnalyticsConfiguration
:
Deletes the cors
configuration information set for the bucket.
To use this operation, you must have permission to perform the s3:PutBucketCORS
action. The bucket owner has this permission by default and can grant this permission to others.
For information about cors
, see Enabling Cross-Origin Resource Sharing in the Amazon Simple Storage Service Developer Guide.
Related Resources:
", @@ -19,7 +19,7 @@ "DeleteBucketTagging": "Deletes the tags from the bucket.
To use this operation, you must have permission to perform the s3:PutBucketTagging
action. By default, the bucket owner has this permission and can grant this permission to others.
The following operations are related to DeleteBucketTagging
:
This operation removes the website configuration for a bucket. Amazon S3 returns a 200 OK
response upon successfully deleting a website configuration on the specified bucket. You will get a 200 OK
response if the website configuration you are trying to delete does not exist on the bucket. Amazon S3 returns a 404
response if the bucket specified in the request does not exist.
This DELETE operation requires the S3:DeleteBucketWebsite
permission. By default, only the bucket owner can delete the website configuration attached to a bucket. However, bucket owners can grant other users permission to delete the website configuration by writing a bucket policy granting them the S3:DeleteBucketWebsite
permission.
For more information about hosting websites, see Hosting Websites on Amazon S3.
The following operations are related to DeleteBucketWebsite
:
Removes the null version (if there is one) of an object and inserts a delete marker, which becomes the latest version of the object. If there isn't a null version, Amazon S3 does not remove any objects.
To remove a specific version, you must be the bucket owner and you must use the versionId subresource. Using this subresource permanently deletes the version. If the object deleted is a delete marker, Amazon S3 sets the response header, x-amz-delete-marker
, to true.
If the object you want to delete is in a bucket where the bucket versioning configuration is MFA Delete enabled, you must include the x-amz-mfa
request header in the DELETE versionId
request. Requests that include x-amz-mfa
must use HTTPS.
For more information about MFA Delete, see Using MFA Delete. To see sample requests that use versioning, see Sample Request.
You can delete objects by explicitly calling the DELETE Object API or configure its lifecycle (PutBucketLifecycle) to enable Amazon S3 to remove them for you. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them the s3:DeleteObject
, s3:DeleteObjectVersion
, and s3:PutLifeCycleConfiguration
actions.
The following operation is related to DeleteObject
:
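A sketch of permanently deleting one version of an object (bucket owner only), with placeholder names; omitting VersionId would insert a delete marker instead, and the x-amz-mfa header is not shown:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.DeleteObjectRequest(&s3.DeleteObjectInput{
		Bucket:    aws.String("my-example-bucket"),
		Key:       aws.String("my-key"),
		VersionId: aws.String("EXAMPLE-VERSION-ID"), // omit to add a delete marker instead
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```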
Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.
To use this operation, you must have permission to perform the s3:DeleteObjectTagging
action.
To delete tags of a specific object version, add the versionId
query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging
action.
The following operations are related to DeleteObjectTagging
:
Removes the entire tag set from the specified object. For more information about managing object tags, see Object Tagging.
To use this operation, you must have permission to perform the s3:DeleteObjectTagging
action.
To delete tags of a specific object version, add the versionId
query parameter in the request. You will need permission for the s3:DeleteObjectVersionTagging
action.
The following operations are related to DeleteObjectTagging
:
This operation enables you to delete multiple objects from a bucket using a single HTTP request. If you know the object keys that you want to delete, then this operation provides a suitable alternative to sending individual delete requests, reducing per-request overhead.
The request contains a list of up to 1000 keys that you want to delete. In the XML, you provide the object key names, and optionally, version IDs if you want to delete a specific version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a delete operation and returns the result of that delete, success or failure, in the response. Note that if the object specified in the request is not found, Amazon S3 returns the result as deleted.
The operation supports two modes for the response: verbose and quiet. By default, the operation uses verbose mode in which the response includes the result of deletion of each key in your request. In quiet mode the response includes only keys where the delete operation encountered an error. For a successful deletion, the operation does not return any information about the delete in the response body.
When performing this operation on an MFA Delete enabled bucket with a request that attempts to delete any versioned objects, you must include an MFA token. If you do not provide one, the entire request will fail, even if there are non-versioned objects you are trying to delete. If you provide an invalid token, the entire Multi-Object Delete request will fail whether or not there are versioned keys in the request. For information about MFA Delete, see MFA Delete.
Finally, the Content-MD5 header is required for all Multi-Object Delete requests. Amazon S3 uses the header value to ensure that your request body has not been altered in transit.
The following operations are related to DeleteObjects
:
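A quiet-mode sketch of the batch delete described above (placeholder keys; it is assumed the SDK computes the mandatory Content-MD5 header for this operation on your behalf):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.DeleteObjectsRequest(&s3.DeleteObjectsInput{
		Bucket: aws.String("my-example-bucket"),
		Delete: &s3.Delete{
			Objects: []s3.ObjectIdentifier{
				{Key: aws.String("a.txt")},
				{Key: aws.String("b.txt"), VersionId: aws.String("EXAMPLE-VERSION-ID")},
			},
			Quiet: aws.Bool(true), // quiet mode: the response reports only failed deletes
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```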
Removes the PublicAccessBlock
configuration for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock
permission. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to DeletePublicAccessBlock
:
This implementation of the GET operation uses the accelerate
subresource to return the Transfer Acceleration state of a bucket, which is either Enabled
or Suspended
. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to and from Amazon S3.
To use this operation, you must have permission to perform the s3:GetAccelerateConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
You set the Transfer Acceleration state of an existing bucket to Enabled
or Suspended
by using the PutBucketAccelerateConfiguration operation.
A GET accelerate
request does not return a state value for a bucket that has no transfer acceleration state. A bucket has no Transfer Acceleration state if a state has never been set on the bucket.
For more information about transfer acceleration, see Transfer Acceleration in the Amazon Simple Storage Service Developer Guide.
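A short sketch of reading the Transfer Acceleration state with the preview client (placeholder bucket; an empty Status is what you see when the state has never been set):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.GetBucketAccelerateConfigurationRequest(&s3.GetBucketAccelerateConfigurationInput{
		Bucket: aws.String("my-example-bucket"),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// "Enabled", "Suspended", or empty if acceleration was never configured.
	fmt.Println("accelerate status:", string(resp.Status))
}
```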
Related Resources
", @@ -65,7 +65,7 @@ "PutBucketAcl": "Sets the permissions on an existing bucket using access control lists (ACL). For more information, see Using ACLs. To set the ACL of a bucket, you must have WRITE_ACP
permission.
You can use one of the following two ways to set a bucket's permissions:
Specify the ACL in the request body
Specify permissions using request headers
You cannot specify access permission using both the body and the request headers.
Depending on your application needs, you may choose to set the ACL on a bucket using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, then you can continue to use that approach.
Access Permissions
You can set access permissions using one of the following methods:
Specify a canned ACL with the x-amz-acl
request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. Specify the canned ACL name as the value of x-amz-acl
. If you use this header, you cannot use other access control-specific headers in your request. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. When using these headers, you specify explicit access permissions and grantees (AWS accounts or Amazon S3 groups) who will receive the permission. If you use these ACL-specific headers, you cannot use the x-amz-acl
header to set a canned ACL. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-write
header grants create, overwrite, and delete objects permission to LogDelivery group predefined by Amazon S3 and two AWS accounts identified by their email addresses.
x-amz-grant-write: uri=\"http://acs.amazonaws.com/groups/s3/LogDelivery\", emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
Grantee Values
You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways:
By Email address:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"AmazonCustomerByEmail\"><EmailAddress><>Grantees@email.com<></EmailAddress>lt;/Grantee>
The grantee is resolved to the CanonicalUser and, in a response to a GET Object acl request, appears as the CanonicalUser.
By the person's ID:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\"><ID><>ID<></ID><DisplayName><>GranteesEmail<></DisplayName> </Grantee>
DisplayName is optional and ignored in the request
By URI:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"Group\"><URI><>http://acs.amazonaws.com/groups/global/AuthenticatedUsers<></URI></Grantee>
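To make the grant headers concrete, here is a sketch that mirrors the x-amz-grant-write example above through the preview client's typed input (placeholder bucket; GrantWrite carries the same type=value list shown earlier, and explicit grants cannot be combined with a canned ACL):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	// Only explicit grants are set here; do not also set a canned ACL.
	req := svc.PutBucketAclRequest(&s3.PutBucketAclInput{
		Bucket:     aws.String("my-example-bucket"),
		GrantWrite: aws.String(`uri="http://acs.amazonaws.com/groups/s3/LogDelivery", emailAddress="xyz@amazon.com", emailAddress="abc@amazon.com"`),
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```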
Related Resources
", "PutBucketAnalyticsConfiguration": "Sets an analytics configuration for the bucket (specified by the analytics configuration ID). You can have up to 1,000 analytics configurations per bucket.
You can choose to have storage class analysis export analysis reports sent to a comma-separated values (CSV) flat file. See the DataExport
request element. Reports are updated daily and are based on the object filters that you configure. When selecting data export, you specify a destination bucket and an optional destination prefix where the file is written. You can export the data to a destination bucket in a different account. However, the destination bucket must be in the same Region as the bucket that you are making the PUT analytics configuration to. For more information, see Amazon S3 Analytics – Storage Class Analysis.
You must create a bucket policy on the destination bucket where the exported file is written to grant permissions to Amazon S3 to write objects to the bucket. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
Special Errors
HTTP Error: HTTP 400 Bad Request
Code: InvalidArgument
Cause: Invalid argument.
HTTP Error: HTTP 400 Bad Request
Code: TooManyConfigurations
Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.
HTTP Error: HTTP 403 Forbidden
Code: AccessDenied
Cause: You are not the owner of the specified bucket, or you do not have the s3:PutAnalyticsConfiguration bucket permission to set the configuration on the bucket.
Related Resources
Sets the cors
configuration for your bucket. If the configuration exists, Amazon S3 replaces it.
To use this operation, you must be allowed to perform the s3:PutBucketCORS
action. By default, the bucket owner has this permission and can grant it to others.
You set this configuration on a bucket so that the bucket can service cross-origin requests. For example, you might want to enable a request whose origin is http://www.example.com
to access your Amazon S3 bucket at my.example.bucket.com
by using the browser's XMLHttpRequest
capability.
To enable cross-origin resource sharing (CORS) on a bucket, you add the cors
subresource to the bucket. The cors
subresource is an XML document in which you configure rules that identify origins and the HTTP methods that can be executed on your bucket. The document is limited to 64 KB in size.
When Amazon S3 receives a cross-origin request (or a pre-flight OPTIONS request) against a bucket, it evaluates the cors
configuration on the bucket and uses the first CORSRule
rule that matches the incoming browser request to enable a cross-origin request. For a rule to match, the following conditions must be met:
The request's Origin
header must match AllowedOrigin
elements.
The request method (for example, GET, PUT, HEAD, and so on) or the Access-Control-Request-Method
header in case of a pre-flight OPTIONS
request must be one of the AllowedMethod
elements.
Every header specified in the Access-Control-Request-Headers
request header of a pre-flight request must match an AllowedHeader
element.
For more information about CORS, go to Enabling Cross-Origin Resource Sharing in the Amazon Simple Storage Service Developer Guide.
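A sketch of a single CORS rule matching the example origin above (placeholder bucket; the methods, headers, and max-age are illustrative):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutBucketCorsRequest(&s3.PutBucketCorsInput{
		Bucket: aws.String("my.example.bucket.com"),
		CORSConfiguration: &s3.CORSConfiguration{
			CORSRules: []s3.CORSRule{{
				AllowedOrigins: []string{"http://www.example.com"},
				AllowedMethods: []string{"GET", "PUT"},
				AllowedHeaders: []string{"*"},
				MaxAgeSeconds:  aws.Int64(3000),
			}},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```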
Related Resources
", - "PutBucketEncryption": "This implementation of the PUT
operation uses the encryption
subresource to set the default encryption state of an existing bucket.
This implementation of the PUT
operation sets default encryption for a buckets using server-side encryption with Amazon S3-managed keys SSE-S3 or AWS KMS customer master keys (CMKs) (SSE-KMS) bucket.
This operation requires AWS Signature Version 4. For more information, see Authenticating Requests (AWS Signature Version 4).
To use this operation, you must have permissions to perform the s3:PutEncryptionConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
Related Resources
", + "PutBucketEncryption": "This implementation of the PUT
operation uses the encryption
subresource to set the default encryption state of an existing bucket.
This implementation of the PUT
operation sets default encryption for a bucket using server-side encryption with Amazon S3-managed keys SSE-S3 or AWS KMS customer master keys (CMKs) (SSE-KMS).
This operation requires AWS Signature Version 4. For more information, see Authenticating Requests (AWS Signature Version 4).
To use this operation, you must have permissions to perform the s3:PutEncryptionConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
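A sketch of enabling SSE-S3 default encryption with the preview client (placeholder bucket; for SSE-KMS you would swap the algorithm and supply the KMS master key ID, assuming those generated field names):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutBucketEncryptionRequest(&s3.PutBucketEncryptionInput{
		Bucket: aws.String("my-example-bucket"),
		ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
			Rules: []s3.ServerSideEncryptionRule{{
				ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
					SSEAlgorithm: s3.ServerSideEncryptionAes256, // SSE-S3
				},
			}},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```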
Related Resources
", "PutBucketInventoryConfiguration": "This implementation of the PUT
operation adds an inventory configuration (identified by the inventory ID) to the bucket. You can have up to 1,000 inventory configurations per bucket.
Amazon S3 inventory generates inventories of the objects in the bucket on a daily or weekly basis, and the results are published to a flat file. The bucket that is inventoried is called the source bucket, and the bucket where the inventory flat file is stored is called the destination bucket. The destination bucket must be in the same AWS Region as the source bucket.
When you configure an inventory for a source bucket, you specify the destination bucket where you want the inventory to be stored, and whether to generate the inventory daily or weekly. You can also configure what object metadata to include and whether to inventory all object versions or only current versions. For more information, see Amazon S3 Inventory in the Amazon Simple Storage Service Developer Guide.
You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to write objects to the bucket in the defined location. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
To use this operation, you must have permissions to perform the s3:PutInventoryConfiguration
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
Special Errors
HTTP 400 Bad Request Error
Code: InvalidArgument
Cause: Invalid Argument
HTTP 400 Bad Request Error
Code: TooManyConfigurations
Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.
HTTP 403 Forbidden Error
Code: AccessDenied
Cause: You are not the owner of the specified bucket, or you do not have the s3:PutInventoryConfiguration
bucket permission to set the configuration on the bucket
Related Resources
For an updated version of this API, see PutBucketLifecycleConfiguration. This version has been deprecated. Existing lifecycle configurations will work. For new lifecycle configurations, use the updated API.
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. For information about lifecycle configuration, see Object Lifecycle Management in the Amazon Simple Storage Service Developer Guide.
By default, all Amazon S3 resources, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration) are private. Only the resource owner, the AWS account that created the resource, can access it. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, users must get the s3:PutLifecycleConfiguration
permission.
You can also explicitly deny permissions. Explicit denial also supersedes any other permissions. If you want to prevent users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:
s3:DeleteObject
s3:DeleteObjectVersion
s3:PutLifecycleConfiguration
For more information about permissions, see Managing Access Permissions to your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
For more examples of transitioning objects to storage classes such as STANDARD_IA or ONEZONE_IA, see Examples of Lifecycle Configuration.
Related Resources
GetBucketLifecycle (Deprecated)
By default, a resource owner—in this case, a bucket owner, which is the AWS account that created the bucket—can perform any of the operations. A resource owner can also grant others permission to perform the operation. For more information, see the following topics in the Amazon Simple Storage Service Developer Guide:
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. For information about lifecycle configuration, see Object Lifecycle Management in the Amazon Simple Storage Service Developer Guide.
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, or a combination of both. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility. For the related API description, see PutBucketLifecycle.
Rules
You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. Each rule consists of the following:
Filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, or a combination of both.
Status, indicating whether the rule is in effect.
One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter. If the state of your bucket is versioning-enabled or versioning-suspended, you can have many versions of the same object (one current version and zero or more noncurrent versions). Amazon S3 provides predefined actions that you can specify for current and noncurrent object versions.
For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.
Permissions
By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). Only the resource owner (that is, the AWS account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission.
You can also explicitly deny permissions. Explicit deny also supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:
s3:DeleteObject
s3:DeleteObjectVersion
s3:PutLifecycleConfiguration
For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.
The following are related to PutBucketLifecycleConfiguration
:
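A sketch of one prefix-filtered expiration rule using the preview client; the shape and enum names follow the generated types as best understood, and the bucket, rule ID, and prefix are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutBucketLifecycleConfigurationRequest(&s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("my-example-bucket"),
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []s3.LifecycleRule{{
				ID:         aws.String("expire-old-logs"),
				Status:     s3.ExpirationStatusEnabled,
				Filter:     &s3.LifecycleRuleFilter{Prefix: aws.String("logs/")},
				Expiration: &s3.LifecycleExpiration{Days: aws.Int64(30)},
			}},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```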
Sets the tags for a bucket.
Use tags to organize your AWS bill to reflect your own cost structure. To do this, sign up to get your AWS account bill with tag key values included. Then, to see the cost of combined resources, organize your billing information according to resources with the same tag key values. For example, you can tag several resources with a specific application name, and then organize your billing information to see the total cost of that application across several services. For more information, see Cost Allocation and Tagging.
Within a bucket, if you add a tag that has the same key as an existing tag, the new value overwrites the old value. For more information, see Using Cost Allocation in Amazon S3 Bucket Tags.
To use this operation, you must have permissions to perform the s3:PutBucketTagging
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
PutBucketTagging
has the following special errors:
Error code: InvalidTagError
Description: The tag provided was not a valid tag. This error can occur if the tag did not pass input validation. For information about tag restrictions, see User-Defined Tag Restrictions and AWS-Generated Cost Allocation Tag Restrictions.
Error code: MalformedXMLError
Description: The XML provided does not match the schema.
Error code: OperationAbortedError
Description: A conflicting conditional operation is currently in progress against this resource. Please try again.
Error code: InternalError
Description: The service was unable to apply the provided tag to the bucket.
The following operations are related to PutBucketTagging
:
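A sketch of the cost-allocation tagging described above (placeholder bucket and tag values; remember that a repeated key overwrites the old value):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutBucketTaggingRequest(&s3.PutBucketTaggingInput{
		Bucket: aws.String("my-example-bucket"),
		Tagging: &s3.Tagging{
			TagSet: []s3.Tag{
				{Key: aws.String("application"), Value: aws.String("reports")},
				{Key: aws.String("cost-center"), Value: aws.String("1234")},
			},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```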
Sets the versioning state of an existing bucket. To set the versioning state, you must be the bucket owner.
You can set the versioning state with one of the following values:
Enabled—Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID.
Suspended—Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null.
If the versioning state has never been set on a bucket, it has no versioning state; a GetBucketVersioning request does not return a versioning state value.
If the bucket owner enables MFA Delete in the bucket versioning configuration, the bucket owner must include the x-amz-mfa request
header and the Status
and the MfaDelete
request elements in a request to set the versioning state of the bucket.
If you have an object expiration lifecycle policy in your non-versioned bucket and you want to maintain the same permanent delete behavior when you enable versioning, you must add a noncurrent expiration policy. The noncurrent expiration lifecycle policy will manage the deletes of the noncurrent object versions in the version-enabled bucket. (A version-enabled bucket maintains one current and zero or more noncurrent object versions.) For more information, see Lifecycle and Versioning.
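A sketch of enabling versioning with the preview client (placeholder bucket; the x-amz-mfa header and MfaDelete element described above are omitted):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutBucketVersioningRequest(&s3.PutBucketVersioningInput{
		Bucket: aws.String("my-example-bucket"),
		VersioningConfiguration: &s3.VersioningConfiguration{
			Status: s3.BucketVersioningStatusEnabled, // or BucketVersioningStatusSuspended
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```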
Related Resources
", "PutBucketWebsite": "Sets the configuration of the website that is specified in the website
subresource. To configure a bucket as a website, you can add this subresource on the bucket with website configuration information such as the file name of the index document and any redirect rules. For more information, see Hosting Websites on Amazon S3.
This PUT operation requires the S3:PutBucketWebsite
permission. By default, only the bucket owner can configure the website attached to a bucket; however, bucket owners can allow other users to set the website configuration by writing a bucket policy that grants them the S3:PutBucketWebsite
permission.
To redirect all website requests sent to the bucket's website endpoint, you add a website configuration with the following elements. Because all requests are sent to another website, you don't need to provide an index document name for the bucket. (A short sketch of this redirect-all configuration follows the second element list below.)
WebsiteConfiguration
RedirectAllRequestsTo
HostName
Protocol
If you want granular control over redirects, you can use the following elements to add routing rules that describe conditions for redirecting requests and information about the redirect destination. In this case, the website configuration must provide an index document for the bucket, because some requests might not be redirected.
WebsiteConfiguration
IndexDocument
Suffix
ErrorDocument
Key
RoutingRules
RoutingRule
Condition
HttpErrorCodeReturnedEquals
KeyPrefixEquals
Redirect
Protocol
HostName
ReplaceKeyPrefixWith
ReplaceKeyWith
HttpRedirectCode
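A sketch of the redirect-all configuration from the first element list (placeholder bucket and host; rules for the granular case would go in WebsiteConfiguration.RoutingRules instead):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutBucketWebsiteRequest(&s3.PutBucketWebsiteInput{
		Bucket: aws.String("my-example-bucket"),
		WebsiteConfiguration: &s3.WebsiteConfiguration{
			// Redirect every request; no index document is needed in this mode.
			RedirectAllRequestsTo: &s3.RedirectAllRequestsTo{
				HostName: aws.String("www.example.com"),
				Protocol: s3.ProtocolHttps,
			},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```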
Adds an object to a bucket. You must have WRITE permissions on a bucket to add an object to it.
Amazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the entire object to the bucket.
Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. Amazon S3 does not provide object locking; if you need this, make sure to build it into your application layer or use versioning instead.
To ensure that data is not corrupted traversing the network, use the Content-MD5
header. When you use this header, Amazon S3 checks the object against the provided MD5 value and, if they do not match, returns an error. Additionally, you can calculate the MD5 while putting an object to Amazon S3 and compare the returned ETag to the calculated MD5 value.
To configure your application to send the request headers before sending the request body, use the 100-continue
HTTP status code. For PUT operations, this helps you avoid sending the message body if the message is rejected based on the headers (for example, because authentication fails or a redirect occurs). For more information on the 100-continue
HTTP status code, see Section 8.2.3 of http://www.ietf.org/rfc/rfc2616.txt.
You can optionally request server-side encryption. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. You have the option to provide your own encryption key or use AWS managed encryption keys. For more information, see Using Server-Side Encryption.
You can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:
Specify a canned ACL with the x-amz-acl
request header. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use AWS managed encryption keys or provide your own encryption key.
Use encryption keys managed by Amazon S3 or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS) – If you want AWS to manage the keys used to encrypt data, specify the following headers in the request.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
, but don't provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the Access Control List (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:
Specify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly — To explicitly grant access permissions to specific AWS accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly, use:
x-amz-grant-read
x-amz-grant-write
x-amz-grant-read-acp
x-amz-grant-write-acp
x-amz-grant-full-control
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
Using email addresses to specify a grantee is only supported in the following AWS Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
EU (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the AWS General Reference
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants the AWS accounts identified by email addresses permissions to read object data and its metadata:
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use AWS-managed encryption keys or provide your own encryption key.
Use encryption keys managed by Amazon S3 or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS) – If you want AWS to manage the keys used to encrypt data, specify the following headers in the request.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
, but don't provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the default AWS KMS CMK to protect the data.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.
If you use this feature, the ETag value that Amazon S3 returns in the response is not the MD5 of the object.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Storage Class Options
By default, Amazon S3 uses the Standard storage class to store newly created objects. The Standard storage class provides high durability and high availability. You can specify other storage classes depending on the performance needs. For more information, see Storage Classes in the Amazon Simple Storage Service Developer Guide.
Versioning
If you enable versioning for a bucket, Amazon S3 automatically generates a unique version ID for the object being stored. Amazon S3 returns this ID in the response using the x-amz-version-id response
header. If versioning is suspended, Amazon S3 always uses null as the version ID for the object stored. For more information about returning the versioning state of a bucket, see GetBucketVersioning. If you enable versioning for a bucket, when Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects.
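Finally, a sketch of an SSE-KMS upload with the preview client (placeholder bucket, key, and inline body; SSEKMSKeyId is omitted so Amazon S3 falls back to the AWS managed CMK, per the header discussion above):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"strings"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket:               aws.String("my-example-bucket"),
		Key:                  aws.String("hello.txt"),
		Body:                 strings.NewReader("hello, world"),
		ServerSideEncryption: s3.ServerSideEncryptionAwsKms,
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	// With SSE-KMS (or SSE-C) the returned ETag is not the MD5 of the object.
	if resp.ETag != nil {
		fmt.Println("stored, ETag:", *resp.ETag)
	}
}
```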
Related Resources
", + "PutObject": "Adds an object to a bucket. You must have WRITE permissions on a bucket to add an object to it.
Amazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the entire object to the bucket.
Amazon S3 is a distributed system. If it receives multiple write requests for the same object simultaneously, it overwrites all but the last object written. Amazon S3 does not provide object locking; if you need this, make sure to build it into your application layer or use versioning instead.
To ensure that data is not corrupted traversing the network, use the Content-MD5
header. When you use this header, Amazon S3 checks the object against the provided MD5 value and, if they do not match, returns an error. Additionally, you can calculate the MD5 while putting an object to Amazon S3 and compare the returned ETag to the calculated MD5 value.
To configure your application to send the request headers before sending the request body, use the 100-continue
HTTP status code. For PUT operations, this helps you avoid sending the message body if the message is rejected based on the headers (for example, because authentication fails or a redirect occurs). For more information on the 100-continue
HTTP status code, see Section 8.2.3 of http://www.ietf.org/rfc/rfc2616.txt.
You can optionally request server-side encryption. With server-side encryption, Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts the data when you access it. You have the option to provide your own encryption key or use AWS managed encryption keys. For more information, see Using Server-Side Encryption.
You can optionally specify the accounts or groups that should be granted specific permissions on the new object. There are two ways to grant the permissions using the request headers:
Specify a canned ACL with the x-amz-acl
request header. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use AWS managed encryption keys or provide your own encryption key.
Use encryption keys managed by Amazon S3 or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS) – If you want AWS to manage the keys used to encrypt data, specify the following headers in the request.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
, but don't provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data. If you want to use a customer managed AWS KMS CMK, you must provide the x-amz-server-side-encryption-aws-kms-key-id
of the symmetric customer managed CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
You also can use the following access control–related headers with this operation. By default, all objects are private. Only the owner has full access control. When adding a new object, you can grant permissions to individual AWS accounts or to predefined groups defined by Amazon S3. These permissions are then added to the Access Control List (ACL) on the object. For more information, see Using ACLs. With this operation, you can grant access permissions using one of the following two methods:
Specify a canned ACL (x-amz-acl
) — Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. For more information, see Canned ACL.
Specify access permissions explicitly — To explicitly grant access permissions to specific AWS accounts or groups, use the following headers. Each header maps to specific permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview. In the header, you specify a list of grantees who get the specific permission. To grant permissions explicitly use:
x-amz-grant-read
x-amz-grant-write
x-amz-grant-read-acp
x-amz-grant-write-acp
x-amz-grant-full-control
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
Using email addresses to specify a grantee is only supported in the following AWS Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
EU (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the AWS General Reference.
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants the AWS accounts identified by email addresses permissions to read object data and its metadata:
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
You can optionally tell Amazon S3 to encrypt data at rest using server-side encryption. Server-side encryption is for data encryption at rest. Amazon S3 encrypts your data as it writes it to disks in its data centers and decrypts it when you access it. The option you use depends on whether you want to use AWS-managed encryption keys or provide your own encryption key.
Use encryption keys managed by Amazon S3 or customer master keys (CMKs) stored in AWS Key Management Service (AWS KMS) – If you want AWS to manage the keys used to encrypt data, specify the following headers in the request.
x-amz-server-side-encryption
x-amz-server-side-encryption-aws-kms-key-id
x-amz-server-side-encryption-context
If you specify x-amz-server-side-encryption:aws:kms
, but don't provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data. If you want to use a customer managed AWS KMS CMK, you must provide the x-amz-server-side-encryption-aws-kms-key-id
of the symmetric customer managed CMK. Amazon S3 only supports symmetric CMKs and not asymmetric CMKs. For more information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.
All GET and PUT requests for an object protected by AWS KMS fail if you don't make them with SSL or by using SigV4.
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Use customer-provided encryption keys – If you want to manage your own encryption keys, provide all the following headers in the request.
If you use this feature, the ETag value that Amazon S3 returns in the response is not the MD5 of the object.
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about server-side encryption with CMKs stored in AWS KMS (SSE-KMS), see Protecting Data Using Server-Side Encryption with CMKs stored in AWS KMS.
Storage Class Options
By default, Amazon S3 uses the Standard storage class to store newly created objects. The Standard storage class provides high durability and high availability. You can specify other storage classes depending on the performance needs. For more information, see Storage Classes in the Amazon Simple Storage Service Developer Guide.
Versioning
If you enable versioning for a bucket, Amazon S3 automatically generates a unique version ID for the object being stored. Amazon S3 returns this ID in the response using the x-amz-version-id response
header. If versioning is suspended, Amazon S3 always uses null as the version ID for the object stored. For more information about returning the versioning state of a bucket, see GetBucketVersioning. If you enable versioning for a bucket, when Amazon S3 receives multiple write requests for the same object simultaneously, it stores all of the objects.
Related Resources
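A minimal sketch of a PutObject call that requests SSE-KMS, written against this SDK's preview request/Send pattern. The bucket, key, and KMS key ID are placeholders, and the exact client surface may differ slightly between preview releases:

```go
package main

import (
	"bytes"
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	// myBucket, myKey, and the KMS key ID below are placeholders.
	req := svc.PutObjectRequest(&s3.PutObjectInput{
		Bucket: aws.String("myBucket"),
		Key:    aws.String("myKey"),
		Body:   bytes.NewReader([]byte("hello world")),
		// These two fields map to the x-amz-server-side-encryption and
		// x-amz-server-side-encryption-aws-kms-key-id headers.
		ServerSideEncryption: s3.ServerSideEncryptionAwsKms,
		SSEKMSKeyId:          aws.String("1234abcd-12ab-34cd-56ef-1234567890ab"),
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	log.Println("stored object:", resp)
}
```

For customer-provided keys (SSE-C), the analogous input fields are SSECustomerAlgorithm, SSECustomerKey, and SSECustomerKeyMD5, mirroring the three headers listed above.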
", "PutObjectAcl": "Uses the acl
subresource to set the access control list (ACL) permissions for an object that already exists in a bucket. You must have WRITE_ACP
permission to set the ACL of an object.
Depending on your application needs, you can choose to set the ACL on an object using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, you can continue to use that approach.
Access Permissions
You can set access permissions using one of the following methods:
Specify a canned ACL with the x-amz-acl
request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. Specify the canned ACL name as the value of x-amz-acl
. If you use this header, you cannot use other access control-specific headers in your request. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. When using these headers, you specify explicit access permissions and grantees (AWS accounts or Amazon S3 groups) who will receive the permission. If you use these ACL-specific headers, you cannot use the x-amz-acl
header to set a canned ACL. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the following:
emailAddress
– if the value specified is the email address of an AWS account
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
For example, the following x-amz-grant-read
header grants list objects permission to the two AWS accounts identified by their email addresses.
x-amz-grant-read: emailAddress=\"xyz@amazon.com\", emailAddress=\"abc@amazon.com\"
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
Grantee Values
You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways:
By Email address:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"AmazonCustomerByEmail\"><EmailAddress><>Grantees@email.com<></EmailAddress>lt;/Grantee>
The grantee is resolved to the CanonicalUser and, in a response to a GET Object acl request, appears as the CanonicalUser.
By the person's ID:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\"><ID><>ID<></ID><DisplayName><>GranteesEmail<></DisplayName> </Grantee>
DisplayName is optional and ignored in the request.
By URI:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"Group\"><URI><>http://acs.amazonaws.com/groups/global/AuthenticatedUsers<></URI></Grantee>
Versioning
The ACL of an object is set at the object version level. By default, PUT sets the ACL of the current version of an object. To set the ACL of a different version, use the versionId
subresource.
Related Resources
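A hedged sketch of PutObjectAcl using explicit grants, matching the x-amz-grant-read example above (same preview client pattern; the bucket and key are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	// Explicit grants, equivalent to the x-amz-grant-read header above.
	// A canned ACL (ACL: s3.ObjectCannedACLPublicRead) could be used
	// instead, but not together with the grant headers.
	req := svc.PutObjectAclRequest(&s3.PutObjectAclInput{
		Bucket:    aws.String("myBucket"),
		Key:       aws.String("myKey"),
		GrantRead: aws.String(`emailAddress="xyz@amazon.com", emailAddress="abc@amazon.com"`),
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```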
", "PutObjectLegalHold": "Applies a Legal Hold configuration to the specified object.
Related Resources
", "PutObjectLockConfiguration": "Places an Object Lock configuration on the specified bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket.
DefaultRetention
requires either Days or Years. You can't specify both at the same time.
Related Resources
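As a sketch of the DefaultRetention rule described above (Days and Years are mutually exclusive), assuming the same preview client pattern and a placeholder bucket:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := s3.New(cfg)

	req := svc.PutObjectLockConfigurationRequest(&s3.PutObjectLockConfigurationInput{
		Bucket: aws.String("myBucket"),
		ObjectLockConfiguration: &s3.ObjectLockConfiguration{
			ObjectLockEnabled: s3.ObjectLockEnabledEnabled,
			Rule: &s3.ObjectLockRule{
				// DefaultRetention takes Days or Years, never both.
				DefaultRetention: &s3.DefaultRetention{
					Mode: s3.ObjectLockRetentionModeGovernance,
					Days: aws.Int64(30),
				},
			},
		},
	})
	if _, err := req.Send(context.TODO()); err != nil {
		log.Fatal(err)
	}
}
```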
", @@ -2944,7 +2944,7 @@ "ReplicaKmsKeyID": { "base": null, "refs": { - "EncryptionConfiguration$ReplicaKmsKeyID": "Specifies the AWS KMS Key ID (Key ARN or Alias ARN) for the destination bucket. Amazon S3 uses this key to encrypt replica objects.
" + "EncryptionConfiguration$ReplicaKmsKeyID": "Specifies the ID (Key ARN or Alias ARN) of the customer managed customer master key (CMK) stored in AWS Key Management Service (KMS) for the destination bucket. Amazon S3 uses this key to encrypt replica objects. Amazon S3 only supports symmetric customer managed CMKs. For more information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.
" } }, "ReplicationConfiguration": { @@ -3035,7 +3035,7 @@ } }, "RequestPayer": { - "base": "Confirms that the requester knows that she or he will be charged for the request. Bucket owners need not specify this parameter in their requests. For information about downloading objects from Requester Pays buckets, see Downloading Objects in Requestor Pays Buckets in the Amazon S3 Developer Guide.
", + "base": "Confirms that the requester knows that they will be charged for the request. Bucket owners need not specify this parameter in their requests. For information about downloading objects from requester pays buckets, see Downloading Objects in Requestor Pays Buckets in the Amazon S3 Developer Guide.
", "refs": { "AbortMultipartUploadRequest$RequestPayer": null, "CompleteMultipartUploadRequest$RequestPayer": null, @@ -3262,20 +3262,20 @@ "SSEKMSKeyId": { "base": null, "refs": { - "CompleteMultipartUploadOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
", - "CopyObjectOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
", - "CopyObjectRequest$SSEKMSKeyId": "Specifies the AWS KMS key ID to use for object encryption. All GET and PUT requests for an object protected by AWS KMS will fail if not made via SSL or using SigV4. For information about configuring using any of the officially supported AWS SDKs and AWS CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 Developer Guide.
", - "CreateMultipartUploadOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
", - "CreateMultipartUploadRequest$SSEKMSKeyId": "Specifies the AWS KMS key ID to use for object encryption. All GET and PUT requests for an object protected by AWS KMS will fail if not made via SSL or using SigV4. For information about configuring using any of the officially supported AWS SDKs and AWS CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 Developer Guide.
", - "Encryption$KMSKeyId": "If the encryption type is aws:kms
, this optional value specifies the AWS KMS key ID to use for encryption of job results.
If present, specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
", - "HeadObjectOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
", - "PutObjectOutput$SSEKMSKeyId": "If x-amz-server-side-encryption
is present and has the value of aws:kms
, this header specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
If x-amz-server-side-encryption
is present and has the value of aws:kms
, this header specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
If the value of x-amz-server-side-encryption
is aws:kms
, this header specifies the ID of the AWS KMS CMK that will be used for the object. If you specify x-amz-server-side-encryption:aws:kms
, but do not provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS to protect the data.
Specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) to use for encrypting inventory reports.
", + "CompleteMultipartUploadOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
", + "CopyObjectOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
", + "CopyObjectRequest$SSEKMSKeyId": "Specifies the AWS KMS key ID to use for object encryption. All GET and PUT requests for an object protected by AWS KMS will fail if not made via SSL or using SigV4. For information about configuring using any of the officially supported AWS SDKs and AWS CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 Developer Guide.
", + "CreateMultipartUploadOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
", + "CreateMultipartUploadRequest$SSEKMSKeyId": "Specifies the ID of the symmetric customer managed AWS KMS CMK to use for object encryption. All GET and PUT requests for an object protected by AWS KMS will fail if not made via SSL or using SigV4. For information about configuring using any of the officially supported AWS SDKs and AWS CLI, see Specifying the Signature Version in Request Authentication in the Amazon S3 Developer Guide.
", + "Encryption$KMSKeyId": "If the encryption type is aws:kms
, this optional value specifies the ID of the symmetric customer managed AWS KMS CMK to use for encryption of job results. Amazon S3 only supports symmetric CMKs. For more information, see Using Symmetric and Asymmetric Keys in the AWS Key Management Service Developer Guide.
If present, specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
", + "HeadObjectOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
", + "PutObjectOutput$SSEKMSKeyId": "If x-amz-server-side-encryption
is present and has the value of aws:kms
, this header specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
If x-amz-server-side-encryption
is present and has the value of aws:kms
, this header specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
If the value of x-amz-server-side-encryption
is aws:kms
, this header specifies the ID of the symmetric customer managed AWS KMS CMK that will be used for the object. If you specify x-amz-server-side-encryption:aws:kms
, but do not provide x-amz-server-side-encryption-aws-kms-key-id
, Amazon S3 uses the AWS managed CMK in AWS KMS to protect the data.
Specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) to use for encrypting inventory reports.
", "ServerSideEncryptionByDefault$KMSMasterKeyID": "KMS master key ID to use for the default encryption. This parameter is allowed if and only if SSEAlgorithm
is set to aws:kms
.
If present, specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) that was used for the object.
", - "UploadPartOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) customer master key (CMK) was used for the object.
" + "UploadPartCopyOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) that was used for the object.
", + "UploadPartOutput$SSEKMSKeyId": "If present, specifies the ID of the AWS Key Management Service (AWS KMS) symmetric customer managed customer master key (CMK) was used for the object.
" } }, "SSES3": { diff --git a/models/apis/sagemaker/2017-07-24/api-2.json b/models/apis/sagemaker/2017-07-24/api-2.json index 44de4dafb53..17d165443f0 100644 --- a/models/apis/sagemaker/2017-07-24/api-2.json +++ b/models/apis/sagemaker/2017-07-24/api-2.json @@ -259,7 +259,10 @@ "requestUri":"/" }, "input":{"shape":"CreatePresignedDomainUrlRequest"}, - "output":{"shape":"CreatePresignedDomainUrlResponse"} + "output":{"shape":"CreatePresignedDomainUrlResponse"}, + "errors":[ + {"shape":"ResourceNotFound"} + ] }, "CreatePresignedNotebookInstanceUrl":{ "name":"CreatePresignedNotebookInstanceUrl", @@ -816,6 +819,15 @@ {"shape":"ResourceNotFound"} ] }, + "DescribeWorkforce":{ + "name":"DescribeWorkforce", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeWorkforceRequest"}, + "output":{"shape":"DescribeWorkforceResponse"} + }, "DescribeWorkteam":{ "name":"DescribeWorkteam", "http":{ @@ -1105,7 +1117,10 @@ "requestUri":"/" }, "input":{"shape":"ListTrialComponentsRequest"}, - "output":{"shape":"ListTrialComponentsResponse"} + "output":{"shape":"ListTrialComponentsResponse"}, + "errors":[ + {"shape":"ResourceNotFound"} + ] }, "ListTrials":{ "name":"ListTrials", @@ -1114,7 +1129,10 @@ "requestUri":"/" }, "input":{"shape":"ListTrialsRequest"}, - "output":{"shape":"ListTrialsResponse"} + "output":{"shape":"ListTrialsResponse"}, + "errors":[ + {"shape":"ResourceNotFound"} + ] }, "ListUserProfiles":{ "name":"ListUserProfiles", @@ -1407,6 +1425,15 @@ {"shape":"ResourceNotFound"} ] }, + "UpdateWorkforce":{ + "name":"UpdateWorkforce", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateWorkforceRequest"}, + "output":{"shape":"UpdateWorkforceResponse"} + }, "UpdateWorkteam":{ "name":"UpdateWorkteam", "http":{ @@ -2162,6 +2189,16 @@ "LocalPath":{"shape":"DirectoryPath"} } }, + "Cidr":{ + "type":"string", + "max":64, + "min":4, + "pattern":"(^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])(\\/(3[0-2]|[1-2][0-9]|[0-9]))$)|(^s*((([0-9A-Fa-f]{1,4}:){7}([0-9A-Fa-f]{1,4}|:))|(([0-9A-Fa-f]{1,4}:){6}(:[0-9A-Fa-f]{1,4}|((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){5}(((:[0-9A-Fa-f]{1,4}){1,2})|:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3})|:))|(([0-9A-Fa-f]{1,4}:){4}(((:[0-9A-Fa-f]{1,4}){1,3})|((:[0-9A-Fa-f]{1,4})?:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){3}(((:[0-9A-Fa-f]{1,4}){1,4})|((:[0-9A-Fa-f]{1,4}){0,2}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){2}(((:[0-9A-Fa-f]{1,4}){1,5})|((:[0-9A-Fa-f]{1,4}){0,3}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(([0-9A-Fa-f]{1,4}:){1}(((:[0-9A-Fa-f]{1,4}){1,6})|((:[0-9A-Fa-f]{1,4}){0,4}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:))|(:(((:[0-9A-Fa-f]{1,4}){1,7})|((:[0-9A-Fa-f]{1,4}){0,5}:((25[0-5]|2[0-4]d|1dd|[1-9]?d)(.(25[0-5]|2[0-4]d|1dd|[1-9]?d)){3}))|:)))(%.+)?s*(\\/(12[0-8]|1[0-1][0-9]|[1-9][0-9]|[0-9]))$)" + }, + "Cidrs":{ + "type":"list", + "member":{"shape":"Cidr"} + }, "CodeRepositoryArn":{ "type":"string", "max":2048, @@ -4121,6 +4158,20 @@ "UserSettings":{"shape":"UserSettings"} } }, + "DescribeWorkforceRequest":{ + "type":"structure", + "required":["WorkforceName"], + "members":{ + "WorkforceName":{"shape":"WorkforceName"} + } + }, + "DescribeWorkforceResponse":{ + "type":"structure", + "required":["Workforce"], + "members":{ + 
"Workforce":{"shape":"Workforce"} + } + }, "DescribeWorkteamRequest":{ "type":"structure", "required":["WorkteamName"], @@ -4672,6 +4723,7 @@ "type":"string", "enum":[ "TENSORFLOW", + "KERAS", "MXNET", "ONNX", "PYTORCH", @@ -6034,6 +6086,8 @@ "ListTrialComponentsRequest":{ "type":"structure", "members":{ + "ExperimentName":{"shape":"ExperimentEntityName"}, + "TrialName":{"shape":"ExperimentEntityName"}, "SourceArn":{"shape":"String256"}, "CreatedAfter":{"shape":"Timestamp"}, "CreatedBefore":{"shape":"Timestamp"}, @@ -7835,6 +7889,13 @@ "SourceAlgorithms":{"shape":"SourceAlgorithmList"} } }, + "SourceIpConfig":{ + "type":"structure", + "required":["Cidrs"], + "members":{ + "Cidrs":{"shape":"Cidrs"} + } + }, "SourceType":{ "type":"string", "max":128 @@ -8058,7 +8119,7 @@ "TaskAvailabilityLifetimeInSeconds":{ "type":"integer", "max":864000, - "min":1 + "min":60 }, "TaskCount":{ "type":"integer", @@ -8154,6 +8215,12 @@ "ml.m4.4xlarge", "ml.m4.10xlarge", "ml.m4.16xlarge", + "ml.g4dn.xlarge", + "ml.g4dn.2xlarge", + "ml.g4dn.4xlarge", + "ml.g4dn.8xlarge", + "ml.g4dn.12xlarge", + "ml.g4dn.16xlarge", "ml.m5.large", "ml.m5.xlarge", "ml.m5.2xlarge", @@ -8938,6 +9005,21 @@ "UserProfileArn":{"shape":"UserProfileArn"} } }, + "UpdateWorkforceRequest":{ + "type":"structure", + "required":["WorkforceName"], + "members":{ + "WorkforceName":{"shape":"WorkforceName"}, + "SourceIpConfig":{"shape":"SourceIpConfig"} + } + }, + "UpdateWorkforceResponse":{ + "type":"structure", + "required":["Workforce"], + "members":{ + "Workforce":{"shape":"Workforce"} + } + }, "UpdateWorkteamRequest":{ "type":"structure", "required":["WorkteamName"], @@ -9054,6 +9136,30 @@ "max":5, "min":1 }, + "Workforce":{ + "type":"structure", + "required":[ + "WorkforceName", + "WorkforceArn" + ], + "members":{ + "WorkforceName":{"shape":"WorkforceName"}, + "WorkforceArn":{"shape":"WorkforceArn"}, + "LastUpdatedDate":{"shape":"Timestamp"}, + "SourceIpConfig":{"shape":"SourceIpConfig"} + } + }, + "WorkforceArn":{ + "type":"string", + "max":256, + "pattern":"arn:aws[a-z\\-]*:sagemaker:[a-z0-9\\-]*:[0-9]{12}:workforce/.*" + }, + "WorkforceName":{ + "type":"string", + "max":63, + "min":1, + "pattern":"^[a-zA-Z0-9]([a-zA-Z0-9\\-])*$" + }, "Workteam":{ "type":"structure", "required":[ diff --git a/models/apis/sagemaker/2017-07-24/docs-2.json b/models/apis/sagemaker/2017-07-24/docs-2.json index cdec28b8fd1..2dc96ed6642 100644 --- a/models/apis/sagemaker/2017-07-24/docs-2.json +++ b/models/apis/sagemaker/2017-07-24/docs-2.json @@ -28,7 +28,7 @@ "CreateTrainingJob": "Starts a model training job. After training completes, Amazon SageMaker saves the resulting model artifacts to an Amazon S3 location that you specify.
If you choose to host your model using Amazon SageMaker hosting services, you can use the resulting model artifacts as part of the model. You can also use the artifacts in a machine learning service other than Amazon SageMaker, provided that you know how to use them for inferences.
In the request body, you provide the following:
AlgorithmSpecification
- Identifies the training algorithm to use.
HyperParameters
- Specify these algorithm-specific parameters to enable the estimation of model parameters during training. Hyperparameters can be tuned to optimize this learning process. For a list of hyperparameters for each training algorithm provided by Amazon SageMaker, see Algorithms.
InputDataConfig
- Describes the training dataset and the Amazon S3, EFS, or FSx location where it is stored.
OutputDataConfig
- Identifies the Amazon S3 bucket where you want Amazon SageMaker to save the results of model training.
ResourceConfig
- Identifies the resources, ML compute instances, and ML storage volumes to deploy for model training. In distributed training, you specify more than one instance.
EnableManagedSpotTraining
- Optimize the cost of training machine learning models by up to 80% by using Amazon EC2 Spot instances. For more information, see Managed Spot Training.
RoleARN
- The Amazon Resource Name (ARN) that Amazon SageMaker assumes to perform tasks on your behalf during model training. You must grant this role the necessary permissions so that Amazon SageMaker can successfully complete model training.
StoppingCondition
- To help cap training costs, use MaxRuntimeInSeconds
to set a time limit for training. Use MaxWaitTimeInSeconds
to specify how long you are willing to wait for a managed spot training job to complete.
For more information about Amazon SageMaker, see How It Works.
", "CreateTransformJob": "Starts a transform job. A transform job uses a trained model to get inferences on a dataset and saves these results to an Amazon S3 location that you specify.
To perform batch transformations, you create a transform job and use the data that you have readily available.
In the request body, you provide the following:
TransformJobName
- Identifies the transform job. The name must be unique within an AWS Region in an AWS account.
ModelName
- Identifies the model to use. ModelName
must be the name of an existing Amazon SageMaker model in the same AWS Region and AWS account. For information on creating a model, see CreateModel.
TransformInput
- Describes the dataset to be transformed and the Amazon S3 location where it is stored.
TransformOutput
- Identifies the Amazon S3 location where you want Amazon SageMaker to save the results from the transform job.
TransformResources
- Identifies the ML compute instances for the transform job.
For more information about how batch transformation works, see Batch Transform.
", "CreateTrial": "Creates an Amazon SageMaker trial. A trial is a set of steps called trial components that produce a machine learning model. A trial is part of a single Amazon SageMaker experiment.
When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the AWS SDK for Python (Boto), you must use the logging APIs provided by the SDK.
You can add tags to a trial and then use the Search API to search for the tags.
To get a list of all your trials, call the ListTrials API. To view a trial's properties, call the DescribeTrial API. To create a trial component, call the CreateTrialComponent API.
", - "CreateTrialComponent": "Creates a trial component, which is a stage of a machine learning trial. A trial is composed of one or more trial components. A trial component can be used in multiple trials.
Trial components include pre-processing jobs, training jobs, and batch transform jobs.
When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the AWS SDK for Python (Boto), you must use the logging APIs provided by the SDK.
You can add tags to a trial component and then use the Search API to search for the tags.
You can create a trial component through a direct call to the CreateTrialComponent
API. However, you can't specify the Source
property of the component in the request, therefore, the component isn't associated with an Amazon SageMaker job. You must use Amazon SageMaker Studio, the Amazon SageMaker Python SDK, or the AWS SDK for Python (Boto) to create the component with a valid Source
property.
Creates a trial component, which is a stage of a machine learning trial. A trial is composed of one or more trial components. A trial component can be used in multiple trials.
Trial components include pre-processing jobs, training jobs, and batch transform jobs.
When you use Amazon SageMaker Studio or the Amazon SageMaker Python SDK, all experiments, trials, and trial components are automatically tracked, logged, and indexed. When you use the AWS SDK for Python (Boto), you must use the logging APIs provided by the SDK.
You can add tags to a trial component and then use the Search API to search for the tags.
CreateTrialComponent
can only be invoked from within an Amazon SageMaker managed environment. This includes Amazon SageMaker training jobs, processing jobs, transform jobs, and Amazon SageMaker notebooks. A call to CreateTrialComponent
from outside one of these environments results in an error.
Creates a new user profile. A user profile represents a single user within a Domain, and is the main way to reference a \"person\" for the purposes of sharing, reporting and other user-oriented features. This entity is created during on-boarding. If an administrator invites a person by email or imports them from SSO, a new UserProfile is automatically created. This entity is the primary holder of settings for an individual user and has a reference to the user's private Amazon Elastic File System (EFS) home directory.
", "CreateWorkteam": "Creates a new work team for labeling your data. A work team is defined by one or more Amazon Cognito user pools. You must first create the user pools before you can create a work team.
You cannot create more than 25 work teams in an account and region.
", "DeleteAlgorithm": "Removes the specified algorithm from your account.
", @@ -74,6 +74,7 @@ "DescribeTrial": "Provides a list of a trial's properties.
", "DescribeTrialComponent": "Provides a list of a trials component's properties.
", "DescribeUserProfile": "Describes the user profile.
", + "DescribeWorkforce": "Lists private workforce information, including workforce name, Amazon Resource Name (ARN), and, if applicable, allowed IP address ranges (CIDRs). Allowable IP address ranges are the IP addresses that workers can use to access tasks.
This operation applies only to private workforces.
Gets information about a specific work team. You can see information such as the create date, the last updated date, membership information, and the work team's Amazon Resource Name (ARN).
", "DisassociateTrialComponent": "Disassociates a trial component from a trial. This doesn't effect other trials the component is associated with. Before you can delete a component, you must disassociate the component from all trials it is associated with. To associate a trial component with a trial, call the AssociateTrialComponent API.
", "GetSearchSuggestions": "An auto-complete API for the search functionality in the Amazon SageMaker console. It returns suggestions of possible matches for the property name to use in Search
queries. Provides suggestions for HyperParameters
, Tags
, and Metrics
.
Lists training jobs.
", "ListTrainingJobsForHyperParameterTuningJob": "Gets a list of TrainingJobSummary objects that describe the training jobs that a hyperparameter tuning job launched.
", "ListTransformJobs": "Lists transform jobs.
", - "ListTrialComponents": "Lists the trial components in your account. You can filter the list to show only components that were created in a specific time range. You can sort the list by trial component name or creation time.
", + "ListTrialComponents": "Lists the trial components in your account. You can sort the list by trial component name or creation time. You can filter the list to show only components that were created in a specific time range. You can also filter on one of the following:
ExperimentName
SourceArn
TrialName
Lists the trials in your account. Specify an experiment name to limit the list to the trials that are part of that experiment. The list can be filtered to show only trials that were created in a specific time range. The list can be sorted by trial name or creation time.
", "ListUserProfiles": "Lists user profiles.
", "ListWorkteams": "Gets a list of work teams that you have defined in a region. The list may be empty if no work team satisfies the filter specified in the NameContains
parameter.
Renders the UI template so that you can preview the worker's experience.
", - "Search": "Finds Amazon SageMaker resources that match a search query. Matching resource objects are returned as a list of SearchResult
objects in the response. You can sort the search results by any resource property in a ascending or descending order.
You can query against the following value types: numerical, text, Booleans, and timestamps.
", + "Search": "Finds Amazon SageMaker resources that match a search query. Matching resource objects are returned as a list of SearchResult
objects in the response. You can sort the search results by any resource property in ascending or descending order.
You can query against the following value types: numeric, text, Boolean, and timestamp.
", "StartMonitoringSchedule": "Starts a previously stopped monitoring schedule.
New monitoring schedules are immediately started after creation.
Launches an ML compute instance with the latest version of the libraries and attaches your ML storage volume. After configuring the notebook instance, Amazon SageMaker sets the notebook instance status to InService
. A notebook instance's status must be InService
before you can connect to your Jupyter notebook.
A method for forcing the termination of a running job.
", @@ -132,6 +133,7 @@ "UpdateTrial": "Updates the display name of a trial.
", "UpdateTrialComponent": "Updates one or more properties of a trial component.
", "UpdateUserProfile": "Updates a user profile.
", + "UpdateWorkforce": "Restricts access to tasks assigned to workers in the specified workforce to those within specific ranges of IP addresses. You specify allowed IP addresses by creating a list of up to four CIDRs.
By default, a workforce isn't restricted to specific IP addresses. If you specify a range of IP addresses, workers who attempt to access tasks using any IP address outside the specified range are denied access and get a Not Found
error message on the worker portal. After restricting access with this operation, you can see the allowed IP values for a private workforce with the DescribeWorkforce operation.
This operation applies only to private workforces.
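A sketch of UpdateWorkforce restricting a private workforce to two allowed CIDR ranges, using the preview request/Send pattern (the workforce name and CIDRs are placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/sagemaker"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := sagemaker.New(cfg)

	// "default" is a placeholder workforce name; at most four CIDRs are accepted.
	req := svc.UpdateWorkforceRequest(&sagemaker.UpdateWorkforceInput{
		WorkforceName: aws.String("default"),
		SourceIpConfig: &sagemaker.SourceIpConfig{
			Cidrs: []string{"10.0.0.0/16", "203.0.113.0/24"},
		},
	})
	resp, err := req.Send(context.TODO())
	if err != nil {
		log.Fatal(err)
	}
	log.Println("updated workforce:", aws.StringValue(resp.Workforce.WorkforceArn))
}
```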
Updates an existing work team with new member definitions or description.
" }, "shapes": { @@ -594,20 +596,20 @@ "Boolean": { "base": null, "refs": { - "AlgorithmSpecification$EnableSageMakerMetricsTimeSeries": "To generate and save time-series metrics during training, set to true
. The default is false
and time-series metrics aren't generated except in the following cases:
You use one of the Amazon SageMaker built-in algorithms
You use one of the following prebuilt Amazon SageMaker Docker images:
Tensorflow
MXNet
PyTorch
You specify at least one MetricDefinition
To generate and save time-series metrics during training, set to true
. The default is false
and time-series metrics aren't generated except in the following cases:
You use one of the Amazon SageMaker built-in algorithms
You use one of the following Prebuilt Amazon SageMaker Docker Images:
Tensorflow (version >= 1.15)
MXNet (version >= 1.6)
PyTorch (version >= 1.3)
You specify at least one MetricDefinition
Whether to use traffic encryption between the container layers.
", "ChannelSpecification$IsRequired": "Indicates whether the channel is required by the algorithm.
", - "CreateModelInput$EnableNetworkIsolation": "Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
The Semantic Segmentation built-in algorithm does not support network isolation.
Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If you enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
The Semantic Segmentation built-in algorithm does not support network isolation.
Isolates the model container. No inbound or outbound network calls can be made to or from the model container.
", + "CreateTrainingJobRequest$EnableNetworkIsolation": "Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If you enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
", "CreateTrainingJobRequest$EnableInterContainerTrafficEncryption": "To encrypt all communications between ML compute instances in distributed training, choose True
. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training. For more information, see Protect Communications Between ML Compute Instances in a Distributed Training Job.
To train models using managed spot training, choose True
. Managed spot training provides a fully managed and scalable infrastructure for training machine learning models. this option is useful when training jobs can be interrupted and when there is flexibility when the training job is run.
The complete and intermediate results of jobs are stored in an Amazon S3 bucket, and can be used as a starting point to train models incrementally. Amazon SageMaker provides metrics and logs in CloudWatch. They can be used to see when managed spot training jobs are running, interrupted, resumed, or completed.
", - "DescribeModelOutput$EnableNetworkIsolation": "If True
, no inbound or outbound network calls can be made to or from the model container.
The Semantic Segmentation built-in algorithm does not support network isolation.
If you want to allow inbound or outbound network calls, except for calls between peers within a training cluster for distributed training, choose True
. If you enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
The Semantic Segmentation built-in algorithm does not support network isolation.
If True
, no inbound or outbound network calls can be made to or from the model container.
If you want to allow inbound or outbound network calls, except for calls between peers within a training cluster for distributed training, choose True
. If you enable network isolation for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
To encrypt all communications between ML compute instances in distributed training, choose True
. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithms in distributed training.
A Boolean indicating whether managed spot training is enabled (True
) or not (False
).
Indicates whether this hyperparameter is tunable in a hyperparameter tuning job.
", "HyperParameterSpecification$IsRequired": "Indicates whether this hyperparameter is required.
", - "HyperParameterTrainingJobDefinition$EnableNetworkIsolation": "Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If network isolation is used for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
The Semantic Segmentation built-in algorithm does not support network isolation.
Isolates the training container. No inbound or outbound network calls can be made, except for calls between peers within a training cluster for distributed training. If network isolation is used for training jobs that are configured to use a VPC, Amazon SageMaker downloads and uploads customer data and model artifacts through the specified VPC, but the training container does not have network access.
", "HyperParameterTrainingJobDefinition$EnableInterContainerTrafficEncryption": "To encrypt all communications between ML compute instances in distributed training, choose True
. Encryption provides greater security for distributed training, but training might take longer. How long it takes depends on the amount of communication between compute instances, especially if you use a deep learning algorithm in distributed training.
A Boolean indicating whether managed spot training is enabled (True
) or not (False
).
Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
", @@ -776,6 +778,18 @@ "TrainingJob$CheckpointConfig": null } }, + "Cidr": { + "base": null, + "refs": { + "Cidrs$member": null + } + }, + "Cidrs": { + "base": null, + "refs": { + "SourceIpConfig$Cidrs": "A list of one to four Classless Inter-Domain Routing (CIDR) values.
Maximum: 4 CIDR values
The following Length Constraints apply to individual CIDR values in the CIDR value list.
The name of the tensor collection.
" + "CollectionConfiguration$CollectionName": "The name of the tensor collection. The name must be unique relative to other rule configuration names.
" } }, "CollectionParameters": { @@ -1382,7 +1396,7 @@ "DataInputConfig": { "base": null, "refs": { - "InputConfig$DataInputConfig": "Specifies the name and shape of the expected data inputs for your trained model with a JSON dictionary form. The data inputs are InputConfig$Framework specific.
TensorFlow
: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console, {\"input\":[1,1024,1024,3]}
If using the CLI, {\\\"input\\\":[1,1024,1024,3]}
Examples for two inputs:
If using the console, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
If using the CLI, {\\\"data1\\\": [1,28,28,1], \\\"data2\\\":[1,28,28,1]}
MXNET/ONNX
: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console, {\"data\":[1,3,1024,1024]}
If using the CLI, {\\\"data\\\":[1,3,1024,1024]}
Examples for two inputs:
If using the console, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
If using the CLI, {\\\"var1\\\": [1,1,28,28], \\\"var2\\\":[1,1,28,28]}
PyTorch
: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.
Examples for one input in dictionary format:
If using the console, {\"input0\":[1,3,224,224]}
If using the CLI, {\\\"input0\\\":[1,3,224,224]}
Example for one input in list format: [[1,3,224,224]]
Examples for two inputs in dictionary format:
If using the console, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}
If using the CLI, {\\\"input0\\\":[1,3,224,224], \\\"input1\\\":[1,3,224,224]}
Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
XGBOOST
: input data name and shape are not needed.
Specifies the name and shape of the expected data inputs for your trained model with a JSON dictionary form. The data inputs are InputConfig$Framework specific.
TensorFlow
: You must specify the name and shape (NHWC format) of the expected data inputs using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console, {\"input\":[1,1024,1024,3]}
If using the CLI, {\\\"input\\\":[1,1024,1024,3]}
Examples for two inputs:
If using the console, {\"data1\": [1,28,28,1], \"data2\":[1,28,28,1]}
If using the CLI, {\\\"data1\\\": [1,28,28,1], \\\"data2\\\":[1,28,28,1]}
KERAS
: You must specify the name and shape (NCHW format) of expected data inputs using a dictionary format for your trained model. Note that while Keras model artifacts should be uploaded in NHWC (channel-last) format, DataInputConfig
should be specified in NCHW (channel-first) format. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console, {\"input_1\":[1,3,224,224]}
If using the CLI, {\\\"input_1\\\":[1,3,224,224]}
Examples for two inputs:
If using the console, {\"input_1\": [1,3,224,224], \"input_2\":[1,3,224,224]}
If using the CLI, {\\\"input_1\\\": [1,3,224,224], \\\"input_2\\\":[1,3,224,224]}
MXNET/ONNX
: You must specify the name and shape (NCHW format) of the expected data inputs in order using a dictionary format for your trained model. The dictionary formats required for the console and CLI are different.
Examples for one input:
If using the console, {\"data\":[1,3,1024,1024]}
If using the CLI, {\\\"data\\\":[1,3,1024,1024]}
Examples for two inputs:
If using the console, {\"var1\": [1,1,28,28], \"var2\":[1,1,28,28]}
If using the CLI, {\\\"var1\\\": [1,1,28,28], \\\"var2\\\":[1,1,28,28]}
PyTorch
: You can either specify the name and shape (NCHW format) of expected data inputs in order using a dictionary format for your trained model or you can specify the shape only using a list format. The dictionary formats required for the console and CLI are different. The list formats for the console and CLI are the same.
Examples for one input in dictionary format:
If using the console, {\"input0\":[1,3,224,224]}
If using the CLI, {\\\"input0\\\":[1,3,224,224]}
Example for one input in list format: [[1,3,224,224]]
Examples for two inputs in dictionary format:
If using the console, {\"input0\":[1,3,224,224], \"input1\":[1,3,224,224]}
If using the CLI, {\\\"input0\\\":[1,3,224,224], \\\"input1\\\":[1,3,224,224]}
Example for two inputs in list format: [[1,3,224,224], [1,3,224,224]]
XGBOOST
: input data name and shape are not needed.
(Optional) The local directory where checkpoints are written. The default directory is /opt/ml/checkpoints/
.
Path to local storage location for tensors. Defaults to /opt/ml/output/tensors/
.
Path to local storage location for rules. Defaults to /opt/ml/processing/output/rule/
.
Path to local storage location for output of rules. Defaults to /opt/ml/processing/output/rule/
.
The full path to the directory to associate with the channel.
", "TensorBoardOutputConfig$LocalPath": "Path to local storage location for tensorBoard output. Defaults to /opt/ml/output/tensorboard
.
The name of the experiment as displayed. If DisplayName
isn't specified, ExperimentName
is displayed.
The name of the experiment.
", "ExperimentSummary$DisplayName": "The name of the experiment as displayed. If DisplayName
isn't specified, ExperimentName
is displayed.
A filter that returns only components that are part of the specified experiment. If you specify ExperimentName
, you can't filter by SourceArn
or TrialName
.
A filter that returns only components that are part of the specified trial. If you specify TrialName
, you can't filter by ExperimentName
or SourceArn
.
A filter that returns only trials that are part of the specified experiment.
", "Parent$TrialName": "The name of the trial.
", "Parent$ExperimentName": "The name of the experiment.
", @@ -2546,7 +2572,7 @@ } }, "HumanLoopActivationConditionsConfig": { - "base": "Defines under what conditions SageMaker creates a human loop.
", + "base": "Defines under what conditions SageMaker creates a human loop. Used within .
", "refs": { "HumanLoopActivationConfig$HumanLoopActivationConditionsConfig": "Container structure for defining under what conditions SageMaker creates a human loop.
" } @@ -2917,7 +2943,7 @@ "AutoMLOutputDataConfig$KmsKeyId": "The AWS KMS encryption key ID.
", "AutoMLSecurityConfig$VolumeKmsKeyId": "The key used to encrypt stored data.
", "CreateDomainRequest$HomeEfsFileSystemKmsKeyId": "The AWS Key Management Service encryption key ID.
", - "CreateEndpointConfigInput$KmsKeyId": "The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a KmsKeyId
when using an instance type with local storage. If any of the models that you specify in the ProductionVariants
parameter use nitro-based instances with local storage, do not specify a value for the KmsKeyId
parameter. If you specify a value for KmsKeyId
when using any nitro-based instances with local storage, the call to CreateEndpointConfig
fails.
For a list of instance types that support local instance storage, see Instance Store Volumes.
For more information about local instance storage encryption, see SSD Instance Store Volumes.
The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance that hosts the endpoint.
The KmsKeyId can be any of the following formats:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
The KMS key policy must grant permission to the IAM role that you specify in your CreateEndpoint
, UpdateEndpoint
requests. For more information, refer to the AWS Key Management Service section Using Key Policies in AWS KMS
Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a KmsKeyId
when using an instance type with local storage. If any of the models that you specify in the ProductionVariants
parameter use nitro-based instances with local storage, do not specify a value for the KmsKeyId
parameter. If you specify a value for KmsKeyId
when using any nitro-based instances with local storage, the call to CreateEndpointConfig
fails.
For a list of instance types that support local instance storage, see Instance Store Volumes.
For more information about local instance storage encryption, see SSD Instance Store Volumes.
The Amazon Resource Name (ARN) of a AWS Key Management Service key that Amazon SageMaker uses to encrypt data on the storage volume attached to your notebook instance. The KMS key you provide must be enabled. For information, see Enabling and Disabling Keys in the AWS Key Management Service Developer Guide.
", "DataCaptureConfig$KmsKeyId": "", "DataCaptureConfigSummary$KmsKeyId": "", @@ -2934,8 +2960,8 @@ "ProcessingOutputConfig$KmsKeyId": "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the processing job output. KmsKeyId
can be an ID of a KMS key, ARN of a KMS key, alias of a KMS key, or alias of a KMS key. The KmsKeyId
is applied to all outputs.
The AWS KMS key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the training job.
Certain Nitro-based instances include local storage, dependent on the instance type. Local storage volumes are encrypted using a hardware module on the instance. You can't request a VolumeKmsKeyId
when using an instance type with local storage.
For a list of instance types that support local instance storage, see Instance Store Volumes.
For more information about local instance storage encryption, see SSD Instance Store Volumes.
The VolumeKmsKeyId
can be in any of the following formats:
// KMS Key ID
\"1234abcd-12ab-34cd-56ef-1234567890ab\"
// Amazon Resource Name (ARN) of a KMS Key
\"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
The AWS Key Management Service encryption key ID.
", - "TransformOutput$KmsKeyId": "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId
can be any of the following formats:
// KMS Key ID
\"1234abcd-12ab-34cd-56ef-1234567890ab\"
// Amazon Resource Name (ARN) of a KMS Key
\"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
// KMS Key Alias
\"alias/ExampleAlias\"
// Amazon Resource Name (ARN) of a KMS Key Alias
\"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias\"
If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.
The KMS key policy must grant permission to the IAM role that you specify in your CreateModel request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.
", - "TransformResources$VolumeKmsKeyId": "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt model data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeKmsKeyId
can be any of the following formats:
// KMS Key ID
\"1234abcd-12ab-34cd-56ef-1234567890ab\"
// Amazon Resource Name (ARN) of a KMS Key
\"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab\"
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption. The KmsKeyId
can be any of the following formats:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
If you don't provide a KMS key ID, Amazon SageMaker uses the default KMS key for Amazon S3 for your role's account. For more information, see KMS-Managed Encryption Keys in the Amazon Simple Storage Service Developer Guide.
The KMS key policy must grant permission to the IAM role that you specify in your CreateModel request. For more information, see Using Key Policies in AWS KMS in the AWS Key Management Service Developer Guide.
", + "TransformResources$VolumeKmsKeyId": "The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt model data on the storage volume attached to the ML compute instance(s) that run the batch transform job. The VolumeKmsKeyId
can be any of the following formats:
Key ID: 1234abcd-12ab-34cd-56ef-1234567890ab
Key ARN: arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab
Alias name: alias/ExampleAlias
Alias name ARN: arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
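
The four identifier formats listed above can be told apart mechanically. A minimal standalone Go sketch (a hypothetical helper for illustration, not part of the SDK or the SageMaker API):

```go
// classifyKmsKeyId distinguishes the four documented KmsKeyId formats.
// Hypothetical helper; the format list comes from the doc strings above.
package main

import (
	"fmt"
	"strings"
)

func classifyKmsKeyId(id string) string {
	switch {
	case strings.HasPrefix(id, "arn:") && strings.Contains(id, ":alias/"):
		return "Alias name ARN" // arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias
	case strings.HasPrefix(id, "arn:") && strings.Contains(id, ":key/"):
		return "Key ARN" // arn:aws:kms:us-west-2:111122223333:key/1234abcd-...
	case strings.HasPrefix(id, "alias/"):
		return "Alias name" // alias/ExampleAlias
	default:
		return "Key ID" // 1234abcd-12ab-34cd-56ef-1234567890ab
	}
}

func main() {
	for _, id := range []string{
		"1234abcd-12ab-34cd-56ef-1234567890ab",
		"arn:aws:kms:us-west-2:111122223333:key/1234abcd-12ab-34cd-56ef-1234567890ab",
		"alias/ExampleAlias",
		"arn:aws:kms:us-west-2:111122223333:alias/ExampleAlias",
	} {
		fmt.Printf("%-80s => %s\n", id, classifyKmsKeyId(id))
	}
}
```
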
Defines the maximum number of data objects that can be labeled by human workers at the same time. Also referred to as batch size. Each object may have more than one worker at one time.
" + "HumanTaskConfig$MaxConcurrentTaskCount": "Defines the maximum number of data objects that can be labeled by human workers at the same time. Also referred to as batch size. Each object may have more than one worker at one time. The default value is 1000 objects.
" } }, "MaxConcurrentTransforms": { @@ -3533,7 +3559,7 @@ "ListDomainsRequest$MaxResults": "Returns a list up to a specified limit.
", "ListEndpointConfigsInput$MaxResults": "The maximum number of training jobs to return in the response.
", "ListEndpointsInput$MaxResults": "The maximum number of endpoints to return in the response.
", - "ListExperimentsRequest$MaxResults": "The maximum number of experiments to return in the response.
", + "ListExperimentsRequest$MaxResults": "The maximum number of experiments to return in the response. The default value is 10.
", "ListFlowDefinitionsRequest$MaxResults": "The total number of items to return. If the total number of available items is more than the value specified in MaxResults
, then a NextToken
will be provided in the output that you can use to resume pagination.
The total number of items to return. If the total number of available items is more than the value specified in MaxResults
, then a NextToken
will be provided in the output that you can use to resume pagination.
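
The MaxResults/NextToken contract described above is driven by feeding each response's token into the next request. A hedged Go sketch; the config loader, the Request/Send pattern, and the response field names are assumptions based on this SDK's v0.x conventions, not verified against the generated code:

```go
// Hedged sketch only: drains a paginated SageMaker list call by passing
// each response's NextToken into the next request. Client surface assumed
// from this SDK's v0.x conventions.
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/sagemaker"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := sagemaker.New(cfg)

	var token *string
	for {
		req := svc.ListExperimentsRequest(&sagemaker.ListExperimentsInput{
			MaxResults: aws.Int64(10), // the documented default
			NextToken:  token,
		})
		resp, err := req.Send(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		for _, e := range resp.ExperimentSummaries {
			if e.ExperimentName != nil {
				log.Println(*e.ExperimentName)
			}
		}
		if resp.NextToken == nil {
			break // no NextToken means the listing is complete
		}
		token = resp.NextToken
	}
}
```
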
The maximum number of tuning jobs to return. The default value is 10.
", @@ -3550,8 +3576,8 @@ "ListTrainingJobsForHyperParameterTuningJobRequest$MaxResults": "The maximum number of training jobs to return. The default value is 10.
", "ListTrainingJobsRequest$MaxResults": "The maximum number of training jobs to return in the response.
", "ListTransformJobsRequest$MaxResults": "The maximum number of transform jobs to return in the response. The default value is 10
.
The maximum number of components to return in the response.
", - "ListTrialsRequest$MaxResults": "The maximum number of trials to return in the response.
", + "ListTrialComponentsRequest$MaxResults": "The maximum number of components to return in the response. The default value is 10.
", + "ListTrialsRequest$MaxResults": "The maximum number of trials to return in the response. The default value is 10.
", "ListUserProfilesRequest$MaxResults": "Returns a list up to a specified limit.
", "ListWorkteamsRequest$MaxResults": "The maximum number of work teams to return in each page of the response.
", "SearchRequest$MaxResults": "The maximum number of results to return in a SearchResponse
.
The size, in GB, of the ML storage volume attached to the notebook instance.
" + "DebugRuleConfiguration$VolumeSizeInGB": "The size, in GB, of the ML storage volume attached to the processing instance.
" } }, "OrderKey": { @@ -4680,13 +4706,13 @@ "PropertyNameHint": { "base": null, "refs": { - "PropertyNameQuery$PropertyNameHint": "Text that is part of a property's name. The property names of hyperparameter, metric, and tag key names that begin with the specified text in the PropertyNameHint
.
Text that begins a property's name.
" } }, "PropertyNameQuery": { - "base": "A type of SuggestionQuery
. A suggestion query for retrieving property names that match the specified hint.
Part of the SuggestionQuery
type. Specifies a hint for retrieving property names that begin with the specified text.
A type of SuggestionQuery
. Defines a property name hint. Only property names that match the specified hint are included in the response.
Defines a property name hint. Only property names that begin with the specified hint are included in the response.
" } }, "PropertyNameSuggestion": { @@ -4804,7 +4830,7 @@ "base": null, "refs": { "Filter$Name": "A property name. For example, TrainingJobName
. For the list of valid property names returned in a search result for each supported resource, see TrainingJob properties. You must specify a valid property name for the resource.
The name of the property to use in the nested filters. The value must match a listed property name, such as InputDataConfig
.
The name of the property to use in the nested filters. The value must match a listed property name, such as InputDataConfig
.
A suggested property name based on what you entered in the search textbox in the Amazon SageMaker console.
", "SearchRequest$SortBy": "The name of the resource property used to sort the SearchResults
. The default is LastModifiedTime
.
The name of the Amazon SageMaker resource to Search for. The only valid Resource
value is TrainingJob
.
The name of the Amazon SageMaker resource to search for. Currently, the only valid Resource
value is TrainingJob
.
The name of the Amazon SageMaker resource to Search for.
", + "SearchRequest$Resource": "The name of the Amazon SageMaker resource to search for.
" } }, "ResponseMIMEType": { @@ -5088,7 +5114,7 @@ } }, "ShuffleConfig": { - "base": "A configuration for a shuffle option for input data in a channel. If you use S3Prefix
for S3DataType
, the results of the S3 key prefix matches are shuffled. If you use ManifestFile
, the order of the S3 object references in the ManifestFile
is shuffled. If you use AugmentedManifestFile
, the order of the JSON lines in the AugmentedManifestFile
is shuffled. The shuffling order is determined using the Seed
value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, and it helps reduce bias and possible overfitting. In a multi-node training job when ShuffleConfig
is combined with S3DataDistributionType
of ShardedByS3Key
, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
A configuration for a shuffle option for input data in a channel. If you use S3Prefix
for S3DataType
, the results of the S3 key prefix matches are shuffled. If you use ManifestFile
, the order of the S3 object references in the ManifestFile
is shuffled. If you use AugmentedManifestFile
, the order of the JSON lines in the AugmentedManifestFile
is shuffled. The shuffling order is determined using the Seed
value.
For Pipe input mode, when ShuffleConfig
is specified shuffling is done at the start of every epoch. With large datasets, this ensures that the order of the training data is different for each epoch, and it helps reduce bias and possible overfitting. In a multi-node training job when ShuffleConfig
is combined with S3DataDistributionType
of ShardedByS3Key
, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
A configuration for a shuffle option for input data in a channel. If you use S3Prefix
for S3DataType
, this shuffles the results of the S3 key prefix matches. If you use ManifestFile
, the order of the S3 object references in the ManifestFile
is shuffled. If you use AugmentedManifestFile
, the order of the JSON lines in the AugmentedManifestFile
is shuffled. The shuffling order is determined using the Seed
value.
For Pipe input mode, shuffling is done at the start of every epoch. With large datasets this ensures that the order of the training data is different for each epoch, it helps reduce bias and possible overfitting. In a multi-node training job when ShuffleConfig is combined with S3DataDistributionType
of ShardedByS3Key
, the data is shuffled across nodes so that the content sent to a particular node on the first epoch might be sent to a different node on the second epoch.
Details about the algorithm that was used to create the model package.
" } }, + "SourceIpConfig": { + "base": "A list of IP address ranges (CIDRs). Used to create an allow list of IP addresses for a private workforce. For more information, see .
", + "refs": { + "UpdateWorkforceRequest$SourceIpConfig": "A list of one to four worker IP address ranges (CIDRs) that can be used to access tasks assigned to this workforce.
Maximum: 4 CIDR values
", + "Workforce$SourceIpConfig": "A list of one to four IP address ranges (CIDRs) to be added to the workforce allow list.
" + } + }, "SourceType": { "base": null, "refs": { @@ -5308,7 +5341,7 @@ "CreateUserProfileRequest$SingleSignOnUserValue": "The username of the associated AWS Single Sign-On User for this UserProfile. If the Domain's AuthMode is SSO, this field is required, and must match a valid username of a user in your directory. If the Domain's AuthMode is not SSO, this field cannot be specified.
", "DescribeDomainResponse$SingleSignOnManagedApplicationInstanceId": "The SSO managed application instance ID.
", "DescribeUserProfileResponse$SingleSignOnUserValue": "The SSO user value.
", - "ListTrialComponentsRequest$SourceArn": "A filter that returns only components that have the specified source Amazon Resource Name (ARN).
" + "ListTrialComponentsRequest$SourceArn": "A filter that returns only components that have the specified source Amazon Resource Name (ARN). If you specify SourceArn
, you can't filter by ExperimentName
or TrialName
.
Limits the property names that are included in the response.
", + "base": "Specified in the GetSearchSuggestions request. Limits the property names that are included in the response.
", "refs": { "GetSearchSuggestionsRequest$SuggestionQuery": "Limits the property names that are included in the response.
" } @@ -5438,7 +5471,7 @@ "TaskAvailabilityLifetimeInSeconds": { "base": null, "refs": { - "HumanTaskConfig$TaskAvailabilityLifetimeInSeconds": "The length of time that a task remains available for labeling by human workers. If you choose the Amazon Mechanical Turk workforce, the maximum is 12 hours (43200). For private and vendor workforces, the maximum is as listed.
" + "HumanTaskConfig$TaskAvailabilityLifetimeInSeconds": "The length of time that a task remains available for labeling by human workers. If you choose the Amazon Mechanical Turk workforce, the maximum is 12 hours (43200). The default value is 864000 seconds (1 day). For private and vendor workforces, the maximum is as listed.
" } }, "TaskCount": { @@ -5690,6 +5723,7 @@ "TrialSummary$LastModifiedTime": "When the trial was last modified.
", "UpdateTrialComponentRequest$StartTime": "When the component started.
", "UpdateTrialComponentRequest$EndTime": "When the component ended.
", + "Workforce$LastUpdatedDate": "The most recent date that was used to successfully add one or more IP address ranges (CIDRs) to a private workforce's allow list.
", "Workteam$CreateDate": "The date and time that the work team was created (timestamp).
", "Workteam$LastUpdatedDate": "The date and time that the work team was last updated (timestamp).
" } @@ -6286,6 +6320,16 @@ "refs": { } }, + "UpdateWorkforceRequest": { + "base": null, + "refs": { + } + }, + "UpdateWorkforceResponse": { + "base": null, + "refs": { + } + }, "UpdateWorkteamRequest": { "base": null, "refs": { @@ -6437,6 +6481,27 @@ "VpcConfig$SecurityGroupIds": "The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets
field.
A single private workforce, which is automatically created when you create your first private work team. You can create one private work force in each AWS Region. By default, any workforce related API operation used in a specific region will apply to the workforce created in that region. To learn how to create a private workforce, see Create a Private Workforce.
", + "refs": { + "DescribeWorkforceResponse$Workforce": "A single private workforce, which is automatically created when you create your first private work team. You can create one private work force in each AWS Region. By default, any workforce related API operation used in a specific region will apply to the workforce created in that region. To learn how to create a private workforce, see Create a Private Workforce.
", + "UpdateWorkforceResponse$Workforce": "A single private workforce, which is automatically created when you create your first private work team. You can create one private work force in each AWS Region. By default, any workforce related API operation used in a specific region will apply to the workforce created in that region. To learn how to create a private workforce, see Create a Private Workforce.
" + } + }, + "WorkforceArn": { + "base": null, + "refs": { + "Workforce$WorkforceArn": "The Amazon Resource Name (ARN) of the private workforce.
" + } + }, + "WorkforceName": { + "base": null, + "refs": { + "DescribeWorkforceRequest$WorkforceName": "The name of the private workforce whose access you want to restrict. WorkforceName
is automatically set to \"default\"
when a workforce is created and cannot be modified.
The name of the private workforce whose access you want to restrict. WorkforceName
is automatically set to \"default\"
when a workforce is created and cannot be modified.
The name of the private workforce whose access you want to restrict. WorkforceName
is automatically set to \"default\"
when a workforce is created and cannot be modified.
Provides details about a labeling work team.
", "refs": { diff --git a/models/apis/securityhub/2018-10-26/api-2.json b/models/apis/securityhub/2018-10-26/api-2.json index bba7fce4303..389cc263a27 100644 --- a/models/apis/securityhub/2018-10-26/api-2.json +++ b/models/apis/securityhub/2018-10-26/api-2.json @@ -245,6 +245,21 @@ {"shape":"InvalidInputException"} ] }, + "DescribeStandardsControls":{ + "name":"DescribeStandardsControls", + "http":{ + "method":"GET", + "requestUri":"/standards/controls/{StandardsSubscriptionArn+}" + }, + "input":{"shape":"DescribeStandardsControlsRequest"}, + "output":{"shape":"DescribeStandardsControlsResponse"}, + "errors":[ + {"shape":"InternalException"}, + {"shape":"InvalidInputException"}, + {"shape":"InvalidAccessException"}, + {"shape":"ResourceNotFoundException"} + ] + }, "DisableImportFindingsForProduct":{ "name":"DisableImportFindingsForProduct", "http":{ @@ -598,6 +613,21 @@ {"shape":"LimitExceededException"}, {"shape":"ResourceNotFoundException"} ] + }, + "UpdateStandardsControl":{ + "name":"UpdateStandardsControl", + "http":{ + "method":"PATCH", + "requestUri":"/standards/control/{StandardsControlArn+}" + }, + "input":{"shape":"UpdateStandardsControlRequest"}, + "output":{"shape":"UpdateStandardsControlResponse"}, + "errors":[ + {"shape":"InternalException"}, + {"shape":"InvalidInputException"}, + {"shape":"InvalidAccessException"}, + {"shape":"ResourceNotFoundException"} + ] } }, "shapes":{ @@ -663,6 +693,56 @@ "type":"list", "member":{"shape":"NonEmptyString"} }, + "AvailabilityZone":{ + "type":"structure", + "members":{ + "ZoneName":{"shape":"NonEmptyString"}, + "SubnetId":{"shape":"NonEmptyString"} + } + }, + "AvailabilityZones":{ + "type":"list", + "member":{"shape":"AvailabilityZone"} + }, + "AwsCloudFrontDistributionDetails":{ + "type":"structure", + "members":{ + "DomainName":{"shape":"NonEmptyString"}, + "ETag":{"shape":"NonEmptyString"}, + "LastModifiedTime":{"shape":"NonEmptyString"}, + "Logging":{"shape":"AwsCloudFrontDistributionLogging"}, + "Origins":{"shape":"AwsCloudFrontDistributionOrigins"}, + "Status":{"shape":"NonEmptyString"}, + "WebAclId":{"shape":"NonEmptyString"} + } + }, + "AwsCloudFrontDistributionLogging":{ + "type":"structure", + "members":{ + "Bucket":{"shape":"NonEmptyString"}, + "Enabled":{"shape":"Boolean"}, + "IncludeCookies":{"shape":"Boolean"}, + "Prefix":{"shape":"NonEmptyString"} + } + }, + "AwsCloudFrontDistributionOriginItem":{ + "type":"structure", + "members":{ + "DomainName":{"shape":"NonEmptyString"}, + "Id":{"shape":"NonEmptyString"}, + "OriginPath":{"shape":"NonEmptyString"} + } + }, + "AwsCloudFrontDistributionOriginItemList":{ + "type":"list", + "member":{"shape":"AwsCloudFrontDistributionOriginItem"} + }, + "AwsCloudFrontDistributionOrigins":{ + "type":"structure", + "members":{ + "Items":{"shape":"AwsCloudFrontDistributionOriginItemList"} + } + }, "AwsEc2InstanceDetails":{ "type":"structure", "members":{ @@ -677,12 +757,34 @@ "LaunchedAt":{"shape":"NonEmptyString"} } }, + "AwsElbv2LoadBalancerDetails":{ + "type":"structure", + "members":{ + "AvailabilityZones":{"shape":"AvailabilityZones"}, + "CanonicalHostedZoneId":{"shape":"NonEmptyString"}, + "CreatedTime":{"shape":"NonEmptyString"}, + "DNSName":{"shape":"NonEmptyString"}, + "IpAddressType":{"shape":"NonEmptyString"}, + "Scheme":{"shape":"NonEmptyString"}, + "SecurityGroups":{"shape":"SecurityGroups"}, + "State":{"shape":"LoadBalancerState"}, + "Type":{"shape":"NonEmptyString"}, + "VpcId":{"shape":"NonEmptyString"} + } + }, "AwsIamAccessKeyDetails":{ "type":"structure", "members":{ - 
"UserName":{"shape":"NonEmptyString"}, + "UserName":{ + "shape":"NonEmptyString", + "deprecated":true, + "deprecatedMessage":"This field is deprecated, use PrincipalName instead." + }, "Status":{"shape":"AwsIamAccessKeyStatus"}, - "CreatedAt":{"shape":"NonEmptyString"} + "CreatedAt":{"shape":"NonEmptyString"}, + "PrincipalId":{"shape":"NonEmptyString"}, + "PrincipalType":{"shape":"NonEmptyString"}, + "PrincipalName":{"shape":"NonEmptyString"} } }, "AwsIamAccessKeyStatus":{ @@ -692,6 +794,111 @@ "Inactive" ] }, + "AwsIamRoleAssumeRolePolicyDocument":{ + "type":"string", + "max":131072, + "min":1, + "pattern":"[\\u0009\\u000A\\u000D\\u0020-\\u007E\\u00A1-\\u00FF]+" + }, + "AwsIamRoleDetails":{ + "type":"structure", + "members":{ + "AssumeRolePolicyDocument":{"shape":"AwsIamRoleAssumeRolePolicyDocument"}, + "CreateDate":{"shape":"NonEmptyString"}, + "RoleId":{"shape":"NonEmptyString"}, + "RoleName":{"shape":"NonEmptyString"}, + "MaxSessionDuration":{"shape":"Integer"}, + "Path":{"shape":"NonEmptyString"} + } + }, + "AwsKmsKeyDetails":{ + "type":"structure", + "members":{ + "AWSAccountId":{"shape":"NonEmptyString"}, + "CreationDate":{"shape":"Double"}, + "KeyId":{"shape":"NonEmptyString"}, + "KeyManager":{"shape":"NonEmptyString"}, + "KeyState":{"shape":"NonEmptyString"}, + "Origin":{"shape":"NonEmptyString"} + } + }, + "AwsLambdaFunctionCode":{ + "type":"structure", + "members":{ + "S3Bucket":{"shape":"NonEmptyString"}, + "S3Key":{"shape":"NonEmptyString"}, + "S3ObjectVersion":{"shape":"NonEmptyString"}, + "ZipFile":{"shape":"NonEmptyString"} + } + }, + "AwsLambdaFunctionDeadLetterConfig":{ + "type":"structure", + "members":{ + "TargetArn":{"shape":"NonEmptyString"} + } + }, + "AwsLambdaFunctionDetails":{ + "type":"structure", + "members":{ + "Code":{"shape":"AwsLambdaFunctionCode"}, + "CodeSha256":{"shape":"NonEmptyString"}, + "DeadLetterConfig":{"shape":"AwsLambdaFunctionDeadLetterConfig"}, + "Environment":{"shape":"AwsLambdaFunctionEnvironment"}, + "FunctionName":{"shape":"NonEmptyString"}, + "Handler":{"shape":"NonEmptyString"}, + "KmsKeyArn":{"shape":"NonEmptyString"}, + "LastModified":{"shape":"NonEmptyString"}, + "Layers":{"shape":"AwsLambdaFunctionLayerList"}, + "MasterArn":{"shape":"NonEmptyString"}, + "MemorySize":{"shape":"Integer"}, + "RevisionId":{"shape":"NonEmptyString"}, + "Role":{"shape":"NonEmptyString"}, + "Runtime":{"shape":"NonEmptyString"}, + "Timeout":{"shape":"Integer"}, + "TracingConfig":{"shape":"AwsLambdaFunctionTracingConfig"}, + "VpcConfig":{"shape":"AwsLambdaFunctionVpcConfig"}, + "Version":{"shape":"NonEmptyString"} + } + }, + "AwsLambdaFunctionEnvironment":{ + "type":"structure", + "members":{ + "Variables":{"shape":"FieldMap"}, + "Error":{"shape":"AwsLambdaFunctionEnvironmentError"} + } + }, + "AwsLambdaFunctionEnvironmentError":{ + "type":"structure", + "members":{ + "ErrorCode":{"shape":"NonEmptyString"}, + "Message":{"shape":"NonEmptyString"} + } + }, + "AwsLambdaFunctionLayer":{ + "type":"structure", + "members":{ + "Arn":{"shape":"NonEmptyString"}, + "CodeSize":{"shape":"Integer"} + } + }, + "AwsLambdaFunctionLayerList":{ + "type":"list", + "member":{"shape":"AwsLambdaFunctionLayer"} + }, + "AwsLambdaFunctionTracingConfig":{ + "type":"structure", + "members":{ + "Mode":{"shape":"NonEmptyString"} + } + }, + "AwsLambdaFunctionVpcConfig":{ + "type":"structure", + "members":{ + "SecurityGroupIds":{"shape":"NonEmptyStringList"}, + "SubnetIds":{"shape":"NonEmptyStringList"}, + "VpcId":{"shape":"NonEmptyString"} + } + }, "AwsS3BucketDetails":{ 
"type":"structure", "members":{ @@ -840,6 +1047,35 @@ "type":"list", "member":{"shape":"AwsSecurityFinding"} }, + "AwsSnsTopicDetails":{ + "type":"structure", + "members":{ + "KmsMasterKeyId":{"shape":"NonEmptyString"}, + "Subscription":{"shape":"AwsSnsTopicSubscriptionList"}, + "TopicName":{"shape":"NonEmptyString"}, + "Owner":{"shape":"NonEmptyString"} + } + }, + "AwsSnsTopicSubscription":{ + "type":"structure", + "members":{ + "Endpoint":{"shape":"NonEmptyString"}, + "Protocol":{"shape":"NonEmptyString"} + } + }, + "AwsSnsTopicSubscriptionList":{ + "type":"list", + "member":{"shape":"AwsSnsTopicSubscription"} + }, + "AwsSqsQueueDetails":{ + "type":"structure", + "members":{ + "KmsDataKeyReusePeriodSeconds":{"shape":"Integer"}, + "KmsMasterKeyId":{"shape":"NonEmptyString"}, + "QueueName":{"shape":"NonEmptyString"}, + "DeadLetterTargetArn":{"shape":"NonEmptyString"} + } + }, "BatchDisableStandardsRequest":{ "type":"structure", "required":["StandardsSubscriptionArns"], @@ -914,6 +1150,13 @@ "LaunchedAt":{"shape":"NonEmptyString"} } }, + "ControlStatus":{ + "type":"string", + "enum":[ + "ENABLED", + "DISABLED" + ] + }, "CreateActionTargetRequest":{ "type":"structure", "required":[ @@ -1119,6 +1362,34 @@ "NextToken":{"shape":"NextToken"} } }, + "DescribeStandardsControlsRequest":{ + "type":"structure", + "required":["StandardsSubscriptionArn"], + "members":{ + "StandardsSubscriptionArn":{ + "shape":"NonEmptyString", + "location":"uri", + "locationName":"StandardsSubscriptionArn" + }, + "NextToken":{ + "shape":"NextToken", + "location":"querystring", + "locationName":"NextToken" + }, + "MaxResults":{ + "shape":"MaxResults", + "location":"querystring", + "locationName":"MaxResults" + } + } + }, + "DescribeStandardsControlsResponse":{ + "type":"structure", + "members":{ + "Controls":{"shape":"StandardsControls"}, + "NextToken":{"shape":"NextToken"} + } + }, "DisableImportFindingsForProductRequest":{ "type":"structure", "required":["ProductSubscriptionArn"], @@ -1532,6 +1803,13 @@ "Tags":{"shape":"TagMap"} } }, + "LoadBalancerState":{ + "type":"structure", + "members":{ + "Code":{"shape":"NonEmptyString"}, + "Reason":{"shape":"NonEmptyString"} + } + }, "Malware":{ "type":"structure", "required":["Name"], @@ -1638,6 +1916,10 @@ "type":"string", "pattern":".*\\S.*" }, + "NonEmptyStringList":{ + "type":"list", + "member":{"shape":"NonEmptyString"} + }, "Note":{ "type":"structure", "required":[ @@ -1781,9 +2063,16 @@ "ResourceDetails":{ "type":"structure", "members":{ + "AwsCloudFrontDistribution":{"shape":"AwsCloudFrontDistributionDetails"}, "AwsEc2Instance":{"shape":"AwsEc2InstanceDetails"}, + "AwsElbv2LoadBalancer":{"shape":"AwsElbv2LoadBalancerDetails"}, "AwsS3Bucket":{"shape":"AwsS3BucketDetails"}, "AwsIamAccessKey":{"shape":"AwsIamAccessKeyDetails"}, + "AwsIamRole":{"shape":"AwsIamRoleDetails"}, + "AwsKmsKey":{"shape":"AwsKmsKeyDetails"}, + "AwsLambdaFunction":{"shape":"AwsLambdaFunctionDetails"}, + "AwsSnsTopic":{"shape":"AwsSnsTopicDetails"}, + "AwsSqsQueue":{"shape":"AwsSqsQueueDetails"}, "Container":{"shape":"ContainerDetails"}, "Other":{"shape":"FieldMap"} } @@ -1812,6 +2101,10 @@ "type":"list", "member":{"shape":"Result"} }, + "SecurityGroups":{ + "type":"list", + "member":{"shape":"NonEmptyString"} + }, "Severity":{ "type":"structure", "required":["Normalized"], @@ -1820,6 +2113,15 @@ "Normalized":{"shape":"Integer"} } }, + "SeverityRating":{ + "type":"string", + "enum":[ + "LOW", + "MEDIUM", + "HIGH", + "CRITICAL" + ] + }, "SortCriteria":{ "type":"list", 
"member":{"shape":"SortCriterion"} @@ -1838,6 +2140,24 @@ "desc" ] }, + "StandardsControl":{ + "type":"structure", + "members":{ + "StandardsControlArn":{"shape":"NonEmptyString"}, + "ControlStatus":{"shape":"ControlStatus"}, + "DisabledReason":{"shape":"NonEmptyString"}, + "ControlStatusUpdatedAt":{"shape":"Timestamp"}, + "ControlId":{"shape":"NonEmptyString"}, + "Title":{"shape":"NonEmptyString"}, + "Description":{"shape":"NonEmptyString"}, + "RemediationUrl":{"shape":"NonEmptyString"}, + "SeverityRating":{"shape":"SeverityRating"} + } + }, + "StandardsControls":{ + "type":"list", + "member":{"shape":"StandardsControl"} + }, "StandardsInputParameterMap":{ "type":"map", "key":{"shape":"NonEmptyString"}, @@ -2082,6 +2402,24 @@ "members":{ } }, + "UpdateStandardsControlRequest":{ + "type":"structure", + "required":["StandardsControlArn"], + "members":{ + "StandardsControlArn":{ + "shape":"NonEmptyString", + "location":"uri", + "locationName":"StandardsControlArn" + }, + "ControlStatus":{"shape":"ControlStatus"}, + "DisabledReason":{"shape":"NonEmptyString"} + } + }, + "UpdateStandardsControlResponse":{ + "type":"structure", + "members":{ + } + }, "VerificationState":{ "type":"string", "enum":[ diff --git a/models/apis/securityhub/2018-10-26/docs-2.json b/models/apis/securityhub/2018-10-26/docs-2.json index 5498505aafd..99fe0f335ed 100644 --- a/models/apis/securityhub/2018-10-26/docs-2.json +++ b/models/apis/securityhub/2018-10-26/docs-2.json @@ -1,6 +1,6 @@ { "version": "2.0", - "service": "Security Hub provides you with a comprehensive view of the security state of your AWS environment and resources. It also provides you with the compliance status of your environment based on CIS AWS Foundations compliance checks. Security Hub collects security data from AWS accounts, services, and integrated third-party products and helps you analyze security trends in your environment to identify the highest priority security issues. For more information about Security Hub, see the AWS Security Hub User Guide .
When you use operations in the Security Hub API, the requests are executed only in the AWS Region that is currently active or in the specific AWS Region that you specify in your request. Any configuration or settings change that results from the operation is applied only to that Region. To make the same change in other Regions, execute the same command for each Region to apply the change to. For example, if your Region is set to us-west-2
, when you use CreateMembers
to add a member account to Security Hub, the association of the member account with the master account is created only in the us-west-2 Region. Security Hub must be enabled for the member account in the same Region that the invite was sent from.
Security Hub provides you with a comprehensive view of the security state of your AWS environment and resources. It also provides you with the compliance status of your environment based on CIS AWS Foundations compliance checks. Security Hub collects security data from AWS accounts, services, and integrated third-party products and helps you analyze security trends in your environment to identify the highest priority security issues. For more information about Security Hub, see the AWS Security Hub User Guide .
When you use operations in the Security Hub API, the requests are executed only in the AWS Region that is currently active or in the specific AWS Region that you specify in your request. Any configuration or settings change that results from the operation is applied only to that Region. To make the same change in other Regions, execute the same command for each Region to apply the change to. For example, if your Region is set to us-west-2
, when you use CreateMembers
to add a member account to Security Hub, the association of the member account with the master account is created only in the us-west-2 Region. Security Hub must be enabled for the member account in the same Region that the invite was sent from.
The following throttling limits apply to using Security Hub API operations:
GetFindings
- RateLimit of 3 requests per second, and a BurstLimit of 6 requests per second.
UpdateFindings
- RateLimit of 1 request per second, and a BurstLimit of 5 requests per second.
All other operations - RateLimit of 10 requests per second, and a BurstLimit of 30 requests per second.
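
Callers that poll GetFindings can stay under these limits with a client-side limiter. A sketch using golang.org/x/time/rate, a general-purpose token-bucket limiter that is not part of this SDK; the 3 req/s rate and burst of 6 come from the limits listed above:

```go
// Client-side pacing for GetFindings: 3 requests/second steady state,
// bursts of up to 6, matching the documented RateLimit/BurstLimit.
package main

import (
	"context"
	"fmt"

	"golang.org/x/time/rate"
)

func main() {
	limiter := rate.NewLimiter(rate.Limit(3), 6)
	ctx := context.Background()
	for i := 0; i < 10; i++ {
		// Wait blocks until the token bucket permits another request.
		if err := limiter.Wait(ctx); err != nil {
			fmt.Println("wait:", err)
			return
		}
		fmt.Println("would call GetFindings, request", i)
	}
}
```
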
Accepts the invitation to be a member account and be monitored by the Security Hub master account that the invitation was sent from. When the member account accepts the invitation, permission is granted to the master account to view findings generated in the member account.
", "BatchDisableStandards": "Disables the standards specified by the provided StandardsSubscriptionArns
. For more information, see Standards Supported in AWS Security Hub.
Returns a list of the custom action targets in Security Hub in your account.
", "DescribeHub": "Returns details about the Hub resource in your account, including the HubArn
and the time when you enabled Security Hub.
Returns information about the products available that you can subscribe to and integrate with Security Hub to consolidate findings.
", + "DescribeStandardsControls": "Returns a list of compliance standards controls.
For each control, the results include information about whether it is currently enabled, the severity, and a link to remediation information.
", "DisableImportFindingsForProduct": "Disables the integration of the specified product with Security Hub. Findings from that product are no longer sent to Security Hub after the integration is disabled.
", "DisableSecurityHub": "Disables Security Hub in your account only in the current Region. To disable Security Hub in all Regions, you must submit one request per Region where you have enabled Security Hub. When you disable Security Hub for a master account, it doesn't disable Security Hub for any associated member accounts.
When you disable Security Hub, your existing findings and insights and any Security Hub configuration settings are deleted after 90 days and can't be recovered. Any standards that were enabled are disabled, and your master and member account associations are removed. If you want to save your existing findings, you must export them before you disable Security Hub.
", "DisassociateFromMasterAccount": "Disassociates the current Security Hub member account from the associated master account.
", "DisassociateMembers": "Disassociates the specified member accounts from the associated master account.
", "EnableImportFindingsForProduct": "Enables the integration of a partner product with Security Hub. Integrated products send findings to Security Hub. When you enable a product integration, a permission policy that grants permission for the product to send findings to Security Hub is applied.
", - "EnableSecurityHub": "Enables Security Hub for your account in the current Region or the Region you specify in the request. When you enable Security Hub, you grant to Security Hub the permissions necessary to gather findings from AWS Config, Amazon GuardDuty, Amazon Inspector, and Amazon Macie. To learn more, see Setting Up AWS Security Hub.
", + "EnableSecurityHub": "Enables Security Hub for your account in the current Region or the Region you specify in the request. Enabling Security Hub also enables the CIS AWS Foundations standard. When you enable Security Hub, you grant to Security Hub the permissions necessary to gather findings from AWS Config, Amazon GuardDuty, Amazon Inspector, and Amazon Macie. To learn more, see Setting Up AWS Security Hub.
", "GetEnabledStandards": "Returns a list of the standards that are currently enabled.
", "GetFindings": "Returns a list of findings that match the specified criteria.
", "GetInsightResults": "Lists the results of the Security Hub insight that the insight ARN specifies.
", @@ -39,7 +40,8 @@ "UntagResource": "Removes one or more tags from a resource.
", "UpdateActionTarget": "Updates the name and description of a custom action target in Security Hub.
", "UpdateFindings": "Updates the Note
and RecordState
of the Security Hub-aggregated findings that the filter attributes specify. Any member account that can view the finding also sees the update to the finding.
Updates the Security Hub insight that the insight ARN specifies.
" + "UpdateInsight": "Updates the Security Hub insight that the insight ARN specifies.
", + "UpdateStandardsControl": "Used to control whether an individual compliance standard control is enabled or disabled.
" }, "shapes": { "AcceptInvitationRequest": { @@ -108,12 +110,60 @@ "GetInsightsRequest$InsightArns": "The ARNs of the insights that you want to describe.
" } }, + "AvailabilityZone": { + "base": "Information about an Availability Zone.
", + "refs": { + "AvailabilityZones$member": null + } + }, + "AvailabilityZones": { + "base": null, + "refs": { + "AwsElbv2LoadBalancerDetails$AvailabilityZones": "The Availability Zones for the load balancer.
" + } + }, + "AwsCloudFrontDistributionDetails": { + "base": "A distribution configuration.
", + "refs": { + "ResourceDetails$AwsCloudFrontDistribution": "Details about a CloudFront distribution.
" + } + }, + "AwsCloudFrontDistributionLogging": { + "base": "A complex type that controls whether access logs are written for the distribution.
", + "refs": { + "AwsCloudFrontDistributionDetails$Logging": "A complex type that controls whether access logs are written for the distribution.
" + } + }, + "AwsCloudFrontDistributionOriginItem": { + "base": "A complex type that describes the Amazon S3 bucket, HTTP server (for example, a web server), Amazon MediaStore, or other server from which CloudFront gets your files.
", + "refs": { + "AwsCloudFrontDistributionOriginItemList$member": null + } + }, + "AwsCloudFrontDistributionOriginItemList": { + "base": null, + "refs": { + "AwsCloudFrontDistributionOrigins$Items": "A complex type that contains origins or origin groups for this distribution.
" + } + }, + "AwsCloudFrontDistributionOrigins": { + "base": "A complex type that contains information about origins and origin groups for this distribution.
", + "refs": { + "AwsCloudFrontDistributionDetails$Origins": "A complex type that contains information about origins for this distribution.
" + } + }, "AwsEc2InstanceDetails": { "base": "The details of an Amazon EC2 instance.
", "refs": { "ResourceDetails$AwsEc2Instance": "Details about an Amazon EC2 instance related to a finding.
" } }, + "AwsElbv2LoadBalancerDetails": { + "base": "Information about a load balancer.
", + "refs": { + "ResourceDetails$AwsElbv2LoadBalancer": "Details about a load balancer.
" + } + }, "AwsIamAccessKeyDetails": { "base": "IAM access key details related to a finding.
", "refs": { @@ -126,6 +176,78 @@ "AwsIamAccessKeyDetails$Status": "The status of the IAM access key related to a finding.
" } }, + "AwsIamRoleAssumeRolePolicyDocument": { + "base": null, + "refs": { + "AwsIamRoleDetails$AssumeRolePolicyDocument": "The trust policy that grants permission to assume the role.
" + } + }, + "AwsIamRoleDetails": { + "base": "Contains information about an IAM role, including all of the role's policies.
", + "refs": { + "ResourceDetails$AwsIamRole": "Details about an IAM role.
" + } + }, + "AwsKmsKeyDetails": { + "base": "Contains metadata about a customer master key (CMK).
", + "refs": { + "ResourceDetails$AwsKmsKey": "Details about a KMS key.
" + } + }, + "AwsLambdaFunctionCode": { + "base": "The code for the Lambda function. You can specify either an object in Amazon S3, or upload a deployment package directly.
", + "refs": { + "AwsLambdaFunctionDetails$Code": "An AwsLambdaFunctionCode
object.
The dead-letter queue for failed asynchronous invocations.
", + "refs": { + "AwsLambdaFunctionDetails$DeadLetterConfig": "The function's dead letter queue.
" + } + }, + "AwsLambdaFunctionDetails": { + "base": "Details about a function's configuration.
", + "refs": { + "ResourceDetails$AwsLambdaFunction": "Details about a Lambda function.
" + } + }, + "AwsLambdaFunctionEnvironment": { + "base": "A function's environment variable settings.
", + "refs": { + "AwsLambdaFunctionDetails$Environment": "The function's environment variables.
" + } + }, + "AwsLambdaFunctionEnvironmentError": { + "base": "Error messages for environment variables that couldn't be applied.
", + "refs": { + "AwsLambdaFunctionEnvironment$Error": "An AwsLambdaFunctionEnvironmentError
object.
An AWS Lambda layer.
", + "refs": { + "AwsLambdaFunctionLayerList$member": null + } + }, + "AwsLambdaFunctionLayerList": { + "base": null, + "refs": { + "AwsLambdaFunctionDetails$Layers": "The function's layers.
" + } + }, + "AwsLambdaFunctionTracingConfig": { + "base": "The function's AWS X-Ray tracing configuration.
", + "refs": { + "AwsLambdaFunctionDetails$TracingConfig": "The function's AWS X-Ray tracing configuration.
" + } + }, + "AwsLambdaFunctionVpcConfig": { + "base": "The VPC security groups and subnets that are attached to a Lambda function. For more information, see VPC Settings.
", + "refs": { + "AwsLambdaFunctionDetails$VpcConfig": "The function's networking configuration.
" + } + }, "AwsS3BucketDetails": { "base": "The details of an Amazon S3 bucket.
", "refs": { @@ -151,10 +273,34 @@ "AwsSecurityFindingList": { "base": null, "refs": { - "BatchImportFindingsRequest$Findings": "A list of findings to import. To successfully import a finding, it must follow the AWS Security Finding Format.
", + "BatchImportFindingsRequest$Findings": "A list of findings to import. To successfully import a finding, it must follow the AWS Security Finding Format. Maximum of 100 findings per request.
", "GetFindingsResponse$Findings": "The findings that matched the filters specified in the request.
" } }, + "AwsSnsTopicDetails": { + "base": "A wrapper type for the topic's Amazon Resource Name (ARN).
", + "refs": { + "ResourceDetails$AwsSnsTopic": "Details about an SNS topic.
" + } + }, + "AwsSnsTopicSubscription": { + "base": "A wrapper type for the attributes of an Amazon SNS subscription.
", + "refs": { + "AwsSnsTopicSubscriptionList$member": null + } + }, + "AwsSnsTopicSubscriptionList": { + "base": null, + "refs": { + "AwsSnsTopicDetails$Subscription": "Subscription is an embedded property that describes the subscription endpoints of an Amazon SNS topic.
" + } + }, + "AwsSqsQueueDetails": { + "base": "Data about a queue.
", + "refs": { + "ResourceDetails$AwsSqsQueue": "Details about an SQS queue.
" + } + }, "BatchDisableStandardsRequest": { "base": null, "refs": { @@ -188,6 +334,8 @@ "Boolean": { "base": null, "refs": { + "AwsCloudFrontDistributionLogging$Enabled": "With this field, you can enable or disable the selected distribution.
", + "AwsCloudFrontDistributionLogging$IncludeCookies": "Specifies whether you want CloudFront to include cookies in access logs.
", "ListMembersRequest$OnlyAssociated": "Specifies which member accounts the response includes based on their relationship status with the master account. The default value is TRUE
. If onlyAssociated
is set to TRUE
, the response includes member accounts whose relationship status with the master is set to ENABLED
or DISABLED
. If onlyAssociated
is set to FALSE
, the response includes all existing member accounts.
Exclusive to findings that are generated as the result of a check run against a specific rule in a supported standard (for example, CIS AWS Foundations). Contains compliance-related finding details.
", + "base": "Exclusive to findings that are generated as the result of a check run against a specific rule in a supported standard (for example, CIS AWS Foundations). Contains compliance-related finding details.
Values include the following:
Allowed values are the following:
PASSED
- Compliance check passed for all evaluated resources.
WARNING
- Some information is missing or this check is not supported given your configuration.
FAILED
- Compliance check failed for at least one evaluated resource.
NOT_AVAILABLE
- Check could not be performed due to a service outage or API error.
This data type is exclusive to findings that are generated as the result of a check run against a specific rule in a supported standard (for example, CIS AWS Foundations). Contains compliance-related finding details.
" } @@ -215,6 +363,13 @@ "ResourceDetails$Container": "Details about a container resource related to a finding.
" } }, + "ControlStatus": { + "base": null, + "refs": { + "StandardsControl$ControlStatus": "The current status of the compliance standard control. Indicates whether the control is enabled or disabled. Security Hub does not check against disabled controls.
", + "UpdateStandardsControlRequest$ControlStatus": "The updated status of the compliance standard control.
" + } + }, "CreateActionTargetRequest": { "base": null, "refs": { @@ -359,6 +514,16 @@ "refs": { } }, + "DescribeStandardsControlsRequest": { + "base": null, + "refs": { + } + }, + "DescribeStandardsControlsResponse": { + "base": null, + "refs": { + } + }, "DisableImportFindingsForProductRequest": { "base": null, "refs": { @@ -402,6 +567,7 @@ "Double": { "base": null, "refs": { + "AwsKmsKeyDetails$CreationDate": "The date and time when the CMK was created.
", "NumberFilter$Gte": "The greater-than-equal condition to be applied to a single field when querying for findings.
", "NumberFilter$Lte": "The less-than-equal condition to be applied to a single field when querying for findings.
", "NumberFilter$Eq": "The equal-to condition to be applied to a single field when querying for findings.
", @@ -431,6 +597,7 @@ "FieldMap": { "base": null, "refs": { + "AwsLambdaFunctionEnvironment$Variables": "Environment variable key-value pairs.
", "AwsSecurityFinding$ProductFields": "A data type where security-findings providers can include additional solution-specific details that aren't part of the defined AwsSecurityFinding
format.
A list of name/value string pairs associated with the finding. These are custom, user-defined fields added to a finding.
", "Resource$Tags": "A list of AWS tags associated with a resource at the time the finding was processed.
", @@ -552,8 +719,13 @@ "Integer": { "base": null, "refs": { + "AwsIamRoleDetails$MaxSessionDuration": "The maximum session duration (in seconds) that you want to set for the specified role.
", + "AwsLambdaFunctionDetails$MemorySize": "The memory that's allocated to the function.
", + "AwsLambdaFunctionDetails$Timeout": "The amount of time that Lambda allows a function to run before stopping it.
", + "AwsLambdaFunctionLayer$CodeSize": "The size of the layer archive in bytes.
", "AwsSecurityFinding$Confidence": "A finding's confidence. Confidence is defined as the likelihood that a finding accurately identifies the behavior or issue that it was intended to identify. Confidence is scored on a 0-100 basis using a ratio scale, where 0 means zero percent confidence and 100 means 100 percent confidence.
", "AwsSecurityFinding$Criticality": "The level of importance assigned to the resources associated with the finding. A score of 0 means that the underlying resources have no criticality, and a score of 100 is reserved for the most critical resources.
", + "AwsSqsQueueDetails$KmsDataKeyReusePeriodSeconds": "The length of time, in seconds, for which Amazon SQS can reuse a data key to encrypt or decrypt messages before calling AWS KMS again.
", "BatchImportFindingsResponse$FailedCount": "The number of findings that failed to import.
", "BatchImportFindingsResponse$SuccessCount": "The number of findings that were successfully imported.
", "DateRange$Value": "A date range value for the date filter.
", @@ -678,6 +850,12 @@ "refs": { } }, + "LoadBalancerState": { + "base": "Information about the state of the load balancer.
", + "refs": { + "AwsElbv2LoadBalancerDetails$State": "The state of the load balancer.
" + } + }, "Malware": { "base": "A list of malware related to a finding.
", "refs": { @@ -728,6 +906,7 @@ "refs": { "DescribeActionTargetsRequest$MaxResults": "The maximum number of results to return.
", "DescribeProductsRequest$MaxResults": "The maximum number of results to return.
", + "DescribeStandardsControlsRequest$MaxResults": "The maximum number of compliance standard controls to return.
", "GetEnabledStandardsRequest$MaxResults": "The maximum number of results to return in the response.
", "GetFindingsRequest$MaxResults": "The maximum number of findings to return.
", "GetInsightsRequest$MaxResults": "The maximum number of items that you want in the response.
", @@ -768,6 +947,8 @@ "DescribeActionTargetsResponse$NextToken": "The token that is required for pagination.
", "DescribeProductsRequest$NextToken": "The token that is required for pagination.
", "DescribeProductsResponse$NextToken": "The token that is required for pagination.
", + "DescribeStandardsControlsRequest$NextToken": "For requests to get the next page of results, the pagination token that was returned with the previous set of results. The initial request does not include a pagination token.
", + "DescribeStandardsControlsResponse$NextToken": "If there are more compliance standards control remaining in the results, then this is the pagination token to use to request the next page of compliance standard controls.
", "GetEnabledStandardsRequest$NextToken": "Paginates results. On your first call to the GetEnabledStandards
operation, set the value of this parameter to NULL
. For subsequent calls to the operation, fill nextToken
in the request with the value of nextToken
from the previous response to continue listing data.
The token that is required for pagination.
", "GetFindingsRequest$NextToken": "Paginates results. On your first call to the GetFindings
operation, set the value of this parameter to NULL
. For subsequent calls to the operation, fill nextToken
in the request with the value of nextToken
from the previous response to continue listing data.
The name of the action target.
", "ActionTarget$Description": "The description of the target action.
", "ArnList$member": null, + "AvailabilityZone$ZoneName": "The name of the Availability Zone.
", + "AvailabilityZone$SubnetId": "The ID of the subnet. You can specify one subnet per Availability Zone.
", + "AwsCloudFrontDistributionDetails$DomainName": "The domain name corresponding to the distribution.
", + "AwsCloudFrontDistributionDetails$ETag": "The entity tag is a hash of the object.
", + "AwsCloudFrontDistributionDetails$LastModifiedTime": "The date and time that the distribution was last modified.
", + "AwsCloudFrontDistributionDetails$Status": "Indicates the current status of the distribution.
", + "AwsCloudFrontDistributionDetails$WebAclId": "A unique identifier that specifies the AWS WAF web ACL, if any, to associate with this distribution.
", + "AwsCloudFrontDistributionLogging$Bucket": "The Amazon S3 bucket to store the access logs in.
", + "AwsCloudFrontDistributionLogging$Prefix": "An optional string that you want CloudFront to prefix to the access log filenames for this distribution.
", + "AwsCloudFrontDistributionOriginItem$DomainName": "Amazon S3 origins: The DNS name of the Amazon S3 bucket from which you want CloudFront to get objects for this origin.
", + "AwsCloudFrontDistributionOriginItem$Id": "A unique identifier for the origin or origin group.
", + "AwsCloudFrontDistributionOriginItem$OriginPath": "An optional element that causes CloudFront to request your content from a directory in your Amazon S3 bucket or your custom origin.
", "AwsEc2InstanceDetails$Type": "The instance type of the instance.
", "AwsEc2InstanceDetails$ImageId": "The Amazon Machine Image (AMI) ID of the instance.
", "AwsEc2InstanceDetails$KeyName": "The key name associated with the instance.
", @@ -800,8 +993,47 @@ "AwsEc2InstanceDetails$VpcId": "The identifier of the VPC that the instance was launched in.
", "AwsEc2InstanceDetails$SubnetId": "The identifier of the subnet that the instance was launched in.
", "AwsEc2InstanceDetails$LaunchedAt": "The date/time the instance was launched.
", - "AwsIamAccessKeyDetails$UserName": "The user associated with the IAM access key related to a finding.
", + "AwsElbv2LoadBalancerDetails$CanonicalHostedZoneId": "The ID of the Amazon Route 53 hosted zone associated with the load balancer.
", + "AwsElbv2LoadBalancerDetails$CreatedTime": "The date and time the load balancer was created.
", + "AwsElbv2LoadBalancerDetails$DNSName": "The public DNS name of the load balancer.
", + "AwsElbv2LoadBalancerDetails$IpAddressType": "The type of IP addresses used by the subnets for your load balancer. The possible values are ipv4 (for IPv4 addresses) and dualstack (for IPv4 and IPv6 addresses).
", + "AwsElbv2LoadBalancerDetails$Scheme": "The nodes of an Internet-facing load balancer have public IP addresses.
", + "AwsElbv2LoadBalancerDetails$Type": "The type of load balancer.
", + "AwsElbv2LoadBalancerDetails$VpcId": "The ID of the VPC for the load balancer.
", + "AwsIamAccessKeyDetails$UserName": "The user associated with the IAM access key related to a finding.
The UserName
parameter has been replaced with the PrincipalName
parameter because access keys can also be assigned to principals that are not IAM users.
The creation date/time of the IAM access key related to a finding.
", + "AwsIamAccessKeyDetails$PrincipalId": "The ID of the principal associated with an access key.
", + "AwsIamAccessKeyDetails$PrincipalType": "The type of principal associated with an access key.
", + "AwsIamAccessKeyDetails$PrincipalName": "The name of the principal.
", + "AwsIamRoleDetails$CreateDate": "The date and time, in ISO 8601 date-time format, when the role was created.
", + "AwsIamRoleDetails$RoleId": "The stable and unique string identifying the role.
", + "AwsIamRoleDetails$RoleName": "The friendly name that identifies the role.
", + "AwsIamRoleDetails$Path": "The path to the role.
", + "AwsKmsKeyDetails$AWSAccountId": "The twelve-digit account ID of the AWS account that owns the CMK.
", + "AwsKmsKeyDetails$KeyId": "The globally unique identifier for the CMK.
", + "AwsKmsKeyDetails$KeyManager": "The manager of the CMK. CMKs in your AWS account are either customer managed or AWS managed.
", + "AwsKmsKeyDetails$KeyState": "The state of the CMK.
", + "AwsKmsKeyDetails$Origin": "The source of the CMK's key material. When this value is AWS_KMS, AWS KMS created the key material. When this value is EXTERNAL, the key material was imported from your existing key management infrastructure or the CMK lacks key material. When this value is AWS_CLOUDHSM, the key material was created in the AWS CloudHSM cluster associated with a custom key store.
", + "AwsLambdaFunctionCode$S3Bucket": "An Amazon S3 bucket in the same AWS Region as your function. The bucket can be in a different AWS account.
", + "AwsLambdaFunctionCode$S3Key": "The Amazon S3 key of the deployment package.
", + "AwsLambdaFunctionCode$S3ObjectVersion": "For versioned objects, the version of the deployment package object to use.
", + "AwsLambdaFunctionCode$ZipFile": "The base64-encoded contents of the deployment package. AWS SDK and AWS CLI clients handle the encoding for you.
", + "AwsLambdaFunctionDeadLetterConfig$TargetArn": "The Amazon Resource Name (ARN) of an Amazon SQS queue or Amazon SNS topic.
", + "AwsLambdaFunctionDetails$CodeSha256": "The SHA256 hash of the function's deployment package.
", + "AwsLambdaFunctionDetails$FunctionName": "The name of the function.
", + "AwsLambdaFunctionDetails$Handler": "The function that Lambda calls to begin executing your function.
", + "AwsLambdaFunctionDetails$KmsKeyArn": "The KMS key that's used to encrypt the function's environment variables. This key is only returned if you've configured a customer managed CMK.
", + "AwsLambdaFunctionDetails$LastModified": "The date and time that the function was last updated, in ISO-8601 format (YYYY-MM-DDThh:mm:ss.sTZD).
", + "AwsLambdaFunctionDetails$MasterArn": "For Lambda@Edge functions, the ARN of the master function.
", + "AwsLambdaFunctionDetails$RevisionId": "The latest updated revision of the function or alias.
", + "AwsLambdaFunctionDetails$Role": "The function's execution role.
", + "AwsLambdaFunctionDetails$Runtime": "The runtime environment for the Lambda function.
", + "AwsLambdaFunctionDetails$Version": "The version of the Lambda function.
", + "AwsLambdaFunctionEnvironmentError$ErrorCode": "The error code.
", + "AwsLambdaFunctionEnvironmentError$Message": "The error message.
", + "AwsLambdaFunctionLayer$Arn": "The Amazon Resource Name (ARN) of the function layer.
", + "AwsLambdaFunctionTracingConfig$Mode": "The tracing mode.
", + "AwsLambdaFunctionVpcConfig$VpcId": "The ID of the VPC.
", "AwsS3BucketDetails$OwnerId": "The canonical user ID of the owner of the S3 bucket.
", "AwsS3BucketDetails$OwnerName": "The display name of the owner of the S3 bucket.
", "AwsSecurityFinding$SchemaVersion": "The schema version that a finding is formatted for.
", @@ -816,6 +1048,14 @@ "AwsSecurityFinding$Title": "A finding's title.
In this release, Title
is a required property.
A finding's description.
In this release, Description
is a required property.
A URL that links to a page about the current finding in the security-findings provider's solution.
", + "AwsSnsTopicDetails$KmsMasterKeyId": "The ID of an AWS-managed customer master key (CMK) for Amazon SNS or a custom CMK.
", + "AwsSnsTopicDetails$TopicName": "The name of the topic.
", + "AwsSnsTopicDetails$Owner": "The subscription's owner.
", + "AwsSnsTopicSubscription$Endpoint": "The subscription's endpoint (format depends on the protocol).
", + "AwsSnsTopicSubscription$Protocol": "The subscription's protocol.
", + "AwsSqsQueueDetails$KmsMasterKeyId": "The ID of an AWS-managed customer master key (CMK) for Amazon SQS or a custom CMK.
", + "AwsSqsQueueDetails$QueueName": "The name of the new queue.
", + "AwsSqsQueueDetails$DeadLetterTargetArn": "The Amazon Resource Name (ARN) of the dead-letter queue to which Amazon SQS moves messages after the value of maxReceiveCount is exceeded.
", "CategoryList$member": null, "ContainerDetails$Name": "The name of the container related to a finding.
", "ContainerDetails$ImageId": "The identifier of the image related to a finding.
", @@ -837,6 +1077,7 @@ "DescribeHubRequest$HubArn": "The ARN of the Hub resource to retrieve.
", "DescribeHubResponse$HubArn": "The ARN of the Hub resource retrieved.
", "DescribeHubResponse$SubscribedAt": "The date and time when Security Hub was enabled in the account.
", + "DescribeStandardsControlsRequest$StandardsSubscriptionArn": "The ARN of a resource that represents your subscription to a supported standard.
", "DisableImportFindingsForProductRequest$ProductSubscriptionArn": "The ARN of the integrated product to disable the integration for.
", "EnableImportFindingsForProductRequest$ProductArn": "The ARN of the product to enable the integration for.
", "EnableImportFindingsForProductResponse$ProductSubscriptionArn": "The ARN of your subscription to the product to enable integrations for.
", @@ -866,6 +1107,8 @@ "LimitExceededException$Code": null, "ListInvitationsResponse$NextToken": "The token that is required for pagination.
", "ListMembersResponse$NextToken": "The token that is required for pagination.
", + "LoadBalancerState$Code": "The state code. The initial state of the load balancer is provisioning. After the load balancer is fully set up and ready to route traffic, its state is active. If the load balancer could not be set up, its state is failed.
", + "LoadBalancerState$Reason": "A description of the state.
", "Malware$Name": "The name of the malware that was observed.
", "Malware$Path": "The file system path of the malware that was observed.
", "MapFilter$Key": "The key of the map filter.
", @@ -881,6 +1124,7 @@ "Network$DestinationIpV4": "The destination IPv4 address of network-related information about a finding.
", "Network$DestinationIpV6": "The destination IPv6 address of network-related information about a finding.
", "Network$DestinationDomain": "The destination domain of network-related information about a finding.
", + "NonEmptyStringList$member": null, "Note$Text": "The text of a note.
", "Note$UpdatedBy": "The principal that created a note.
", "Note$UpdatedAt": "The timestamp of when the note was updated.
", @@ -910,7 +1154,14 @@ "ResourceNotFoundException$Message": null, "ResourceNotFoundException$Code": null, "Result$ProcessingResult": "The reason that the account wasn't be processed.
", + "SecurityGroups$member": null, "SortCriterion$Field": "The finding attribute used to sort findings.
", + "StandardsControl$StandardsControlArn": "The ARN of the compliance standard control.
", + "StandardsControl$DisabledReason": "The reason provided for the most recent change in status for the control.
", + "StandardsControl$ControlId": "The identifier of the compliance standard control.
", + "StandardsControl$Title": "The title of the compliance standard control.
", + "StandardsControl$Description": "The longer description of the compliance standard control. Provides information about what the control is checking for.
", + "StandardsControl$RemediationUrl": "A link to remediation information for the control in the Security Hub user documentation
", "StandardsInputParameterMap$key": null, "StandardsInputParameterMap$value": null, "StandardsSubscription$StandardsSubscriptionArn": "The ARN of a resource that represents your subscription to a supported standard.
", @@ -929,7 +1180,16 @@ "UpdateActionTargetRequest$Description": "The updated description for the custom action target.
", "UpdateInsightRequest$InsightArn": "The ARN of the insight that you want to update.
", "UpdateInsightRequest$Name": "The updated name for the insight.
", - "UpdateInsightRequest$GroupByAttribute": "The updated GroupBy
attribute that defines this insight.
", + "UpdateInsightRequest$GroupByAttribute": "The updated GroupBy
attribute that defines this insight.
", + "UpdateStandardsControlRequest$StandardsControlArn": "The ARN of the compliance standard control to enable or disable.
", + "UpdateStandardsControlRequest$DisabledReason": "A description of the reason why you are disabling a compliance standard control.
" + } + }, + "NonEmptyStringList": { + "base": null, + "refs": { + "AwsLambdaFunctionVpcConfig$SecurityGroupIds": "A list of VPC security groups IDs.
", + "AwsLambdaFunctionVpcConfig$SubnetIds": "A list of VPC subnet IDs.
" } }, "Note": { @@ -1077,12 +1337,24 @@ "InviteMembersResponse$UnprocessedAccounts": "A list of account ID and email address pairs of the AWS accounts that couldn't be processed.
" } }, + "SecurityGroups": { + "base": null, + "refs": { + "AwsElbv2LoadBalancerDetails$SecurityGroups": "The IDs of the security groups for the load balancer.
" + } + }, "Severity": { "base": "The severity of the finding.
", "refs": { "AwsSecurityFinding$Severity": "A finding's severity.
" } }, + "SeverityRating": { + "base": null, + "refs": { + "StandardsControl$SeverityRating": "The severity of findings generated from this compliance standard control.
The finding severity is based on an assessment of how easy it would be to compromise AWS resources if the compliance issue is detected.
" + } + }, "SortCriteria": { "base": null, "refs": { @@ -1101,6 +1373,18 @@ "SortCriterion$SortOrder": "The order used to sort findings.
" } }, + "StandardsControl": { + "base": "Details for an individual compliance standard control.
", + "refs": { + "StandardsControls$member": null + } + }, + "StandardsControls": { + "base": null, + "refs": { + "DescribeStandardsControlsResponse$Controls": "A list of compliance standards controls.
" + } + }, "StandardsInputParameterMap": { "base": null, "refs": { @@ -1290,7 +1574,8 @@ "refs": { "Invitation$InvitedAt": "The timestamp of when the invitation was sent.
", "Member$InvitedAt": "A timestamp for the date and time when the invitation was sent to the member account.
", - "Member$UpdatedAt": "The timestamp for the date and time when the member account was updated.
" + "Member$UpdatedAt": "The timestamp for the date and time when the member account was updated.
", + "StandardsControl$ControlStatusUpdatedAt": "The date and time that the status of the compliance standard control was most recently updated.
" } }, "TypeList": { @@ -1339,6 +1624,16 @@ "refs": { } }, + "UpdateStandardsControlRequest": { + "base": null, + "refs": { + } + }, + "UpdateStandardsControlResponse": { + "base": null, + "refs": { + } + }, "VerificationState": { "base": null, "refs": { diff --git a/models/apis/sesv2/2019-09-27/api-2.json b/models/apis/sesv2/2019-09-27/api-2.json index 3422598bfaa..694b87d8fac 100644 --- a/models/apis/sesv2/2019-09-27/api-2.json +++ b/models/apis/sesv2/2019-09-27/api-2.json @@ -91,6 +91,7 @@ "input":{"shape":"CreateEmailIdentityRequest"}, "output":{"shape":"CreateEmailIdentityResponse"}, "errors":[ + {"shape":"AlreadyExistsException"}, {"shape":"LimitExceededException"}, {"shape":"TooManyRequestsException"}, {"shape":"BadRequestException"}, @@ -599,6 +600,20 @@ {"shape":"BadRequestException"} ] }, + "PutEmailIdentityDkimSigningAttributes":{ + "name":"PutEmailIdentityDkimSigningAttributes", + "http":{ + "method":"PUT", + "requestUri":"/v1/email/identities/{EmailIdentity}/dkim/signing" + }, + "input":{"shape":"PutEmailIdentityDkimSigningAttributesRequest"}, + "output":{"shape":"PutEmailIdentityDkimSigningAttributesResponse"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"BadRequestException"} + ] + }, "PutEmailIdentityFeedbackAttributes":{ "name":"PutEmailIdentityFeedbackAttributes", "http":{ @@ -892,7 +907,8 @@ "required":["EmailIdentity"], "members":{ "EmailIdentity":{"shape":"Identity"}, - "Tags":{"shape":"TagList"} + "Tags":{"shape":"TagList"}, + "DkimSigningAttributes":{"shape":"DkimSigningAttributes"} } }, "CreateEmailIdentityResponse":{ @@ -1083,9 +1099,28 @@ "members":{ "SigningEnabled":{"shape":"Enabled"}, "Status":{"shape":"DkimStatus"}, - "Tokens":{"shape":"DnsTokenList"} + "Tokens":{"shape":"DnsTokenList"}, + "SigningAttributesOrigin":{"shape":"DkimSigningAttributesOrigin"} + } + }, + "DkimSigningAttributes":{ + "type":"structure", + "required":[ + "DomainSigningSelector", + "DomainSigningPrivateKey" + ], + "members":{ + "DomainSigningSelector":{"shape":"Selector"}, + "DomainSigningPrivateKey":{"shape":"PrivateKey"} } }, + "DkimSigningAttributesOrigin":{ + "type":"string", + "enum":[ + "AWS_SES", + "EXTERNAL" + ] + }, "DkimStatus":{ "type":"string", "enum":[ @@ -1848,6 +1883,13 @@ } }, "PoolName":{"type":"string"}, + "PrivateKey":{ + "type":"string", + "max":20480, + "min":1, + "pattern":"^[a-zA-Z0-9+\\/]+={0,2}$", + "sensitive":true + }, "PutAccountDedicatedIpWarmupAttributesRequest":{ "type":"structure", "members":{ @@ -2037,6 +2079,29 @@ "members":{ } }, + "PutEmailIdentityDkimSigningAttributesRequest":{ + "type":"structure", + "required":[ + "EmailIdentity", + "SigningAttributesOrigin" + ], + "members":{ + "EmailIdentity":{ + "shape":"Identity", + "location":"uri", + "locationName":"EmailIdentity" + }, + "SigningAttributesOrigin":{"shape":"DkimSigningAttributesOrigin"}, + "SigningAttributes":{"shape":"DkimSigningAttributes"} + } + }, + "PutEmailIdentityDkimSigningAttributesResponse":{ + "type":"structure", + "members":{ + "DkimStatus":{"shape":"DkimStatus"}, + "DkimTokens":{"shape":"DnsTokenList"} + } + }, "PutEmailIdentityFeedbackAttributesRequest":{ "type":"structure", "required":["EmailIdentity"], @@ -2106,6 +2171,12 @@ "LastFreshStart":{"shape":"LastFreshStart"} } }, + "Selector":{ + "type":"string", + "max":63, + "min":1, + "pattern":"^(([a-zA-Z0-9]|[a-zA-Z0-9][a-zA-Z0-9\\-]*[a-zA-Z0-9]))$" + }, "SendEmailRequest":{ "type":"structure", "required":[ diff --git a/models/apis/sesv2/2019-09-27/docs-2.json 
b/models/apis/sesv2/2019-09-27/docs-2.json index b24883ff7ca..42ee1353bbe 100644 --- a/models/apis/sesv2/2019-09-27/docs-2.json +++ b/models/apis/sesv2/2019-09-27/docs-2.json @@ -5,48 +5,49 @@ "CreateConfigurationSet": "Create a configuration set. Configuration sets are groups of rules that you can apply to the emails that you send. You apply a configuration set to an email by specifying the name of the configuration set when you call the Amazon SES API v2. When you apply a configuration set to an email, all of the rules in that configuration set are applied to the email.
", "CreateConfigurationSetEventDestination": "Create an event destination. Events include message sends, deliveries, opens, clicks, bounces, and complaints. Event destinations are places that you can send information about these events to. For example, you can send event data to Amazon SNS to receive notifications when you receive bounces or complaints, or you can use Amazon Kinesis Data Firehose to stream data to Amazon S3 for long-term storage.
A single configuration set can include more than one event destination.
", "CreateDedicatedIpPool": "Create a new pool of dedicated IP addresses. A pool can include one or more dedicated IP addresses that are associated with your AWS account. You can associate a pool with a configuration set. When you send an email that uses that configuration set, the message is sent from one of the addresses in the associated pool.
", - "CreateDeliverabilityTestReport": "Create a new predictive inbox placement test. Predictive inbox placement tests can help you predict how your messages will be handled by various email providers around the world. When you perform a predictive inbox placement test, you provide a sample message that contains the content that you plan to send to your customers. Amazon SES API v2 then sends that message to special email addresses spread across several major email providers. After about 24 hours, the test is complete, and you can use the GetDeliverabilityTestReport
operation to view the results of the test.
", - "CreateEmailIdentity": "Starts the process of verifying an email identity. An identity is an email address or domain that you use when you send email. Before you can use an identity to send email, you first have to verify it. By verifying an identity, you demonstrate that you're the owner of the identity, and that you've given Amazon SES API v2 permission to send email from the identity.
When you verify an email address, Amazon SES sends an email to the address. Your email address is verified as soon as you follow the link in the verification email.
When you verify a domain, this operation provides a set of DKIM tokens, which you can convert into CNAME tokens. You add these CNAME tokens to the DNS configuration for your domain. Your domain is verified when Amazon SES detects these records in the DNS configuration for your domain. For some DNS providers, it can take 72 hours or more to complete the domain verification process.
", + "CreateDeliverabilityTestReport": "Create a new predictive inbox placement test. Predictive inbox placement tests can help you predict how your messages will be handled by various email providers around the world. When you perform a predictive inbox placement test, you provide a sample message that contains the content that you plan to send to your customers. Amazon SES then sends that message to special email addresses spread across several major email providers. After about 24 hours, the test is complete, and you can use the GetDeliverabilityTestReport
operation to view the results of the test.
", + "CreateEmailIdentity": "Starts the process of verifying an email identity. An identity is an email address or domain that you use when you send email. Before you can use an identity to send email, you first have to verify it. By verifying an identity, you demonstrate that you're the owner of the identity, and that you've given Amazon SES API v2 permission to send email from the identity.
When you verify an email address, Amazon SES sends an email to the address. Your email address is verified as soon as you follow the link in the verification email.
When you verify a domain without specifying the DkimSigningAttributes
object, this operation provides a set of DKIM tokens. You can convert these tokens into CNAME records, which you then add to the DNS configuration for your domain. Your domain is verified when Amazon SES detects these records in the DNS configuration for your domain. This verification method is known as Easy DKIM.
Alternatively, you can perform the verification process by providing your own public-private key pair. This verification method is known as Bring Your Own DKIM (BYODKIM). To use BYODKIM, your call to the CreateEmailIdentity
operation has to include the DkimSigningAttributes
object. When you specify this object, you provide a selector (a component of the DNS record name that identifies the public key that you want to use for DKIM authentication) and a private key.
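The BYODKIM flow described above can be sketched in Go as follows; a hedged sketch, not taken from this diff. The domain, selector, and key are placeholders, and DomainSigningPrivateKey must carry the base64-encoded private key (see the PrivateKey shape added to api-2.json above):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/sesv2"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := sesv2.New(cfg)

	// Start verifying a domain with BYODKIM rather than Easy DKIM.
	// The selector and key below are placeholders; the private key must be
	// a base64-encoded 1024-bit RSA key per the PrivateKey shape above.
	req := svc.CreateEmailIdentityRequest(&sesv2.CreateEmailIdentityInput{
		EmailIdentity: aws.String("example.com"),
		DkimSigningAttributes: &sesv2.DkimSigningAttributes{
			DomainSigningSelector:   aws.String("selector1"),
			DomainSigningPrivateKey: aws.String("BASE64_ENCODED_PRIVATE_KEY"),
		},
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```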
", "DeleteConfigurationSet": "Delete an existing configuration set.
Configuration sets are groups of rules that you can apply to the emails you send. You apply a configuration set to an email by including a reference to the configuration set in the headers of the email. When you apply a configuration set to an email, all of the rules in that configuration set are applied to the email.
", "DeleteConfigurationSetEventDestination": "Delete an event destination.
Events include message sends, deliveries, opens, clicks, bounces, and complaints. Event destinations are places that you can send information about these events to. For example, you can send event data to Amazon SNS to receive notifications when you receive bounces or complaints, or you can use Amazon Kinesis Data Firehose to stream data to Amazon S3 for long-term storage.
", "DeleteDedicatedIpPool": "Delete a dedicated IP pool.
", "DeleteEmailIdentity": "Deletes an email identity. An identity can be either an email address or a domain name.
", - "DeleteSuppressedDestination": "Used to delete a suppressed email destination from your suppression list.
", + "DeleteSuppressedDestination": "Removes an email address from the suppression list for your account.
", "GetAccount": "Obtain information about the email-sending status and capabilities of your Amazon SES account in the current AWS Region.
", "GetBlacklistReports": "Retrieve a list of the blacklists that your dedicated IP addresses appear on.
", "GetConfigurationSet": "Get information about an existing configuration set, including the dedicated IP pool that it's associated with, whether or not it's enabled for sending email, and more.
Configuration sets are groups of rules that you can apply to the emails you send. You apply a configuration set to an email by including a reference to the configuration set in the headers of the email. When you apply a configuration set to an email, all of the rules in that configuration set are applied to the email.
", "GetConfigurationSetEventDestinations": "Retrieve a list of event destinations that are associated with a configuration set.
Events include message sends, deliveries, opens, clicks, bounces, and complaints. Event destinations are places that you can send information about these events to. For example, you can send event data to Amazon SNS to receive notifications when you receive bounces or complaints, or you can use Amazon Kinesis Data Firehose to stream data to Amazon S3 for long-term storage.
", "GetDedicatedIp": "Get information about a dedicated IP address, including the name of the dedicated IP pool that it's associated with, as well information about the automatic warm-up process for the address.
", "GetDedicatedIps": "List the dedicated IP addresses that are associated with your AWS account.
", - "GetDeliverabilityDashboardOptions": "Retrieve information about the status of the Deliverability dashboard for your account. When the Deliverability dashboard is enabled, you gain access to reputation, deliverability, and other metrics for the domains that you use to send email. You also gain the ability to perform predictive inbox placement tests.
When you use the Deliverability dashboard, you pay a monthly subscription charge, in addition to any other fees that you accrue by using Amazon SES and other AWS services. For more information about the features and cost of a Deliverability dashboard subscription, see Amazon Pinpoint Pricing.
", + "GetDeliverabilityDashboardOptions": "Retrieve information about the status of the Deliverability dashboard for your account. When the Deliverability dashboard is enabled, you gain access to reputation, deliverability, and other metrics for the domains that you use to send email. You also gain the ability to perform predictive inbox placement tests.
When you use the Deliverability dashboard, you pay a monthly subscription charge, in addition to any other fees that you accrue by using Amazon SES and other AWS services. For more information about the features and cost of a Deliverability dashboard subscription, see Amazon SES Pricing.
", "GetDeliverabilityTestReport": "Retrieve the results of a predictive inbox placement test.
", "GetDomainDeliverabilityCampaign": "Retrieve all the deliverability data for a specific campaign. This data is available for a campaign only if the campaign sent email by using a domain that the Deliverability dashboard is enabled for.
", "GetDomainStatisticsReport": "Retrieve inbox placement and engagement rates for the domains that you use to send email.
", "GetEmailIdentity": "Provides information about a specific identity, including the identity's verification status, its DKIM authentication status, and its custom Mail-From settings.
", - "GetSuppressedDestination": "Used to fetch a single suppressed email destination from your suppression list.
", + "GetSuppressedDestination": "Retrieves information about a specific email address that's on the suppression list for your account.
", "ListConfigurationSets": "List all of the configuration sets associated with your account in the current region.
Configuration sets are groups of rules that you can apply to the emails you send. You apply a configuration set to an email by including a reference to the configuration set in the headers of the email. When you apply a configuration set to an email, all of the rules in that configuration set are applied to the email.
", "ListDedicatedIpPools": "List all of the dedicated IP pools that exist in your AWS account in the current Region.
", "ListDeliverabilityTestReports": "Show a list of the predictive inbox placement tests that you've performed, regardless of their statuses. For predictive inbox placement tests that are complete, you can use the GetDeliverabilityTestReport
operation to view the results.
", "ListDomainDeliverabilityCampaigns": "Retrieve deliverability data for all the campaigns that used a specific domain to send email during a specified time range. This data is available for a domain only if you enabled the Deliverability dashboard for the domain.
", "ListEmailIdentities": "Returns a list of all of the email identities that are associated with your AWS account. An identity can be either an email address or a domain. This operation returns identities that are verified as well as those that aren't. This operation returns identities that are associated with Amazon SES and Amazon Pinpoint.
", - "ListSuppressedDestinations": "Used to fetch a list suppressed email destinations from your suppression list.
", + "ListSuppressedDestinations": "Retrieves a list of email addresses that are on the suppression list for your account.
", "ListTagsForResource": "Retrieve a list of the tags (keys and values) that are associated with a specified resource. A tag is a label that you optionally define and associate with a resource. Each tag consists of a required tag key and an optional associated tag value. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key.
", "PutAccountDedicatedIpWarmupAttributes": "Enable or disable the automatic warm-up feature for dedicated IP addresses.
", "PutAccountSendingAttributes": "Enable or disable the ability of your account to send email.
", - "PutAccountSuppressionAttributes": "Change your account's suppression preferences for your account.
", + "PutAccountSuppressionAttributes": "Change the settings for the account-level suppression list.
", "PutConfigurationSetDeliveryOptions": "Associate a configuration set with a dedicated IP pool. You can use dedicated IP pools to create groups of dedicated IP addresses for sending specific types of email.
", "PutConfigurationSetReputationOptions": "Enable or disable collection of reputation metrics for emails that you send using a particular configuration set in a specific AWS Region.
", "PutConfigurationSetSendingOptions": "Enable or disable email sending for messages that use a particular configuration set in a specific AWS Region.
", - "PutConfigurationSetSuppressionOptions": "Specify your account's suppression preferences for a configuration set.
", + "PutConfigurationSetSuppressionOptions": "Specify the account suppression list preferences for a configuration set.
", "PutConfigurationSetTrackingOptions": "Specify a custom domain to use for open and click tracking elements in email that you send.
", "PutDedicatedIpInPool": "Move a dedicated IP address to an existing dedicated IP pool.
The dedicated IP address that you specify must already exist, and must be associated with your AWS account.
The dedicated IP pool you specify must already exist. You can create a new pool by using the CreateDedicatedIpPool
operation.
", - "PutDeliverabilityDashboardOption": "Enable or disable the Deliverability dashboard. When you enable the Deliverability dashboard, you gain access to reputation, deliverability, and other metrics for the domains that you use to send email. You also gain the ability to perform predictive inbox placement tests.
When you use the Deliverability dashboard, you pay a monthly subscription charge, in addition to any other fees that you accrue by using Amazon SES and other AWS services. For more information about the features and cost of a Deliverability dashboard subscription, see Amazon Pinpoint Pricing.
", + "PutDeliverabilityDashboardOption": "Enable or disable the Deliverability dashboard. When you enable the Deliverability dashboard, you gain access to reputation, deliverability, and other metrics for the domains that you use to send email. You also gain the ability to perform predictive inbox placement tests.
When you use the Deliverability dashboard, you pay a monthly subscription charge, in addition to any other fees that you accrue by using Amazon SES and other AWS services. For more information about the features and cost of a Deliverability dashboard subscription, see Amazon SES Pricing.
", "PutEmailIdentityDkimAttributes": "Used to enable or disable DKIM authentication for an email identity.
", + "PutEmailIdentityDkimSigningAttributes": "Used to configure or change the DKIM authentication settings for an email domain identity. You can use this operation to do any of the following:
Update the signing attributes for an identity that uses Bring Your Own DKIM (BYODKIM).
Change from using no DKIM authentication to using Easy DKIM.
Change from using no DKIM authentication to using BYODKIM.
Change from using Easy DKIM to using BYODKIM.
Change from using BYODKIM to using Easy DKIM.
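As a hedged sketch of the last case in this list (switching an identity from BYODKIM back to Easy DKIM), the call might look like the following; the DkimSigningAttributesOriginAwsSes constant name is assumed from the AWS_SES enum value in the new DkimSigningAttributesOrigin shape, and svc is an *sesv2.Client as in the previous sketch:

```go
// Sketch: move a domain identity from BYODKIM back to Easy DKIM.
// Assumes svc is an *sesv2.Client built with sesv2.New(cfg) as above;
// the enum constant name is inferred from the AWS_SES value.
req := svc.PutEmailIdentityDkimSigningAttributesRequest(&sesv2.PutEmailIdentityDkimSigningAttributesInput{
	EmailIdentity:           aws.String("example.com"),
	SigningAttributesOrigin: sesv2.DkimSigningAttributesOriginAwsSes,
})
resp, err := req.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
// On success, resp.DkimTokens holds the strings for the new CNAME records.
log.Println("DKIM status:", resp.DkimStatus)
```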
", "PutEmailIdentityFeedbackAttributes": "Used to enable or disable feedback forwarding for an identity. This setting determines what happens when an identity is used to send an email that results in a bounce or complaint event.
If the value is true
, you receive email notifications when bounce or complaint events occur. These notifications are sent to the address that you specified in the Return-Path
header of the original email.
You're required to have a method of tracking bounces and complaints. If you haven't set up another mechanism for receiving bounce or complaint notifications (for example, by setting up an event destination), you receive an email notification when these events occur (even if this setting is disabled).
", "PutEmailIdentityMailFromAttributes": "Used to enable or disable the custom Mail-From domain configuration for an email identity.
", - "PutSuppressedDestination": "Puts (overwrites) an email destination in your suppression list.
", - "SendEmail": "Sends an email message. You can use the Amazon SES API v2 to send two types of messages:
Simple – A standard email message. When you create this type of message, you specify the sender, the recipient, and the message body, and the Amazon SES API v2 assembles the message for you.
Raw – A raw, MIME-formatted email message. When you send this type of email, you have to specify all of the message headers, as well as the message body. You can use this message type to send messages that contain attachments. The message that you specify has to be a valid MIME message.
", + "PutSuppressedDestination": "Adds an email address to the suppression list for your account.
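A rough Go sketch of the suppression-list operations described here, with a placeholder address (svc is an *sesv2.Client as in the sketches above; names and values are illustrative, not taken from this diff):

```go
// Sketch: add an address to the account-level suppression list, then
// read it back. Assumes svc is an *sesv2.Client as in the sketches above.
putReq := svc.PutSuppressedDestinationRequest(&sesv2.PutSuppressedDestinationInput{
	EmailAddress: aws.String("bounced-recipient@example.com"),
	Reason:       sesv2.SuppressionListReasonBounce,
})
if _, err := putReq.Send(context.Background()); err != nil {
	log.Fatal(err)
}

getReq := svc.GetSuppressedDestinationRequest(&sesv2.GetSuppressedDestinationInput{
	EmailAddress: aws.String("bounced-recipient@example.com"),
})
resp, err := getReq.Send(context.Background())
if err != nil {
	log.Fatal(err)
}
log.Println(resp.SuppressedDestination)
```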
", + "SendEmail": "Sends an email message. You can use the Amazon SES API v2 to send two types of messages:
Simple – A standard email message. When you create this type of message, you specify the sender, the recipient, and the message body, and Amazon SES assembles the message for you.
Raw – A raw, MIME-formatted email message. When you send this type of email, you have to specify all of the message headers, as well as the message body. You can use this message type to send messages that contain attachments. The message that you specify has to be a valid MIME message.
", "TagResource": "Add one or more tags (keys and values) to a specified resource. A tag is a label that you optionally define and associate with a resource. Tags can help you categorize and manage resources in different ways, such as by purpose, owner, environment, or other criteria. A resource can have as many as 50 tags.
Each tag consists of a required tag key and an associated tag value, both of which you define. A tag key is a general label that acts as a category for more specific tag values. A tag value acts as a descriptor within a tag key.
", "UntagResource": "Remove one or more tags (keys and values) from a specified resource.
", "UpdateConfigurationSetEventDestination": "Update the configuration of an event destination for a configuration set.
Events include message sends, deliveries, opens, clicks, bounces, and complaints. Event destinations are places that you can send information about these events to. For example, you can send event data to Amazon SNS to receive notifications when you receive bounces or complaints, or you can use Amazon Kinesis Data Firehose to stream data to Amazon S3 for long-term storage.
" @@ -139,7 +140,7 @@ "Charset": { "base": null, "refs": { - "Content$Charset": "The character set for the content. Because of the constraints of the SMTP protocol, the Amazon SES API v2 uses 7-bit ASCII by default. If the text includes characters outside of the ASCII range, you have to specify a character set. For example, you could specify UTF-8
, ISO-8859-1
, or Shift_JIS
.
", + "Content$Charset": "The character set for the content. Because of the constraints of the SMTP protocol, Amazon SES uses 7-bit ASCII by default. If the text includes characters outside of the ASCII range, you have to specify a character set. For example, you could specify UTF-8
, ISO-8859-1
, or Shift_JIS
.
", "PutConfigurationSetDeliveryOptionsRequest$ConfigurationSetName": "The name of the configuration set that you want to associate with a dedicated IP pool.
", "PutConfigurationSetReputationOptionsRequest$ConfigurationSetName": "The name of the configuration set that you want to enable or disable reputation metric tracking for.
", "PutConfigurationSetSendingOptionsRequest$ConfigurationSetName": "The name of the configuration set that you want to enable or disable email sending for.
", - "PutConfigurationSetSuppressionOptionsRequest$ConfigurationSetName": "The name of the configuration set that you want to enable or disable email sending for.
", + "PutConfigurationSetSuppressionOptionsRequest$ConfigurationSetName": "The name of the configuration set that you want to change the suppression list preferences for.
", "PutConfigurationSetTrackingOptionsRequest$ConfigurationSetName": "The name of the configuration set that you want to add a custom tracking domain to.
", "SendEmailRequest$ConfigurationSetName": "The name of the configuration set that you want to use when sending the email.
", "UpdateConfigurationSetEventDestinationRequest$ConfigurationSetName": "The name of the configuration set that contains the event destination that you want to modify.
" @@ -246,7 +247,7 @@ } }, "CreateEmailIdentityResponse": { - "base": "If the email identity is a domain, this object contains tokens that you can use to create a set of CNAME records. To sucessfully verify your domain, you have to add these records to the DNS configuration for your domain.
If the email identity is an email address, this object is empty.
", + "base": "If the email identity is a domain, this object contains information about the DKIM verification status for the domain.
If the email identity is an email address, this object is empty.
", "refs": { } }, @@ -270,7 +271,7 @@ } }, "DedicatedIp": { - "base": "Contains information about a dedicated IP address that is associated with your Amazon SES API v2 account.
To learn more about requesting dedicated IP addresses, see Requesting and Relinquishing Dedicated IP Addresses in the Amazon SES Developer Guide.
", + "base": "Contains information about a dedicated IP address that is associated with your Amazon SES account.
To learn more about requesting dedicated IP addresses, see Requesting and Relinquishing Dedicated IP Addresses in the Amazon SES Developer Guide.
", "refs": { "DedicatedIpList$member": null, "GetDedicatedIpResponse$DedicatedIp": "An object that contains information about a dedicated IP address.
" @@ -329,7 +330,7 @@ } }, "DeleteSuppressedDestinationRequest": { - "base": "A request to delete a suppressed email destination.
", + "base": "A request to remove an email address from the suppression list for your account.
", "refs": { } }, @@ -396,16 +397,31 @@ } }, "DkimAttributes": { - "base": "An object that contains information about the DKIM configuration for an email identity.
", + "base": "An object that contains information about the DKIM authentication status for an email identity.
Amazon SES determines the authentication status by searching for specific records in the DNS configuration for the domain. If you used Easy DKIM to set up DKIM authentication, Amazon SES tries to find three unique CNAME records in the DNS configuration for your domain. If you provided a public key to perform DKIM authentication, Amazon SES tries to find a TXT record that uses the selector that you specified. The value of the TXT record must be a public key that's paired with the private key that you specified in the process of creating the identity.
", "refs": { - "CreateEmailIdentityResponse$DkimAttributes": "An object that contains information about the DKIM attributes for the identity. This object includes the tokens that you use to create the CNAME records that are required to complete the DKIM verification process.
", - "GetEmailIdentityResponse$DkimAttributes": "An object that contains information about the DKIM attributes for the identity. This object includes the tokens that you use to create the CNAME records that are required to complete the DKIM verification process.
" + "CreateEmailIdentityResponse$DkimAttributes": "An object that contains information about the DKIM attributes for the identity.
", + "GetEmailIdentityResponse$DkimAttributes": "An object that contains information about the DKIM attributes for the identity.
" + } + }, + "DkimSigningAttributes": { + "base": "An object that contains information about the tokens used for setting up Bring Your Own DKIM (BYODKIM).
", + "refs": { + "CreateEmailIdentityRequest$DkimSigningAttributes": "If your request includes this object, Amazon SES configures the identity to use Bring Your Own DKIM (BYODKIM) for DKIM authentication purposes, as opposed to the default method, Easy DKIM.
You can only specify this object if the email identity is a domain, as opposed to an address.
", + "PutEmailIdentityDkimSigningAttributesRequest$SigningAttributes": "An object that contains information about the private key and selector that you want to use to configure DKIM for the identity. This object is only required if you want to configure Bring Your Own DKIM (BYODKIM) for the identity.
" + } + }, + "DkimSigningAttributesOrigin": { + "base": null, + "refs": { + "DkimAttributes$SigningAttributesOrigin": "A string that indicates how DKIM was configured for the identity. There are two possible values:
AWS_SES
– Indicates that DKIM was configured for the identity by using Easy DKIM.
EXTERNAL
– Indicates that DKIM was configured for the identity by using Bring Your Own DKIM (BYODKIM).
", + "PutEmailIdentityDkimSigningAttributesRequest$SigningAttributesOrigin": "The method that you want to use to configure DKIM for the identity. There are two possible values:
AWS_SES
– Configure DKIM for the identity by using Easy DKIM.
EXTERNAL
– Configure DKIM for the identity by using Bring Your Own DKIM (BYODKIM).
" } }, "DkimStatus": { - "base": "The DKIM authentication status of the identity. The status can be one of the following:
PENDING
– The DKIM verification process was initiated, and Amazon SES hasn't yet detected the CNAME records in the DNS configuration for the domain.
SUCCESS
– The DKIM authentication process completed successfully.
FAILED
– The DKIM authentication process failed. This can happen when Amazon SES fails to find the required CNAME records in the DNS configuration of the domain.
TEMPORARY_FAILURE
– A temporary issue is preventing Amazon SES from determining the DKIM authentication status of the domain.
NOT_STARTED
– The DKIM verification process hasn't been initiated for the domain.
", + "base": "The DKIM authentication status of the identity. The status can be one of the following:
PENDING
– The verification process was initiated, but Amazon SES hasn't yet detected the DKIM records in the DNS configuration for the domain.
SUCCESS
– The verification process completed successfully.
FAILED
– The verification process failed. This typically occurs when Amazon SES fails to find the DKIM records in the DNS configuration of the domain.
TEMPORARY_FAILURE
– A temporary issue is preventing Amazon SES from determining the DKIM authentication status of the domain.
NOT_STARTED
– The DKIM verification process hasn't been initiated for the domain.
", "refs": { - "DkimAttributes$Status": "Describes whether or not Amazon SES has successfully located the DKIM records in the DNS records for the domain. The status can be one of the following:
PENDING
– Amazon SES hasn't yet detected the DKIM records in the DNS configuration for the domain, but will continue to attempt to locate them.
SUCCESS
– Amazon SES located the DKIM records in the DNS configuration for the domain and determined that they're correct. You can now send DKIM-signed email from the identity.
FAILED
– Amazon SES wasn't able to locate the DKIM records in the DNS settings for the domain, and won't continue to search for them.
TEMPORARY_FAILURE
– A temporary issue occurred, which prevented Amazon SES from determining the DKIM status for the domain.
NOT_STARTED
– Amazon SES hasn't yet started searching for the DKIM records in the DKIM records for the domain.
", + "DkimAttributes$Status": "Describes whether or not Amazon SES has successfully located the DKIM records in the DNS records for the domain. The status can be one of the following:
PENDING
– The verification process was initiated, but Amazon SES hasn't yet detected the DKIM records in the DNS configuration for the domain.
SUCCESS
– The verification process completed successfully.
FAILED
– The verification process failed. This typically occurs when Amazon SES fails to find the DKIM records in the DNS configuration of the domain.
TEMPORARY_FAILURE
– A temporary issue is preventing Amazon SES from determining the DKIM authentication status of the domain.
NOT_STARTED
– The DKIM verification process hasn't been initiated for the domain.
", + "PutEmailIdentityDkimSigningAttributesResponse$DkimStatus": "The DKIM authentication status of the identity. Amazon SES determines the authentication status by searching for specific records in the DNS configuration for your domain. If you used Easy DKIM to set up DKIM authentication, Amazon SES tries to find three unique CNAME records in the DNS configuration for your domain.
If you provided a public key to perform DKIM authentication, Amazon SES tries to find a TXT record that uses the selector that you specified. The value of the TXT record must be a public key that's paired with the private key that you specified in the process of creating the identity.
The status can be one of the following:
PENDING
– The verification process was initiated, but Amazon SES hasn't yet detected the DKIM records in the DNS configuration for the domain.
SUCCESS
– The verification process completed successfully.
FAILED
– The verification process failed. This typically occurs when Amazon SES fails to find the DKIM records in the DNS configuration of the domain.
TEMPORARY_FAILURE
– A temporary issue is preventing Amazon SES from determining the DKIM authentication status of the domain.
NOT_STARTED
– The DKIM verification process hasn't been initiated for the domain.
" } }, "DnsTokenList": { "base": null, "refs": { - "DkimAttributes$Tokens": "A set of unique strings that you use to create a set of CNAME records that you add to the DNS configuration for your domain. When Amazon SES detects these records in the DNS configuration for your domain, the DKIM authentication process is complete. Amazon SES usually detects these records within about 72 hours of adding them to the DNS configuration for your domain.
" + "DkimAttributes$Tokens": "If you used Easy DKIM to configure DKIM authentication for the domain, then this object contains a set of unique strings that you use to create a set of CNAME records that you add to the DNS configuration for your domain. When Amazon SES detects these records in the DNS configuration for your domain, the DKIM authentication process is complete.
If you configured DKIM authentication for the domain by providing your own public-private key pair, then this object contains the selector for the public key.
Regardless of the DKIM authentication method you use, Amazon SES searches for the appropriate records in the DNS configuration of the domain for up to 72 hours.
", + "PutEmailIdentityDkimSigningAttributesResponse$DkimTokens": "If you used Easy DKIM to configure DKIM authentication for the domain, then this object contains a set of unique strings that you use to create a set of CNAME records that you add to the DNS configuration for your domain. When Amazon SES detects these records in the DNS configuration for your domain, the DKIM authentication process is complete.
If you configured DKIM authentication for the domain by providing your own public-private key pair, then this object contains the selector that's associated with your public key.
Regardless of the DKIM authentication method you use, Amazon SES searches for the appropriate records in the DNS configuration of the domain for up to 72 hours.
" } }, "Domain": { @@ -471,15 +488,15 @@ "base": null, "refs": { "CreateDeliverabilityTestReportRequest$FromEmailAddress": "The email address that the predictive inbox placement test email was sent from.
", - "DeleteSuppressedDestinationRequest$EmailAddress": "The suppressed email destination to delete.
", + "DeleteSuppressedDestinationRequest$EmailAddress": "The suppressed email destination to remove from the account suppression list.
", "DeliverabilityTestReport$FromEmailAddress": "The sender address that you specified for the predictive inbox placement test.
", "EmailAddressList$member": null, - "GetSuppressedDestinationRequest$EmailAddress": "Email destination to fetch from the suppression list.
", - "PutSuppressedDestinationRequest$EmailAddress": "Email destination to be suppressed.
", + "GetSuppressedDestinationRequest$EmailAddress": "The email address that's on the account suppression list.
", + "PutSuppressedDestinationRequest$EmailAddress": "The email address that should be added to the suppression list for your account.
", "SendEmailRequest$FromEmailAddress": "The email address that you want to use as the \"From\" address for the email. The address that you specify has to be verified.
", "SendEmailRequest$FeedbackForwardingEmailAddress": "The address that you want bounce and complaint notifications to be sent to.
", - "SuppressedDestination$EmailAddress": "The suppressed email destination.
", - "SuppressedDestinationSummary$EmailAddress": "The suppressed email destination.
" + "SuppressedDestination$EmailAddress": "The email address that is on the suppression list for your account.
", + "SuppressedDestinationSummary$EmailAddress": "The email address that's on the suppression list for your account.
" } }, "EmailAddressList": { @@ -580,7 +597,7 @@ "FeedbackId": { "base": null, "refs": { - "SuppressedDestinationAttributes$FeedbackId": "A unique identifier of the suppression cause.
" + "SuppressedDestinationAttributes$FeedbackId": "A unique identifier that's generated when an email address is added to the suppression list for your account.
" } }, "GeneralEnforcementStatus": { @@ -700,12 +717,12 @@ } }, "GetSuppressedDestinationRequest": { - "base": "A request to get a suppressed email destination.
", + "base": "A request to retrieve information about an email address that's on the suppression list for your account.
", "refs": { } }, "GetSuppressedDestinationResponse": { - "base": "Information about the suppressed email destination.
", + "base": "Information about the suppressed email address.
", "refs": { } }, @@ -719,6 +736,7 @@ "GetEmailIdentityRequest$EmailIdentity": "The email identity that you want to retrieve details for.
", "IdentityInfo$IdentityName": "The address or domain of the identity.
", "PutEmailIdentityDkimAttributesRequest$EmailIdentity": "The email identity that you want to change the DKIM settings for.
", + "PutEmailIdentityDkimSigningAttributesRequest$EmailIdentity": "The email identity that you want to configure DKIM for.
", "PutEmailIdentityFeedbackAttributesRequest$EmailIdentity": "The email identity that you want to configure bounce and complaint feedback forwarding for.
", "PutEmailIdentityMailFromAttributesRequest$EmailIdentity": "The verified email identity that you want to set up the custom MAIL FROM domain for.
" } @@ -756,7 +774,7 @@ } }, "InvalidNextTokenException": { - "base": "The specified request includes an invalid or expired token. Please attempt to get a new token.
", + "base": "The specified request includes an invalid or expired token.
", "refs": { } }, @@ -877,12 +895,12 @@ } }, "ListSuppressedDestinationsRequest": { - "base": "A request to obtain a list of suppressed email destinations.
", + "base": "A request to obtain a list of email destinations that are on the suppression list for your account.
", "refs": { } }, "ListSuppressedDestinationsResponse": { - "base": "A list of suppressed email destinations.
", + "base": "A list of suppressed email addresses.
", "refs": { } }, @@ -1006,8 +1024,8 @@ "ListDomainDeliverabilityCampaignsResponse$NextToken": "A token that’s returned from a previous call to the ListDomainDeliverabilityCampaigns
operation. This token indicates the position of the campaign in the list of campaigns.
", "ListEmailIdentitiesRequest$NextToken": "A token returned from a previous call to ListEmailIdentities
to indicate the position in the list of identities.
", "ListEmailIdentitiesResponse$NextToken": "A token that indicates that there are additional configuration sets to list. To view additional configuration sets, issue another request to ListEmailIdentities
, and pass this token in the NextToken
parameter.
", - "ListSuppressedDestinationsRequest$NextToken": "A token returned from a previous call to ListSuppressedDestinations
to indicate the position in the list of suppressed email destinations.
", - "ListSuppressedDestinationsResponse$NextToken": "A token that indicates that there are additional suppressed destinations to list. To view additional suppressed destinations, issue another request to ListSuppressedDestinations
, and pass this token in the NextToken
parameter.
", + "ListSuppressedDestinationsRequest$NextToken": "A token returned from a previous call to ListSuppressedDestinations
to indicate the position in the list of suppressed email addresses.
", + "ListSuppressedDestinationsResponse$NextToken": "A token that indicates that there are additional email addresses on the suppression list for your account. To view additional suppressed addresses, issue another request to ListSuppressedDestinations
, and pass this token in the NextToken
parameter.
" } }, "OutgoingMessageId": { "base": null, "refs": { - "SendEmailResponse$MessageId": "A unique identifier for the message that is generated when the message is accepted.
It is possible for the Amazon SES API v2 to accept a message without sending it. This can happen when the message that you're trying to send has an attachment contains a virus, or when you send a templated email that contains invalid personalization content, for example.
", - "SuppressedDestinationAttributes$MessageId": "A unique identifier of the message that caused the suppression of the email destination.
" + "SendEmailResponse$MessageId": "A unique identifier for the message that is generated when the message is accepted.
It's possible for Amazon SES to accept a message without sending it. This can happen when the message that you're trying to send has an attachment that contains a virus, or when you send a templated email that contains invalid personalization content, for example.
", + "SuppressedDestinationAttributes$MessageId": "The unique identifier of the email message that caused the email address to be added to the suppression list for your account.
" } }, "OverallVolume": { @@ -1077,6 +1095,12 @@ "PutDedicatedIpInPoolRequest$DestinationPoolName": "The name of the IP pool that you want to add the dedicated IP address to. You have to specify an IP pool that already exists.
" } }, + "PrivateKey": { + "base": null, + "refs": { + "DkimSigningAttributes$DomainSigningPrivateKey": "A private key that's used to generate a DKIM signature.
The private key must use 1024-bit RSA encryption, and must be encoded using base64 encoding.
" + } + }, "PutAccountDedicatedIpWarmupAttributesRequest": { "base": "A request to enable or disable the automatic IP address warm-up feature.
", "refs": { @@ -1138,7 +1162,7 @@ } }, "PutConfigurationSetSuppressionOptionsRequest": { - "base": "A request to change your account's suppression preferences for an specific configuration set.
", + "base": "A request to change the account suppression list preferences for a specific configuration set.
", "refs": { } }, @@ -1197,6 +1221,16 @@ "refs": { } }, + "PutEmailIdentityDkimSigningAttributesRequest": { + "base": "A request to change the DKIM attributes for an email identity.
", + "refs": { + } + }, + "PutEmailIdentityDkimSigningAttributesResponse": { + "base": "If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
", + "refs": { + } + }, "PutEmailIdentityFeedbackAttributesRequest": { "base": "A request to set the attributes that control how bounce and complaint events are processed.
", "refs": { @@ -1218,7 +1252,7 @@ } }, "PutSuppressedDestinationRequest": { - "base": "A request to suppress an email destination.
", + "base": "A request to add an email destination to the suppression list for your account.
", "refs": { } }, @@ -1228,7 +1262,7 @@ } }, "RawMessage": { - "base": "The raw email message.
", + "base": "Represents the raw content of an email message.
", "refs": { "EmailContent$Raw": "The raw email message. The message has to meet the following criteria:
The message has to contain a header and a body, separated by one blank line.
All of the required header fields must be present in the message.
Each part of a multipart MIME message must be formatted properly.
If you include attachments, they must be in a file format that the Amazon SES API v2 supports.
The entire message must be Base64 encoded.
If any of the MIME parts in your message contain content that is outside of the 7-bit ASCII character range, you should encode that content to ensure that recipients' email clients render the message properly.
The length of any single line of text in the message can't exceed 1,000 characters. This restriction is defined in RFC 5321.
", + "EmailContent$Raw": "The raw email message. The message has to meet the following criteria:
The message has to contain a header and a body, separated by one blank line.
All of the required header fields must be present in the message.
Each part of a multipart MIME message must be formatted properly.
Attachments must be in a file format that the Amazon SES API v2 supports.
The entire message must be Base64 encoded.
If any of the MIME parts in your message contain content that is outside of the 7-bit ASCII character range, you should encode that content to ensure that recipients' email clients render the message properly.
The length of any single line of text in the message can't exceed 1,000 characters. This restriction is defined in RFC 5321.
", - "RawMessage$Data": "The raw email message. The message has to meet the following criteria:
The message has to contain a header and a body, separated by one blank line.
All of the required header fields must be present in the message.
Each part of a multipart MIME message must be formatted properly.
Attachments must be in a file format that the Amazon SES API v2 supports.
The entire message must be Base64 encoded.
If any of the MIME parts in your message contain content that is outside of the 7-bit ASCII character range, you should encode that content to ensure that recipients' email clients render the message properly.
The length of any single line of text in the message can't exceed 1,000 characters. This restriction is defined in RFC 5321.
", + "RawMessage$Data": "The raw email message. The message has to meet the following criteria:
The message has to contain a header and a body, separated by one blank line.
All of the required header fields must be present in the message.
Each part of a multipart MIME message must be formatted properly.
Attachments must be in a file format that Amazon SES supports.
The entire message must be Base64 encoded.
If any of the MIME parts in your message contain content that is outside of the 7-bit ASCII character range, you should encode that content to ensure that recipients' email clients render the message properly.
The length of any single line of text in the message can't exceed 1,000 characters. This restriction is defined in RFC 5321.
An object that defines whether or not Amazon SES collects reputation metrics for the emails that you send that use the configuration set.
" } }, + "Selector": { + "base": null, + "refs": { + "DkimSigningAttributes$DomainSigningSelector": "A string that's used to identify a public key in the DNS configuration for a domain.
" + } + }, "SendEmailRequest": { "base": "A request to send an email message.
", "refs": { @@ -1321,15 +1361,15 @@ } }, "SuppressedDestination": { - "base": "An object containing information about the suppressed email destination.
", + "base": "An object that contains information about an email address that is on the suppression list for your account.
", "refs": { - "GetSuppressedDestinationResponse$SuppressedDestination": "An object containing information about the suppressed email destination.
" + "GetSuppressedDestinationResponse$SuppressedDestination": "An object containing information about the suppressed email address.
" } }, "SuppressedDestinationAttributes": { - "base": "An object containing additional attributes related to a suppressed destination.
", + "base": "An object that contains additional attributes that are related an email address that is on the suppression list for your account.
", "refs": { - "SuppressedDestination$Attributes": " Optional
value with information about the sources of the suppression.
", + "SuppressedDestination$Attributes": "An optional value that can contain additional information about the reasons that the address was added to the suppression list for your account.
" } }, "SuppressedDestinationSummaries": { @@ -1339,41 +1379,41 @@ } }, "SuppressedDestinationSummary": { - "base": "A summary for the suppressed email destination.
", + "base": "A summary that describes the suppressed email address.
", "refs": { "SuppressedDestinationSummaries$member": null } }, "SuppressionAttributes": { - "base": "An object that contains information about your account's suppression preferences.
", + "base": "An object that contains information about the email address suppression preferences for your account in the current AWS Region.
", "refs": { - "GetAccountResponse$SuppressionAttributes": "An object that contains information about your account's suppression preferences.
" + "GetAccountResponse$SuppressionAttributes": "An object that contains information about the email address suppression preferences for your account in the current AWS Region.
" } }, "SuppressionListReason": { - "base": "A string representing the cause for suppression for an email destination. It can be one of the following:
COMPLAINT
– Amazon SES will suppress an email address that receive a complaint.
BOUNCE
– Amazon SES will suppress an email address that hard bounces.
", + "base": "The reason that the address was added to the suppression list for your account. The value can be one of the following:
COMPLAINT
– Amazon SES added an email address to the suppression list for your account because a message sent to that address results in a complaint.
BOUNCE
– Amazon SES added an email address to the suppression list for your account because a message sent to that address results in a hard bounce.
", "refs": { - "PutSuppressedDestinationRequest$Reason": "Reason for which the email destination is suppressed.
", - "SuppressedDestination$Reason": "The reason for which the email destination is suppressed.
", - "SuppressedDestinationSummary$Reason": "The reason for which the email destination is suppressed.
", + "PutSuppressedDestinationRequest$Reason": "The factors that should cause the email address to be added to the suppression list for your account.
", + "SuppressedDestination$Reason": "The reason that the address was added to the suppression list for your account.
", + "SuppressedDestinationSummary$Reason": "The reason that the address was added to the suppression list for your account.
", "SuppressionListReasons$member": null } }, "SuppressionListReasons": { "base": null, "refs": { - "ListSuppressedDestinationsRequest$Reasons": "Filters email destinations suppressed by the given reasons.
", - "PutAccountSuppressionAttributesRequest$SuppressedReasons": "A list of reasons to suppress email addresses. The only valid reasons are:
COMPLAINT
– Amazon SES will suppress an email address that receives a complaint.
BOUNCE
– Amazon SES will suppress an email address that hard bounces.
", - "PutConfigurationSetSuppressionOptionsRequest$SuppressedReasons": "A list of reasons to suppress email addresses. The only valid reasons are:
COMPLAINT
– Amazon SES will suppress an email address that receives a complaint.
BOUNCE
– Amazon SES will suppress an email address that hard bounces.
", - "SuppressionAttributes$SuppressedReasons": "A list of reasons to suppress email addresses. The only valid reasons are:
COMPLAINT
– Amazon SES will suppress an email address that receives a complaint.
BOUNCE
– Amazon SES will suppress an email address that hard bounces.
", - "SuppressionOptions$SuppressedReasons": "A list of reasons to suppress email addresses. The only valid reasons are:
COMPLAINT
– Amazon SES will suppress an email address that receives a complaint.
BOUNCE
– Amazon SES will suppress an email address that hard bounces.
", + "ListSuppressedDestinationsRequest$Reasons": "The factors that caused the email address to be added to the suppression list for your account.
", + "PutAccountSuppressionAttributesRequest$SuppressedReasons": "A list that contains the reasons that email addresses will be automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
A list that contains the reasons that email addresses are automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
A list that contains the reasons that email addresses will be automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
A list that contains the reasons that email addresses are automatically added to the suppression list for your account. This list can contain any or all of the following:
COMPLAINT
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a complaint.
BOUNCE
– Amazon SES adds an email address to the suppression list for your account when a message sent to that address results in a hard bounce.
An object that contains information about your account's suppression preferences.
", + "base": "An object that contains information about the suppression list preferences for your account.
", "refs": { "CreateConfigurationSetRequest$SuppressionOptions": null, - "GetConfigurationSetResponse$SuppressionOptions": "An object that contains information about your account's suppression preferences.
" + "GetConfigurationSetResponse$SuppressionOptions": "An object that contains information about the suppression list preferences for your account.
" } }, "Tag": { @@ -1457,10 +1497,10 @@ "GetDomainStatisticsReportRequest$EndDate": "The last day (in Unix time) that you want to obtain domain deliverability metrics for. The EndDate
that you specify has to be less than or equal to 30 days after the StartDate
.
The first day, in Unix time format, that you want to obtain deliverability data for.
", "ListDomainDeliverabilityCampaignsRequest$EndDate": "The last day, in Unix time format, that you want to obtain deliverability data for. This value has to be less than or equal to 30 days after the value of the StartDate
parameter.
Filters email destinations suppressed before the given time.
", - "ListSuppressedDestinationsRequest$EndDate": "Filters email destinations suppressed after the given time.
", - "SuppressedDestination$LastUpdateTime": "The last time the suppressed destination was updated.
", - "SuppressedDestinationSummary$LastUpdateTime": "The last time the suppressed destination was updated.
" + "ListSuppressedDestinationsRequest$StartDate": "Used to filter the list of suppressed email destinations so that it only includes addresses that were added to the list after a specific date. The date that you specify should be in Unix time format.
", + "ListSuppressedDestinationsRequest$EndDate": "Used to filter the list of suppressed email destinations so that it only includes addresses that were added to the list before a specific date. The date that you specify should be in Unix time format.
", + "SuppressedDestination$LastUpdateTime": "The date and time when the suppressed destination was last updated, shown in Unix time format.
", + "SuppressedDestinationSummary$LastUpdateTime": "The date and time when the suppressed destination was last updated, shown in Unix time format.
" } }, "TlsPolicy": { diff --git a/models/apis/ssm/2014-11-06/api-2.json b/models/apis/ssm/2014-11-06/api-2.json index c750495b227..7cfc56e0ddf 100644 --- a/models/apis/ssm/2014-11-06/api-2.json +++ b/models/apis/ssm/2014-11-06/api-2.json @@ -2273,7 +2273,8 @@ "type":"string", "enum":[ "SourceUrl", - "S3FileUrl" + "S3FileUrl", + "AttachmentReference" ] }, "AttachmentsSourceList":{ @@ -2375,7 +2376,8 @@ "CurrentAction", "StartTimeBefore", "StartTimeAfter", - "AutomationType" + "AutomationType", + "TagKey" ] }, "AutomationExecutionFilterList":{ @@ -2565,8 +2567,7 @@ "CloudWatchLogGroupName":{ "type":"string", "max":512, - "min":1, - "pattern":"[\\.\\-_/#A-Za-z0-9]+" + "min":1 }, "CloudWatchOutputConfig":{ "type":"structure", @@ -6338,8 +6339,10 @@ "type":"structure", "members":{ "Comment":{"shape":"Comment"}, + "CloudWatchOutputConfig":{"shape":"CloudWatchOutputConfig"}, "DocumentHash":{"shape":"DocumentHash"}, "DocumentHashType":{"shape":"DocumentHashType"}, + "DocumentVersion":{"shape":"DocumentVersion"}, "NotificationConfig":{"shape":"NotificationConfig"}, "OutputS3BucketName":{"shape":"S3BucketName"}, "OutputS3KeyPrefix":{"shape":"S3KeyPrefix"}, @@ -8460,7 +8463,8 @@ "TargetLocations":{ "shape":"TargetLocations", "box":true - } + }, + "Tags":{"shape":"TagList"} } }, "StartAutomationExecutionResult":{ diff --git a/models/apis/ssm/2014-11-06/docs-2.json b/models/apis/ssm/2014-11-06/docs-2.json index be1b8ff7a94..1caee78b894 100644 --- a/models/apis/ssm/2014-11-06/docs-2.json +++ b/models/apis/ssm/2014-11-06/docs-2.json @@ -5,7 +5,7 @@ "AddTagsToResource": "Adds or overwrites one or more tags for the specified resource. Tags are metadata that you can assign to your documents, managed instances, maintenance windows, Parameter Store parameters, and patch baselines. Tags enable you to categorize your resources in different ways, for example, by purpose, owner, or environment. Each tag consists of a key and an optional value, both of which you define. For example, you could define a set of tags for your account's managed instances that helps you track each instance's owner and stack level. For example: Key=Owner and Value=DbAdmin, SysAdmin, or Dev. Or Key=Stack and Value=Production, Pre-Production, or Test.
Each resource can have a maximum of 50 tags.
We recommend that you devise a set of tag keys that meets your needs for each resource type. Using a consistent set of tag keys makes it easier for you to manage your resources. You can search and filter the resources based on the tags you add. Tags don't have any semantic meaning to Amazon EC2 and are interpreted strictly as a string of characters.
For more information about tags, see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide.
", "CancelCommand": "Attempts to cancel the command specified by the Command ID. There is no guarantee that the command will be terminated and the underlying process stopped.
", "CancelMaintenanceWindowExecution": "Stops a maintenance window execution that is already in progress and cancels any tasks in the window that have not already starting running. (Tasks already in progress will continue to completion.)
", - "CreateActivation": "Registers your on-premises server or virtual machine with Amazon EC2 so that you can manage these resources using Run Command. An on-premises server or virtual machine that has been registered with EC2 is called a managed instance. For more information about activations, see Setting Up AWS Systems Manager for Hybrid Environments.
", + "CreateActivation": "Generates an activation code and activation ID you can use to register your on-premises server or virtual machine (VM) with Systems Manager. Registering these machines with Systems Manager makes it possible to manage them using Systems Manager capabilities. You use the activation code and ID when installing SSM Agent on machines in your hybrid environment. For more information about requirements for managing on-premises instances and VMs using Systems Manager, see Setting Up AWS Systems Manager for Hybrid Environments in the AWS Systems Manager User Guide.
On-premises servers or VMs that are registered with Systems Manager and Amazon EC2 instances that you manage with Systems Manager are all called managed instances.
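A minimal sketch of the hybrid registration flow described above, assuming the generated ssm client and the v0.x Request/Send pattern; the role and instance names are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ssm"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := ssm.New(cfg)

	// Create an activation for on-premises machines. The IAM role must trust
	// the ssm.amazonaws.com service principal (see the IamRole notes later in
	// this file).
	req := client.CreateActivationRequest(&ssm.CreateActivationInput{
		DefaultInstanceName: aws.String("HybridWebServer"),
		Description:         aws.String("Data-center web servers"),
		IamRole:             aws.String("SSMServiceRole"),
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// The response carries the activation code and ID that you pass to
	// SSM Agent when installing it on the on-premises machine.
	log.Println(resp)
}
```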
Associates the specified Systems Manager document with the specified instances or targets.
When you associate a document with one or more instances using instance IDs or tags, SSM Agent running on the instance processes the document and configures the instance as specified.
If you associate a document with an instance that already has an associated document, the system returns the AssociationAlreadyExists exception.
", "CreateAssociationBatch": "Associates the specified Systems Manager document with the specified instances or targets.
When you associate a document with one or more instances using instance IDs or tags, SSM Agent running on the instance processes the document and configures the instance as specified.
If you associate a document with an instance that already has an associated document, the system returns the AssociationAlreadyExists exception.
", "CreateDocument": "Creates a Systems Manager document.
After you create a document, you can use CreateAssociation to associate it with one or more running instances.
", @@ -83,13 +83,13 @@ "GetServiceSetting": " ServiceSetting
is an account-level setting for an AWS service. This setting defines how a user interacts with or uses a service or a feature of a service. For example, if an AWS service charges money to the account based on feature or service usage, then the AWS service team might create a default setting of \"false\". This means the user can't use this feature unless they change the setting to \"true\" and intentionally opt in for a paid feature. Services map a SettingId object to a setting value. AWS services teams define the default value for a SettingId. You can't create a new SettingId, but you can overwrite the default value if you have the ssm:UpdateServiceSetting permission for the setting. Use the UpdateServiceSetting API action to change the default setting. Or use the ResetServiceSetting to change the value back to the original value defined by the AWS service team. Query the current service setting for the account.
", "LabelParameterVersion": "A parameter label is a user-defined alias to help you manage different versions of a parameter. When you modify a parameter, Systems Manager automatically saves a new version and increments the version number by one. A label can help you remember the purpose of a parameter when there are multiple versions.
Parameter labels have the following requirements and restrictions.
A version of a parameter can have a maximum of 10 labels.
You can't attach the same label to different versions of the same parameter. For example, if version 1 has the label Production, then you can't attach Production to version 2.
You can move a label from one version of a parameter to another.
You can't create a label when you create a new parameter. You must attach a label to a specific version of a parameter.
You can't delete a parameter label. If you no longer want to use a parameter label, then you must move it to a different version of a parameter.
A label can have a maximum of 100 characters.
Labels can contain letters (case sensitive), numbers, periods (.), hyphens (-), or underscores (_).
Labels can't begin with a number, \"aws,\" or \"ssm\" (not case sensitive). If a label fails to meet these requirements, then the label is not associated with a parameter and the system displays it in the list of InvalidLabels.
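A minimal sketch of attaching a label that satisfies the rules above, assuming the generated ssm client and the v0.x Request/Send pattern; the parameter name and label are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ssm"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := ssm.New(cfg)

	// Attach the label "Production" to version 2 of a parameter. Labels that
	// violate the rules above are returned in InvalidLabels instead of being
	// applied.
	req := client.LabelParameterVersionRequest(&ssm.LabelParameterVersionInput{
		Name:             aws.String("/app/db/connection-string"),
		ParameterVersion: aws.Int64(2),
		Labels:           []string{"Production"},
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	log.Println(resp)
}
```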
Retrieves all versions of an association for a specific association ID.
", - "ListAssociations": "Lists the associations for the specified Systems Manager document or instance.
", + "ListAssociations": "Returns all State Manager associations in the current AWS account and Region. You can limit the results to a specific State Manager association document or instance by specifying a filter.
", "ListCommandInvocations": "An invocation is copy of a command sent to a specific instance. A command can apply to one or more instances. A command invocation applies to one instance. For example, if a user runs SendCommand against three instances, then a command invocation is created for each requested instance ID. ListCommandInvocations provide status about command execution.
", "ListCommands": "Lists the commands requested by users of the AWS account.
", "ListComplianceItems": "For a specified resource ID, this API action returns a list of compliance statuses for different resource types. Currently, you can only specify one resource ID per call. List results depend on the criteria specified in the filter.
", "ListComplianceSummaries": "Returns a summary count of compliant and non-compliant resources for a compliance type. For example, this call can return State Manager associations, patches, or custom compliance types according to the filter criteria that you specify.
", "ListDocumentVersions": "List all versions for a document.
", - "ListDocuments": "Describes one or more of your Systems Manager documents.
", + "ListDocuments": "Returns all Systems Manager (SSM) documents in the current AWS account and Region. You can limit the results of this request by using a filter.
", "ListInventoryEntries": "A list of inventory items returned by the request.
", "ListResourceComplianceSummaries": "Returns a resource-level summary count. The summary includes information about compliant and non-compliant statuses and detailed compliance-item severity counts, according to the filter criteria you specify.
", "ListResourceDataSync": "Lists your resource data sync configurations. Includes information about the last time a sync attempted to start, the last sync status, and the last time a sync successfully completed.
The number of sync configurations might be too large to return using a single call to ListResourceDataSync
. You can limit the number of sync configurations returned by using the MaxResults
parameter. To determine whether there are more sync configurations to list, check the value of NextToken
in the output. If there are more sync configurations to list, you can request them by specifying the NextToken
returned in the call to the parameter of a subsequent call.
A user defined description of the activation.
", - "CreateActivationRequest$Description": "A user-defined description of the resource that you want to register with Amazon EC2.
Do not enter personally identifiable information in this field.
A user-defined description of the resource that you want to register with Systems Manager.
Do not enter personally identifiable information in this field.
The value of a key-value pair that identifies the location of an attachment to a document. The format is the URL of the location of a document attachment, such as the URL of an Amazon S3 bucket.
" + "AttachmentsSource$Values": "The value of a key-value pair that identifies the location of an attachment to a document. The format for Value depends on the type of key you specify.
For the key SourceUrl, the value is an S3 bucket location. For example:
\"Values\": [ \"s3://my-bucket/my-folder\" ]
For the key S3FileUrl, the value is a file in an S3 bucket. For example:
\"Values\": [ \"s3://my-bucket/my-folder/my-file.py\" ]
For the key AttachmentReference, the value is constructed from the name of another SSM document in your account, a version number of that document, and a file attached to that document version that you want to reuse. For example:
\"Values\": [ \"MyOtherDocument/3/my-other-file.py\" ]
However, if the SSM document is shared with you from another account, the full SSM document ARN must be specified instead of the document name only. For example:
\"Values\": [ \"arn:aws:ssm:us-east-2:111122223333:document/OtherAccountDocument/3/their-file.py\" ]
The execution status of the Automation.
", - "AutomationExecutionMetadata$AutomationExecutionStatus": "The status of the execution. Valid values include: Running, Succeeded, Failed, Timed out, or Cancelled.
", - "StepExecution$StepStatus": "The execution status for this step. Valid values include: Pending, InProgress, Success, Cancelled, Failed, and TimedOut.
" + "AutomationExecutionMetadata$AutomationExecutionStatus": "The status of the execution.
", + "StepExecution$StepStatus": "The execution status for this step.
" } }, "AutomationParameterKey": { @@ -916,6 +916,7 @@ "Command$CloudWatchOutputConfig": "CloudWatch Logs information where you want Systems Manager to send the command output.
", "CommandInvocation$CloudWatchOutputConfig": "CloudWatch Logs information where you want Systems Manager to send the command output.
", "GetCommandInvocationResult$CloudWatchOutputConfig": "CloudWatch Logs information where Systems Manager sent the command output.
", + "MaintenanceWindowRunCommandParameters$CloudWatchOutputConfig": null, "SendCommandRequest$CloudWatchOutputConfig": "Enables Systems Manager to send Run Command output to Amazon CloudWatch Logs.
" } }, @@ -1393,7 +1394,7 @@ "AssociationVersionInfo$CreatedDate": "The date the association version was created.
", "AutomationExecution$ExecutionStartTime": "The time the execution started.
", "AutomationExecution$ExecutionEndTime": "The time the execution finished.
", - "AutomationExecutionMetadata$ExecutionStartTime": "The time the execution started.>
", + "AutomationExecutionMetadata$ExecutionStartTime": "The time the execution started.
", "AutomationExecutionMetadata$ExecutionEndTime": "The time the execution finished. This is not populated if the execution is still in progress.
", "Command$ExpiresAfter": "If this time is reached and the command has not already started running, it will not run. Calculated based on the ExpiresAfter user input provided as part of the SendCommand API.
", "Command$RequestedDateTime": "The date and time the command was requested.
", @@ -1420,7 +1421,7 @@ "InstanceInformation$LastSuccessfulAssociationExecutionDate": "The last date the association was successfully run.
", "InstancePatchState$OperationStartTime": "The time the most recent patching operation was started on the instance.
", "InstancePatchState$OperationEndTime": "The time the most recent patching operation completed on the instance.
", - "InstancePatchState$LastNoRebootInstallOperationTime": "Reserved for future use.
", + "InstancePatchState$LastNoRebootInstallOperationTime": "The time of the last attempt to patch the instance with NoReboot
specified as the reboot option.
The time the execution started.
", "MaintenanceWindowExecution$EndTime": "The time the execution finished.
", "MaintenanceWindowExecutionTaskIdentity$StartTime": "The time the task execution started.
", @@ -1456,7 +1457,7 @@ "base": null, "refs": { "Activation$DefaultInstanceName": "A name for the managed instance when it is created.
", - "CreateActivationRequest$DefaultInstanceName": "The name of the registered, managed instance as it will appear in the Amazon EC2 console or when you use the AWS command line tools to list EC2 resources.
Do not enter personally identifiable information in this field.
The name of the registered, managed instance as it will appear in the Systems Manager console or when you use the AWS command line tools to list Systems Manager resources.
Do not enter personally identifiable information in this field.
Specify the document format for the request. The document format can be either JSON or YAML. JSON is the default format.
", + "CreateDocumentRequest$DocumentFormat": "Specify the document format for the request. The document format can be JSON, YAML, or TEXT. JSON is the default format.
", "DocumentDescription$DocumentFormat": "The document format, either JSON or YAML.
", "DocumentIdentifier$DocumentFormat": "The document format, either JSON or YAML.
", "DocumentVersionInfo$DocumentFormat": "The document format, either JSON or YAML.
", @@ -2212,7 +2213,7 @@ "DocumentType": { "base": null, "refs": { - "CreateDocumentRequest$DocumentType": "The type of document to create. Valid document types include: Command
, Policy
, Automation
, Session
, and Package
.
The type of document to create.
", "DocumentDescription$DocumentType": "The type of document.
", "DocumentIdentifier$DocumentType": "The document type.
", "GetDocumentResult$DocumentType": "The document type.
" @@ -2244,6 +2245,7 @@ "GetDocumentResult$DocumentVersion": "The document version.
", "InstanceAssociationStatusInfo$DocumentVersion": "The association document versions.
", "MaintenanceWindowAutomationParameters$DocumentVersion": "The version of an Automation document to use during task execution.
", + "MaintenanceWindowRunCommandParameters$DocumentVersion": "The SSM document version to use in the request. You can specify $DEFAULT, $LATEST, or a specific version number. If you run commands by using the AWS CLI, then you must escape the first two options by using a backslash. If you specify a version number, then you don't need to use the backslash. For example:
--document-version \"\\$DEFAULT\"
--document-version \"\\$LATEST\"
--document-version \"3\"
", "SendCommandRequest$DocumentVersion": "The SSM document version to use in the request. You can specify $DEFAULT, $LATEST, or a specific version number. If you run commands by using the AWS CLI, then you must escape the first two options by using a backslash. If you specify a version number, then you don't need to use the backslash. For example:
--document-version \"\\$DEFAULT\"
--document-version \"\\$LATEST\"
--document-version \"3\"
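In Go the $DEFAULT/$LATEST values are passed literally; the backslash escaping above applies only to the AWS CLI. A minimal sketch of SendCommand, assuming the generated ssm client; the CloudWatchOutputEnabled field name is assumed:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ssm"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := ssm.New(cfg)

	req := client.SendCommandRequest(&ssm.SendCommandInput{
		DocumentName:    aws.String("AWS-RunShellScript"),
		DocumentVersion: aws.String("$LATEST"), // no escaping needed outside the CLI
		InstanceIds:     []string{"i-0123456789abcdef0"},
		Parameters:      map[string][]string{"commands": {"uptime"}},
		// Mirror the new MaintenanceWindowRunCommandParameters support by
		// sending Run Command output to CloudWatch Logs.
		CloudWatchOutputConfig: &ssm.CloudWatchOutputConfig{
			CloudWatchOutputEnabled: aws.Bool(true), // field name assumed
		},
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```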
", "StartAutomationExecutionRequest$DocumentVersion": "The version of the Automation document to use for this execution.
", "UpdateAssociationRequest$DocumentVersion": "The document version you want update for the association.
", @@ -2289,7 +2291,7 @@ } }, "DoesNotExistException": { - "base": "Error returned when the ID specified for a resource, such as a maintenance window or Patch baseline, doesn't exist.
For information about resource limits in Systems Manager, see AWS Systems Manager Limits.
", + "base": "Error returned when the ID specified for a resource, such as a maintenance window or Patch baseline, doesn't exist.
For information about resource quotas in Systems Manager, see Systems Manager Service Quotas in the AWS General Reference.
", "refs": { } }, @@ -2658,7 +2660,7 @@ "base": null, "refs": { "Activation$IamRole": "The Amazon Identity and Access Management (IAM) role to assign to the managed instance.
", - "CreateActivationRequest$IamRole": "The Amazon Identity and Access Management (IAM) role that you want to assign to the managed instance.
", + "CreateActivationRequest$IamRole": "The Amazon Identity and Access Management (IAM) role that you want to assign to the managed instance. This IAM role must provide AssumeRole permissions for the Systems Manager service principal ssm.amazonaws.com
. For more information, see Create an IAM Service Role for a Hybrid Environment in the AWS Systems Manager User Guide.
The Amazon Identity and Access Management (IAM) role assigned to the on-premises Systems Manager managed instances. This call does not return the IAM role for Amazon EC2 instances.
", "UpdateManagedInstanceRoleRequest$IamRole": "The IAM role you want to assign or change.
" } @@ -2757,7 +2759,7 @@ "AssociationDescription$InstanceId": "The ID of the instance.
", "CommandInvocation$InstanceId": "The instance ID in which this invocation was requested.
", "CreateAssociationBatchRequestEntry$InstanceId": "The ID of the instance.
", - "CreateAssociationRequest$InstanceId": "The instance ID.
InstanceId has been deprecated. To specify an instance ID for an association, use the Targets parameter. If you use the parameter InstanceId, you cannot use the parameters AssociationName, DocumentVersion, MaxErrors, MaxConcurrency, OutputLocation, or ScheduleExpression. To use these parameters, you must use the Targets parameter.
The instance ID. InstanceId has been deprecated. To specify an instance ID for an association, use the Targets parameter. Requests that include the parameter InstanceID with SSM documents that use schema version 2.0 or later will fail. In addition, if you use the parameter InstanceId, you cannot use the parameters AssociationName, DocumentVersion, MaxErrors, MaxConcurrency, OutputLocation, or ScheduleExpression. To use these parameters, you must use the Targets parameter.
The ID of the instance.
", "DescribeAssociationRequest$InstanceId": "The instance ID.
", "DescribeEffectiveInstanceAssociationsRequest$InstanceId": "The instance ID for which you want to view all associations.
", @@ -2914,7 +2916,7 @@ "InstancesCount": { "base": null, "refs": { - "DescribePatchGroupStateResult$InstancesWithInstalledPendingRebootPatches": "Reserved for future use.
", + "DescribePatchGroupStateResult$InstancesWithInstalledPendingRebootPatches": "The number of instances with patches installed by Patch Manager that have not been rebooted after the patch installation. The status of these instances is NON_COMPLIANT.
", "DescribePatchGroupStateResult$InstancesWithInstalledRejectedPatches": "The number of instances with patches installed that are specified in a RejectedPatches list. Patches with a status of INSTALLED_REJECTED were typically installed before they were added to a RejectedPatches list.
If ALLOW_AS_DEPENDENCY is the specified option for RejectedPatchesAction, the value of InstancesWithInstalledRejectedPatches will always be 0 (zero).
The type of filter. Valid values include the following: \"Equal\"|\"NotEqual\"|\"BeginWith\"|\"LessThan\"|\"GreaterThan\"
" + "InventoryFilter$Type": "The type of filter.
" } }, "InventoryResultEntity": { @@ -4723,7 +4725,7 @@ } }, "OpsItemLimitExceededException": { - "base": "The request caused OpsItems to exceed one or more limits. For information about OpsItem limits, see What are the resource limits for OpsCenter?.
", + "base": "The request caused OpsItems to exceed one or more quotas. For information about OpsItem quotas, see What are the resource limits for OpsCenter?.
", "refs": { } }, @@ -5375,7 +5377,7 @@ "PatchInstalledPendingRebootCount": { "base": null, "refs": { - "InstancePatchState$InstalledPendingRebootCount": "Reserved for future use.
" + "InstancePatchState$InstalledPendingRebootCount": "The number of patches installed by Patch Manager since the last time the instance was rebooted.
" } }, "PatchInstalledRejectedCount": { @@ -5670,7 +5672,7 @@ "RebootOption": { "base": null, "refs": { - "InstancePatchState$RebootOption": "Reserved for future use.
" + "InstancePatchState$RebootOption": "Indicates the reboot option specified in the patch baseline.
Reboot options apply to Install
operations only. Reboots are not attempted for Patch Manager Scan
operations.
RebootIfNeeded: Patch Manager tries to reboot the instance if it installed any patches, or if any patches are detected with a status of InstalledPendingReboot
.
NoReboot: Patch Manager attempts to install missing packages without trying to reboot the system. Patches installed with this option are assigned a status of InstalledPendingReboot
. These patches might not be in effect until a reboot is performed.
Error returned when the caller has exceeded the default resource limits. For example, too many maintenance windows or patch baselines have been created.
For information about resource limits in Systems Manager, see AWS Systems Manager Limits.
", + "base": "Error returned when the caller has exceeded the default resource quotas. For example, too many maintenance windows or patch baselines have been created.
For information about resource quotas in Systems Manager, see Systems Manager Service Quotas in the AWS General Reference.
", "refs": { } }, @@ -6496,8 +6498,8 @@ "StreamUrl": { "base": null, "refs": { - "ResumeSessionResponse$StreamUrl": "A URL back to SSM Agent on the instance that the Session Manager client uses to send commands and receive output from the instance. Format: wss://ssmmessages.region.amazonaws.com/v1/data-channel/session-id?stream=(input|output)
. region represents the Region identifier for an AWS Region supported by AWS Systems Manager, such as us-east-2 for the US East (Ohio) Region. For a list of supported region values, see the Region column in the AWS Systems Manager table of regions and endpoints in the AWS General Reference. session-id represents the ID of a Session Manager session, such as 1a2b3c4dEXAMPLE.
A URL back to SSM Agent on the instance that the Session Manager client uses to send commands and receive output from the instance. Format: wss://ssmmessages.region.amazonaws.com/v1/data-channel/session-id?stream=(input|output) region represents the Region identifier for an AWS Region supported by AWS Systems Manager, such as us-east-2 for the US East (Ohio) Region. For a list of supported region values, see the Region column in the AWS Systems Manager table of regions and endpoints in the AWS General Reference. session-id represents the ID of a Session Manager session, such as 1a2b3c4dEXAMPLE.
A URL back to SSM Agent on the instance that the Session Manager client uses to send commands and receive output from the instance. Format: wss://ssmmessages.region.amazonaws.com/v1/data-channel/session-id?stream=(input|output). region represents the Region identifier for an AWS Region supported by AWS Systems Manager, such as us-east-2 for the US East (Ohio) Region. For a list of supported region values, see the Region column in Systems Manager Service Endpoints in the AWS General Reference. session-id represents the ID of a Session Manager session, such as 1a2b3c4dEXAMPLE.
A URL back to SSM Agent on the instance that the Session Manager client uses to send commands and receive output from the instance. Format: wss://ssmmessages.region.amazonaws.com/v1/data-channel/session-id?stream=(input|output) region represents the Region identifier for an AWS Region supported by AWS Systems Manager, such as us-east-2 for the US East (Ohio) Region. For a list of supported region values, see the Region column in Systems Manager Service Endpoints in the AWS General Reference. session-id represents the ID of a Session Manager session, such as 1a2b3c4dEXAMPLE.
The tags, or metadata, that have been applied to the document.
", "DocumentIdentifier$Tags": "The tags, or metadata, that have been applied to the document.
", "ListTagsForResourceResult$TagList": "A list of tags.
", - "PutParameterRequest$Tags": "Optional metadata that you assign to a resource. Tags enable you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a Systems Manager parameter to identify the type of resource to which it applies, the environment, or the type of configuration data referenced by the parameter. In this case, you could specify the following key name/value pairs:
Key=Resource,Value=S3bucket Key=OS,Value=Windows Key=ParameterType,Value=LicenseKey To add tags to an existing Systems Manager parameter, use the AddTagsToResource action.
Optional metadata that you assign to a resource. Tags enable you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag a Systems Manager parameter to identify the type of resource to which it applies, the environment, or the type of configuration data referenced by the parameter. In this case, you could specify the following key name/value pairs: Key=Resource,Value=S3bucket Key=OS,Value=Windows Key=ParameterType,Value=LicenseKey To add tags to an existing Systems Manager parameter, use the AddTagsToResource action.
Optional metadata that you assign to a resource. You can specify a maximum of five tags for an automation. Tags enable you to categorize a resource in different ways, such as by purpose, owner, or environment. For example, you might want to tag an automation to identify an environment or operating system. In this case, you could specify the following key name/value pairs: Key=environment,Value=test Key=OS,Value=Windows To add tags to an existing automation, use the AddTagsToResource action.
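The api-2.json change above also adds Tags to StartAutomationExecutionRequest. A minimal sketch, assuming the generated ssm client; the document name and tag values are placeholders:

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/ssm"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	client := ssm.New(cfg)

	// Start an automation with up to five tags attached at start time.
	req := client.StartAutomationExecutionRequest(&ssm.StartAutomationExecutionInput{
		DocumentName: aws.String("MyAutomationDoc"),
		Tags: []ssm.Tag{
			{Key: aws.String("environment"), Value: aws.String("test")},
			{Key: aws.String("OS"), Value: aws.String("Windows")},
		},
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```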
Operations and objects for transcribing streaming speech to text.
", + "operations": { + "StartStreamTranscription": "Starts a bidirectional HTTP2 stream where audio is streamed to Amazon Transcribe and the transcription results are streamed to your application.
The following are encoded as HTTP2 headers:
x-amzn-transcribe-language-code
x-amzn-transcribe-media-encoding
x-amzn-transcribe-sample-rate
x-amzn-transcribe-session-id
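For reference, a self-contained sketch of how the request parameters map onto those HTTP2 headers. This builds the headers only; the real client also signs the request and manages the bidirectional event stream:

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Each StartStreamTranscription parameter travels as an HTTP2 header.
	h := http.Header{}
	h.Set("x-amzn-transcribe-language-code", "en-US")
	h.Set("x-amzn-transcribe-media-encoding", "pcm")
	h.Set("x-amzn-transcribe-sample-rate", "16000")
	// Optional: the service generates a session ID if this is omitted.
	h.Set("x-amzn-transcribe-session-id", "my-session-id")

	for key, values := range h {
		fmt.Println(key, values)
	}
}
```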
A list of possible transcriptions for the audio.
", + "refs": { + "AlternativeList$member": null + } + }, + "AlternativeList": { + "base": null, + "refs": { + "Result$Alternatives": "A list of possible transcriptions for the audio. Each alternative typically contains one item
that contains the result of the transcription.
An audio blob that contains the next part of the audio that you want to transcribe.
" + } + }, + "AudioEvent": { + "base": "Provides a wrapper for the audio chunks that you are sending.
", + "refs": { + "AudioStream$AudioEvent": "A blob of audio from your application. You audio stream consists of one or more audio events.
" + } + }, + "AudioStream": { + "base": "Represents the audio stream from your application to Amazon Transcribe.
", + "refs": { + "StartStreamTranscriptionRequest$AudioStream": "PCM-encoded stream of audio blobs. The audio stream is encoded as an HTTP2 data frame.
" + } + }, + "BadRequestException": { + "base": "One or more arguments to the StartStreamTranscription
operation was invalid. For example, MediaEncoding
was not set to pcm
or LanguageCode
was not set to a valid code. Check the parameters and try your request again.
A client error occurred when the stream was created. Check the parameters of the request and try your request again.
" + } + }, + "Boolean": { + "base": null, + "refs": { + "Result$IsPartial": "Amazon Transcribe divides the incoming audio stream into segments at natural points in the audio. Transcription results are returned based on these segments.
The IsPartial
field is true
to indicate that Amazon Transcribe has additional transcription data to send, false
to indicate that this is the last transcription result for the segment.
A new stream started with the same session ID. The current stream has been terminated.
", + "refs": { + "TranscriptResultStream$ConflictException": "A new stream started with the same session ID. The current stream has been terminated.
" + } + }, + "Double": { + "base": null, + "refs": { + "Item$StartTime": "The offset from the beginning of the audio stream to the beginning of the audio that resulted in the item.
", + "Item$EndTime": "The offset from the beginning of the audio stream to the end of the audio that resulted in the item.
", + "Result$StartTime": "The offset in seconds from the beginning of the audio stream to the beginning of the result.
", + "Result$EndTime": "The offset in seconds from the beginning of the audio stream to the end of the result.
" + } + }, + "InternalFailureException": { + "base": "A problem occurred while processing the audio. Amazon Transcribe terminated processing. Try your request again.
", + "refs": { + "TranscriptResultStream$InternalFailureException": "A problem occurred while processing the audio. Amazon Transcribe terminated processing.
" + } + }, + "Item": { + "base": "A word or phrase transcribed from the input audio.
", + "refs": { + "ItemList$member": null + } + }, + "ItemList": { + "base": null, + "refs": { + "Alternative$Items": "One or more alternative interpretations of the input audio.
" + } + }, + "ItemType": { + "base": null, + "refs": { + "Item$Type": "The type of the item. PRONUNCIATION
indicates that the item is a word that was recognized in the input audio. PUNCTUATION
indicates that the item was interpreted as a pause in the input audio.
Indicates the source language used in the input audio stream.
", + "StartStreamTranscriptionResponse$LanguageCode": "The language code for the input audio stream.
" + } + }, + "LimitExceededException": { + "base": "You have exceeded the maximum number of concurrent transcription streams, are starting transcription streams too quickly, or the maximum audio length of 4 hours. Wait until a stream has finished processing, or break your audio stream into smaller chunks and try your request again.
", + "refs": { + "TranscriptResultStream$LimitExceededException": "Your client has exceeded one of the Amazon Transcribe limits, typically the limit on audio length. Break your audio stream into smaller chunks and try your request again.
" + } + }, + "MediaEncoding": { + "base": null, + "refs": { + "StartStreamTranscriptionRequest$MediaEncoding": "The encoding used for the input audio.
", + "StartStreamTranscriptionResponse$MediaEncoding": "The encoding used for the input audio stream.
" + } + }, + "MediaSampleRateHertz": { + "base": null, + "refs": { + "StartStreamTranscriptionRequest$MediaSampleRateHertz": "The sample rate, in Hertz, of the input audio. We suggest that you use 8000 Hz for low quality audio and 16000 Hz for high quality audio.
", + "StartStreamTranscriptionResponse$MediaSampleRateHertz": "The sample rate for the input audio stream. Use 8000 Hz for low quality audio and 16000 Hz for high quality audio.
" + } + }, + "RequestId": { + "base": null, + "refs": { + "StartStreamTranscriptionResponse$RequestId": "An identifier for the streaming transcription.
" + } + }, + "Result": { + "base": "The result of transcribing a portion of the input audio stream.
", + "refs": { + "ResultList$member": null + } + }, + "ResultList": { + "base": null, + "refs": { + "Transcript$Results": "Result objects that contain the results of transcribing a portion of the input audio stream. The array can be empty.
" + } + }, + "SessionId": { + "base": null, + "refs": { + "StartStreamTranscriptionRequest$SessionId": "A identifier for the transcription session. Use this parameter when you want to retry a session. If you don't provide a session ID, Amazon Transcribe will generate one for you and return it in the response.
", + "StartStreamTranscriptionResponse$SessionId": "An identifier for a specific transcription session.
" + } + }, + "StartStreamTranscriptionRequest": { + "base": null, + "refs": { + } + }, + "StartStreamTranscriptionResponse": { + "base": null, + "refs": { + } + }, + "String": { + "base": null, + "refs": { + "Alternative$Transcript": "The text that was transcribed from the audio.
", + "BadRequestException$Message": null, + "ConflictException$Message": null, + "InternalFailureException$Message": null, + "Item$Content": "The word or punctuation that was recognized in the input audio.
", + "LimitExceededException$Message": null, + "Result$ResultId": "A unique identifier for the result.
" + } + }, + "Transcript": { + "base": "The transcription in a TranscriptEvent.
", + "refs": { + "TranscriptEvent$Transcript": "The transcription of the audio stream. The transcription is composed of all of the items in the results list.
" + } + }, + "TranscriptEvent": { + "base": "Represents a set of transcription results from the server to the client. It contains one or more segments of the transcription.
", + "refs": { + "TranscriptResultStream$TranscriptEvent": "A portion of the transcription of the audio stream. Events are sent periodically from Amazon Transcribe to your application. The event can be a partial transcription of a section of the audio stream, or it can be the entire transcription of that portion of the audio stream.
" + } + }, + "TranscriptResultStream": { + "base": "Represents the transcription result stream from Amazon Transcribe to your application.
", + "refs": { + "StartStreamTranscriptionResponse$TranscriptResultStream": "Represents the stream of transcription events from Amazon Transcribe to your application.
" + } + }, + "VocabularyName": { + "base": null, + "refs": { + "StartStreamTranscriptionRequest$VocabularyName": "The name of the vocabulary to use when processing the transcription job.
", + "StartStreamTranscriptionResponse$VocabularyName": "The name of the vocabulary used when processing the job.
" + } + } + } +} diff --git a/models/apis/transcribe-streaming/2017-10-26/examples-1.json b/models/apis/transcribe-streaming/2017-10-26/examples-1.json new file mode 100755 index 00000000000..0ea7e3b0bbe --- /dev/null +++ b/models/apis/transcribe-streaming/2017-10-26/examples-1.json @@ -0,0 +1,5 @@ +{ + "version": "1.0", + "examples": { + } +} diff --git a/models/apis/transcribe-streaming/2017-10-26/paginators-1.json b/models/apis/transcribe-streaming/2017-10-26/paginators-1.json new file mode 100755 index 00000000000..5677bd8e4a2 --- /dev/null +++ b/models/apis/transcribe-streaming/2017-10-26/paginators-1.json @@ -0,0 +1,4 @@ +{ + "pagination": { + } +} diff --git a/models/apis/transcribe/2017-10-26/api-2.json b/models/apis/transcribe/2017-10-26/api-2.json index 2f44a843da5..6e44f2d5f35 100644 --- a/models/apis/transcribe/2017-10-26/api-2.json +++ b/models/apis/transcribe/2017-10-26/api-2.json @@ -28,6 +28,21 @@ {"shape":"ConflictException"} ] }, + "CreateVocabularyFilter":{ + "name":"CreateVocabularyFilter", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"CreateVocabularyFilterRequest"}, + "output":{"shape":"CreateVocabularyFilterResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"}, + {"shape":"ConflictException"} + ] + }, "DeleteTranscriptionJob":{ "name":"DeleteTranscriptionJob", "http":{ @@ -55,6 +70,20 @@ {"shape":"InternalFailureException"} ] }, + "DeleteVocabularyFilter":{ + "name":"DeleteVocabularyFilter", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DeleteVocabularyFilterRequest"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"LimitExceededException"}, + {"shape":"BadRequestException"}, + {"shape":"InternalFailureException"} + ] + }, "GetTranscriptionJob":{ "name":"GetTranscriptionJob", "http":{ @@ -85,6 +114,21 @@ {"shape":"BadRequestException"} ] }, + "GetVocabularyFilter":{ + "name":"GetVocabularyFilter", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetVocabularyFilterRequest"}, + "output":{"shape":"GetVocabularyFilterResponse"}, + "errors":[ + {"shape":"NotFoundException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"}, + {"shape":"BadRequestException"} + ] + }, "ListTranscriptionJobs":{ "name":"ListTranscriptionJobs", "http":{ @@ -113,6 +157,20 @@ {"shape":"InternalFailureException"} ] }, + "ListVocabularyFilters":{ + "name":"ListVocabularyFilters", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListVocabularyFiltersRequest"}, + "output":{"shape":"ListVocabularyFiltersResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"} + ] + }, "StartTranscriptionJob":{ "name":"StartTranscriptionJob", "http":{ @@ -143,6 +201,21 @@ {"shape":"NotFoundException"}, {"shape":"ConflictException"} ] + }, + "UpdateVocabularyFilter":{ + "name":"UpdateVocabularyFilter", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateVocabularyFilterRequest"}, + "output":{"shape":"UpdateVocabularyFilterResponse"}, + "errors":[ + {"shape":"BadRequestException"}, + {"shape":"LimitExceededException"}, + {"shape":"InternalFailureException"}, + {"shape":"NotFoundException"} + ] } }, "shapes":{ @@ -161,6 +234,27 @@ }, "exception":true }, + "CreateVocabularyFilterRequest":{ + "type":"structure", + "required":[ + "VocabularyFilterName", + "LanguageCode" + ], + "members":{ + 
"VocabularyFilterName":{"shape":"VocabularyFilterName"}, + "LanguageCode":{"shape":"LanguageCode"}, + "Words":{"shape":"Words"}, + "VocabularyFilterFileUri":{"shape":"Uri"} + } + }, + "CreateVocabularyFilterResponse":{ + "type":"structure", + "members":{ + "VocabularyFilterName":{"shape":"VocabularyFilterName"}, + "LanguageCode":{"shape":"LanguageCode"}, + "LastModifiedTime":{"shape":"DateTime"} + } + }, "CreateVocabularyRequest":{ "type":"structure", "required":[ @@ -184,6 +278,10 @@ "FailureReason":{"shape":"FailureReason"} } }, + "DataAccessRoleArn":{ + "type":"string", + "pattern":"^arn:aws:iam::[0-9]{0,63}:role/[A-Za-z0-9:_/+=,@.-]{0,1023}$" + }, "DateTime":{"type":"timestamp"}, "DeleteTranscriptionJobRequest":{ "type":"structure", @@ -192,6 +290,13 @@ "TranscriptionJobName":{"shape":"TranscriptionJobName"} } }, + "DeleteVocabularyFilterRequest":{ + "type":"structure", + "required":["VocabularyFilterName"], + "members":{ + "VocabularyFilterName":{"shape":"VocabularyFilterName"} + } + }, "DeleteVocabularyRequest":{ "type":"structure", "required":["VocabularyName"], @@ -213,6 +318,22 @@ "TranscriptionJob":{"shape":"TranscriptionJob"} } }, + "GetVocabularyFilterRequest":{ + "type":"structure", + "required":["VocabularyFilterName"], + "members":{ + "VocabularyFilterName":{"shape":"VocabularyFilterName"} + } + }, + "GetVocabularyFilterResponse":{ + "type":"structure", + "members":{ + "VocabularyFilterName":{"shape":"VocabularyFilterName"}, + "LanguageCode":{"shape":"LanguageCode"}, + "LastModifiedTime":{"shape":"DateTime"}, + "DownloadUri":{"shape":"Uri"} + } + }, "GetVocabularyRequest":{ "type":"structure", "required":["VocabularyName"], @@ -239,6 +360,13 @@ "exception":true, "fault":true }, + "JobExecutionSettings":{ + "type":"structure", + "members":{ + "AllowDeferredExecution":{"shape":"Boolean"}, + "DataAccessRoleArn":{"shape":"DataAccessRoleArn"} + } + }, "KMSKeyId":{ "type":"string", "max":2048, @@ -322,6 +450,21 @@ "Vocabularies":{"shape":"Vocabularies"} } }, + "ListVocabularyFiltersRequest":{ + "type":"structure", + "members":{ + "NextToken":{"shape":"NextToken"}, + "MaxResults":{"shape":"MaxResults"}, + "NameContains":{"shape":"VocabularyFilterName"} + } + }, + "ListVocabularyFiltersResponse":{ + "type":"structure", + "members":{ + "NextToken":{"shape":"NextToken"}, + "VocabularyFilters":{"shape":"VocabularyFilters"} + } + }, "MaxAlternatives":{ "type":"integer", "max":10, @@ -399,7 +542,9 @@ "MaxSpeakerLabels":{"shape":"MaxSpeakers"}, "ChannelIdentification":{"shape":"Boolean"}, "ShowAlternatives":{"shape":"Boolean"}, - "MaxAlternatives":{"shape":"MaxAlternatives"} + "MaxAlternatives":{"shape":"MaxAlternatives"}, + "VocabularyFilterName":{"shape":"VocabularyFilterName"}, + "VocabularyFilterMethod":{"shape":"VocabularyFilterMethod"} } }, "StartTranscriptionJobRequest":{ @@ -417,7 +562,8 @@ "Media":{"shape":"Media"}, "OutputBucketName":{"shape":"OutputBucketName"}, "OutputEncryptionKMSKeyId":{"shape":"KMSKeyId"}, - "Settings":{"shape":"Settings"} + "Settings":{"shape":"Settings"}, + "JobExecutionSettings":{"shape":"JobExecutionSettings"} } }, "StartTranscriptionJobResponse":{ @@ -443,10 +589,12 @@ "MediaFormat":{"shape":"MediaFormat"}, "Media":{"shape":"Media"}, "Transcript":{"shape":"Transcript"}, + "StartTime":{"shape":"DateTime"}, "CreationTime":{"shape":"DateTime"}, "CompletionTime":{"shape":"DateTime"}, "FailureReason":{"shape":"FailureReason"}, - "Settings":{"shape":"Settings"} + "Settings":{"shape":"Settings"}, + "JobExecutionSettings":{"shape":"JobExecutionSettings"} } }, 
"TranscriptionJobName":{ @@ -458,6 +606,7 @@ "TranscriptionJobStatus":{ "type":"string", "enum":[ + "QUEUED", "IN_PROGRESS", "FAILED", "COMPLETED" @@ -472,6 +621,7 @@ "members":{ "TranscriptionJobName":{"shape":"TranscriptionJobName"}, "CreationTime":{"shape":"DateTime"}, + "StartTime":{"shape":"DateTime"}, "CompletionTime":{"shape":"DateTime"}, "LanguageCode":{"shape":"LanguageCode"}, "TranscriptionJobStatus":{"shape":"TranscriptionJobStatus"}, @@ -479,6 +629,23 @@ "OutputLocationType":{"shape":"OutputLocationType"} } }, + "UpdateVocabularyFilterRequest":{ + "type":"structure", + "required":["VocabularyFilterName"], + "members":{ + "VocabularyFilterName":{"shape":"VocabularyFilterName"}, + "Words":{"shape":"Words"}, + "VocabularyFilterFileUri":{"shape":"Uri"} + } + }, + "UpdateVocabularyFilterResponse":{ + "type":"structure", + "members":{ + "VocabularyFilterName":{"shape":"VocabularyFilterName"}, + "LanguageCode":{"shape":"LanguageCode"}, + "LastModifiedTime":{"shape":"DateTime"} + } + }, "UpdateVocabularyRequest":{ "type":"structure", "required":[ @@ -511,6 +678,31 @@ "type":"list", "member":{"shape":"VocabularyInfo"} }, + "VocabularyFilterInfo":{ + "type":"structure", + "members":{ + "VocabularyFilterName":{"shape":"VocabularyFilterName"}, + "LanguageCode":{"shape":"LanguageCode"}, + "LastModifiedTime":{"shape":"DateTime"} + } + }, + "VocabularyFilterMethod":{ + "type":"string", + "enum":[ + "remove", + "mask" + ] + }, + "VocabularyFilterName":{ + "type":"string", + "max":200, + "min":1, + "pattern":"^[0-9a-zA-Z._-]+" + }, + "VocabularyFilters":{ + "type":"list", + "member":{"shape":"VocabularyFilterInfo"} + }, "VocabularyInfo":{ "type":"structure", "members":{ @@ -533,6 +725,16 @@ "READY", "FAILED" ] + }, + "Word":{ + "type":"string", + "max":256, + "min":1 + }, + "Words":{ + "type":"list", + "member":{"shape":"Word"}, + "min":1 } } } diff --git a/models/apis/transcribe/2017-10-26/docs-2.json b/models/apis/transcribe/2017-10-26/docs-2.json index f060d062f8f..fd99764b5e8 100644 --- a/models/apis/transcribe/2017-10-26/docs-2.json +++ b/models/apis/transcribe/2017-10-26/docs-2.json @@ -3,14 +3,19 @@ "service": "Operations and objects for transcribing speech to text.
", "operations": { "CreateVocabulary": "Creates a new custom vocabulary that you can use to change the way Amazon Transcribe handles transcription of an audio file.
", + "CreateVocabularyFilter": "Creates a new vocabulary filter that you can use to filter words, such as profane words, from the output of a transcription job.
", "DeleteTranscriptionJob": "Deletes a previously submitted transcription job along with any other generated results such as the transcription, models, and so on.
", "DeleteVocabulary": "Deletes a vocabulary from Amazon Transcribe.
", + "DeleteVocabularyFilter": "Removes a vocabulary filter.
", "GetTranscriptionJob": "Returns information about a transcription job. To see the status of the job, check the TranscriptionJobStatus
field. If the status is COMPLETED
, the job is finished and you can find the results at the location specified in the TranscriptionFileUri
field.
Gets information about a vocabulary.
", + "GetVocabularyFilter": "Returns information about a vocabulary filter.
", "ListTranscriptionJobs": "Lists transcription jobs with the specified status.
", "ListVocabularies": "Returns a list of vocabularies that match the specified criteria. If no criteria are specified, returns the entire list of vocabularies.
", + "ListVocabularyFilters": "Gets information about vocabulary filters.
", "StartTranscriptionJob": "Starts an asynchronous job to transcribe speech to text.
", - "UpdateVocabulary": "Updates an existing vocabulary with new values. The UpdateVocabulary
operation overwrites all of the existing information with the values that you provide in the request.
Updates an existing vocabulary with new values. The UpdateVocabulary
operation overwrites all of the existing information with the values that you provide in the request.
Updates a vocabulary filter with a new list of filtered words.
" }, "shapes": { "BadRequestException": { @@ -21,6 +26,7 @@ "Boolean": { "base": null, "refs": { + "JobExecutionSettings$AllowDeferredExecution": "Indicates whether a job should be queued by Amazon Transcribe when the concurrent execution limit is exceeded. When the AllowDeferredExecution
field is true, jobs are queued and will be executed when the number of executing jobs falls below the concurrent execution limit. If the field is false, Amazon Transcribe returns a LimitExceededException
exception.
If you specify the AllowDeferredExecution
field, you must specify the DataAccessRoleArn
field.
Determines whether the transcription job uses speaker recognition to identify different speakers in the input audio. Speaker recognition labels individual speakers in the audio file. If you set the ShowSpeakerLabels
field to true, you must also set the maximum number of speaker labels MaxSpeakerLabels
field.
You can't set both ShowSpeakerLabels
and ChannelIdentification
in the same request. If you set both, your request returns a BadRequestException
.
Instructs Amazon Transcribe to process each audio channel separately and then merge the transcription output of each channel into a single transcription.
Amazon Transcribe also produces a transcription of each item detected on an audio channel, including the start time and end time of the item and alternative transcriptions of the item including the confidence that Amazon Transcribe has in the transcription.
You can't set both ShowSpeakerLabels
and ChannelIdentification
in the same request. If you set both, your request returns a BadRequestException
.
Determines whether the transcription contains alternative transcriptions. If you set the ShowAlternatives
field to true, you must also set the maximum number of alternatives to return in the MaxAlternatives
field.
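Those constraints are easiest to see side by side. A hedged sketch of a Settings value that satisfies them and exercises the new vocabulary filter fields; the enum constant name is assumed:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/transcribe"
)

func main() {
	s := transcribe.Settings{
		// ShowAlternatives requires MaxAlternatives to be set as well.
		ShowAlternatives: aws.Bool(true),
		MaxAlternatives:  aws.Int64(2),
		// New in this release: apply a vocabulary filter to the job,
		// with "mask" or "remove" per the VocabularyFilterMethod enum.
		VocabularyFilterName:   aws.String("profanity-filter"),
		VocabularyFilterMethod: transcribe.VocabularyFilterMethodMask, // constant name assumed
	}
	fmt.Printf("%+v\n", s)
}
```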
The Amazon Resource Name (ARN) of a role that has access to the S3 bucket that contains the input files. Amazon Transcribe will assume this role to read queued media files. If you have specified an output S3 bucket for the transcription results, this role should have access to the output bucket as well.
If you specify the AllowDeferredExecution
field, you must specify the DataAccessRoleArn
field.
The date and time that the vocabulary filter was modified.
", "CreateVocabularyResponse$LastModifiedTime": "The date and time that the vocabulary was created.
", + "GetVocabularyFilterResponse$LastModifiedTime": "The date and time that the contents of the vocabulary filter were updated.
", "GetVocabularyResponse$LastModifiedTime": "The date and time that the vocabulary was last modified.
", + "TranscriptionJob$StartTime": "A timestamp that shows with the job was started processing.
", "TranscriptionJob$CreationTime": "A timestamp that shows when the job was created.
", "TranscriptionJob$CompletionTime": "A timestamp that shows when the job was completed.
", "TranscriptionJobSummary$CreationTime": "A timestamp that shows when the job was created.
", + "TranscriptionJobSummary$StartTime": "A timestamp that shows when the job started processing.
", "TranscriptionJobSummary$CompletionTime": "A timestamp that shows when the job was completed.
", + "UpdateVocabularyFilterResponse$LastModifiedTime": "The date and time that the vocabulary filter was updated.
", "UpdateVocabularyResponse$LastModifiedTime": "The date and time that the vocabulary was updated.
", + "VocabularyFilterInfo$LastModifiedTime": "The date and time that the vocabulary was last updated.
", "VocabularyInfo$LastModifiedTime": "The date and time that the vocabulary was last modified.
" } }, @@ -59,6 +87,11 @@ "refs": { } }, + "DeleteVocabularyFilterRequest": { + "base": null, + "refs": { + } + }, "DeleteVocabularyRequest": { "base": null, "refs": { @@ -84,6 +117,16 @@ "refs": { } }, + "GetVocabularyFilterRequest": { + "base": null, + "refs": { + } + }, + "GetVocabularyFilterResponse": { + "base": null, + "refs": { + } + }, "GetVocabularyRequest": { "base": null, "refs": { @@ -99,6 +142,13 @@ "refs": { } }, + "JobExecutionSettings": { + "base": "Provides information about when a transcription job should be executed.
", + "refs": { + "StartTranscriptionJobRequest$JobExecutionSettings": "Provides information about how a transcription job is executed. Use this field to indicate that the job can be queued for deferred execution if the concurrency limit is reached and there are no slots available to immediately run the job.
", + "TranscriptionJob$JobExecutionSettings": "Provides information about how a transcription job is executed.
" + } + }, "KMSKeyId": { "base": null, "refs": { @@ -108,14 +158,19 @@ "LanguageCode": { "base": null, "refs": { + "CreateVocabularyFilterRequest$LanguageCode": "The language code of the words in the vocabulary filter. All words in the filter must be in the same language. The vocabulary filter can only be used with transcription jobs in the specified language.
", + "CreateVocabularyFilterResponse$LanguageCode": "The language code of the words in the collection.
", "CreateVocabularyRequest$LanguageCode": "The language code of the vocabulary entries.
", "CreateVocabularyResponse$LanguageCode": "The language code of the vocabulary entries.
", + "GetVocabularyFilterResponse$LanguageCode": "The language code of the words in the vocabulary filter.
", "GetVocabularyResponse$LanguageCode": "The language code of the vocabulary entries.
", "StartTranscriptionJobRequest$LanguageCode": "The language code for the language used in the input media file.
", "TranscriptionJob$LanguageCode": "The language code for the input speech.
", "TranscriptionJobSummary$LanguageCode": "The language code for the input speech.
", + "UpdateVocabularyFilterResponse$LanguageCode": "The language code of the words in the vocabulary filter.
", "UpdateVocabularyRequest$LanguageCode": "The language code of the vocabulary entries.
", "UpdateVocabularyResponse$LanguageCode": "The language code of the vocabulary entries.
", + "VocabularyFilterInfo$LanguageCode": "The language code of the words in the vocabulary filter.
", "VocabularyInfo$LanguageCode": "The language code of the vocabulary entries.
" } }, @@ -144,6 +199,16 @@ "refs": { } }, + "ListVocabularyFiltersRequest": { + "base": null, + "refs": { + } + }, + "ListVocabularyFiltersResponse": { + "base": null, + "refs": { + } + }, "MaxAlternatives": { "base": null, "refs": { @@ -154,7 +219,8 @@ "base": null, "refs": { "ListTranscriptionJobsRequest$MaxResults": "The maximum number of jobs to return in the response. If there are fewer results in the list, this response contains only the actual results.
", - "ListVocabulariesRequest$MaxResults": "The maximum number of vocabularies to return in the response. If there are fewer results in the list, this response contains only the actual results.
" + "ListVocabulariesRequest$MaxResults": "The maximum number of vocabularies to return in the response. If there are fewer results in the list, this response contains only the actual results.
", + "ListVocabularyFiltersRequest$MaxResults": "The maximum number of filters to return in the response. If there are fewer results in the list, this response contains only the actual results.
" } }, "MaxSpeakers": { @@ -190,7 +256,9 @@ "ListTranscriptionJobsRequest$NextToken": "If the result of the previous request to ListTranscriptionJobs
was truncated, include the NextToken to fetch the next set of jobs. The ListTranscriptionJobs operation returns a page of jobs at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the ListTranscriptionJobs operation to return in the next page of jobs.
If the result of the previous request to ListVocabularies was truncated, include the NextToken to fetch the next set of jobs. The ListVocabularies operation returns a page of vocabularies at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the ListVocabularies operation to return in the next page of jobs.
The ListVocabularies operation returns a page of vocabularies at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the ListVocabularies operation to return in the next page of jobs.
If the result of the previous request to ListVocabularyFilters was truncated, include the NextToken to fetch the next set of collections. The ListVocabularyFilters operation returns a page of collections at a time. The maximum size of the page is set by the MaxResults parameter. If there are more jobs in the list than the page size, Amazon Transcribe returns the NextPage token. Include the token in the next request to the ListVocabularyFilters
operation to return in the next page of jobs.
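The NextToken/MaxResults contract documented above is the standard list-pagination loop. A minimal Go sketch, assuming the request/Send client pattern of these v0.19-era preview releases (the MaxResults value and output handling are illustrative placeholders, not part of the model):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/transcribe"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := transcribe.New(cfg)

	var token *string
	for {
		// MaxResults caps the page size; NextToken resumes where the
		// previous page left off.
		req := svc.ListTranscriptionJobsRequest(&transcribe.ListTranscriptionJobsInput{
			MaxResults: aws.Int64(25),
			NextToken:  token,
		})
		resp, err := req.Send(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		for _, job := range resp.TranscriptionJobSummaries {
			fmt.Println(aws.StringValue(job.TranscriptionJobName))
		}
		// A nil NextToken in the response means the last page was returned.
		if resp.NextToken == nil {
			break
		}
		token = resp.NextToken
	}
}
```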
"Uri": { "base": null, "refs": {
+ "CreateVocabularyFilterRequest$VocabularyFilterFileUri": "The Amazon S3 location of a text file used as input to create the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies. The specified file must be less than 50 KB of UTF-8 characters. If you provide the location of a list of words in the VocabularyFilterFileUri parameter, you can't use the Words parameter.",
"CreateVocabularyRequest$VocabularyFileUri": "The S3 location of the text file that contains the definition of the custom vocabulary. The URI must be in the same region as the API endpoint that you are calling. The general form is https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>, for example https://s3.us-east-1.amazonaws.com/examplebucket/vocab.txt. For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide. For more information about custom vocabularies, see Custom Vocabularies.",
+ "GetVocabularyFilterResponse$DownloadUri": "The URI of the list of words in the vocabulary filter. You can use this URI to get the list of words.",
"GetVocabularyResponse$DownloadUri": "The S3 location where the vocabulary is stored. Use this URI to get the contents of the vocabulary. The URI is available for a limited time.",
"Media$MediaFileUri": "The S3 location of the input media file. The URI must be in the same region as the API endpoint that you are calling. The general form is https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>, for example https://s3.us-east-1.amazonaws.com/examplebucket/example.mp4 or https://s3.us-east-1.amazonaws.com/examplebucket/mediadocs/example.mp4. For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide.",
"Transcript$TranscriptFileUri": "The location where the transcription is stored. Use this URI to access the transcription. If you specified an S3 bucket in the OutputBucketName field when you created the job, this is the URI of that bucket. If you chose to store the transcription in Amazon Transcribe, this is a shareable URL that provides secure access to that location.",
+ "UpdateVocabularyFilterRequest$VocabularyFilterFileUri": "The Amazon S3 location of a text file used as input to create the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies. The specified file must be less than 50 KB of UTF-8 characters. If you provide the location of a list of words in the VocabularyFilterFileUri parameter, you can't use the Words parameter.",
"UpdateVocabularyRequest$VocabularyFileUri": "The S3 location of the text file that contains the definition of the custom vocabulary. The URI must be in the same region as the API endpoint that you are calling. The general form is https://s3.<aws-region>.amazonaws.com/<bucket-name>/<keyprefix>/<objectkey>, for example https://s3.us-east-1.amazonaws.com/examplebucket/vocab.txt. For more information about S3 object names, see Object Keys in the Amazon S3 Developer Guide. For more information about custom vocabularies, see Custom Vocabularies.
" } }, @@ -321,6 +402,39 @@ "ListVocabulariesResponse$Vocabularies": "A list of objects that describe the vocabularies that match the search criteria in the request.
" } }, + "VocabularyFilterInfo": { + "base": "Provides information about a vocabulary filter.
", + "refs": { + "VocabularyFilters$member": null + } + }, + "VocabularyFilterMethod": { + "base": null, + "refs": { + "Settings$VocabularyFilterMethod": "Set to mask
to remove filtered text from the transcript and replace it with three asterisks (\"***\") as placeholder text. Set to remove
to remove filtered text from the transcript without using placeholder text.
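To make the mask/remove distinction concrete, a small hedged Go fragment (field and enum constant names follow the shapes in this model but are assumptions; the filter name is a placeholder):

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/transcribe"
)

func main() {
	// mask replaces each filtered word with "***" in the transcript;
	// VocabularyFilterMethodRemove would drop the words entirely.
	settings := transcribe.Settings{
		// The filter must have the same language code as the job.
		VocabularyFilterName:   aws.String("profanity-filter"),
		VocabularyFilterMethod: transcribe.VocabularyFilterMethodMask,
	}
	fmt.Printf("%+v\n", settings)
}
```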
+ "VocabularyFilterName": { + "base": null, + "refs": { + "CreateVocabularyFilterRequest$VocabularyFilterName": "The vocabulary filter name. The name must be unique within the account that contains it.
", + "CreateVocabularyFilterResponse$VocabularyFilterName": "The name of the vocabulary filter.
", + "DeleteVocabularyFilterRequest$VocabularyFilterName": "The name of the vocabulary filter to remove.
", + "GetVocabularyFilterRequest$VocabularyFilterName": "The name of the vocabulary filter for which to return information.
", + "GetVocabularyFilterResponse$VocabularyFilterName": "The name of the vocabulary filter.
", + "ListVocabularyFiltersRequest$NameContains": "Filters the response so that it only contains vocabulary filters whose name contains the specified string.
", + "Settings$VocabularyFilterName": "The name of the vocabulary filter to use when transcribing the audio. The filter that you specify must have the same language code as the transcription job.
", + "UpdateVocabularyFilterRequest$VocabularyFilterName": "The name of the vocabulary filter to update.
", + "UpdateVocabularyFilterResponse$VocabularyFilterName": "The name of the updated vocabulary filter.
", + "VocabularyFilterInfo$VocabularyFilterName": "The name of the vocabulary filter. The name must be unique in the account that holds the filter.
" + } + }, + "VocabularyFilters": { + "base": null, + "refs": { + "ListVocabularyFiltersResponse$VocabularyFilters": "The list of vocabulary filters. It will contain at most MaxResults
number of filters. If there are more filters, call the ListVocabularyFilters
operation again with the NextToken
parameter in the request set to the value of the NextToken
field in the response.
Provides information about a custom vocabulary.
", "refs": { @@ -351,6 +465,19 @@ "UpdateVocabularyResponse$VocabularyState": "The processing state of the vocabulary. When the VocabularyState
field contains READY
the vocabulary is ready to be used in a StartTranscriptionJob
request.
The processing state of the vocabulary. If the state is READY
you can use the vocabulary in a StartTranscriptionJob
request.
+ "Words": { + "base": null, + "refs": { + "CreateVocabularyFilterRequest$Words": "The words to use in the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies. If you provide a list of words in the Words parameter, you can't use the VocabularyFilterFileUri parameter.", + "UpdateVocabularyFilterRequest$Words": "The words to use in the vocabulary filter. Only use characters from the character set defined for custom vocabularies. For a list of character sets, see Character Sets for Custom Vocabularies. If you provide a list of words in the Words parameter, you can't use the VocabularyFilterFileUri parameter." + } + } } }
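Since Words and VocabularyFilterFileUri are mutually exclusive, creating a filter from an inline word list looks roughly like this in Go (same hedged v0.19-style client pattern; the enum constant name, filter name, and word list are placeholders/assumptions):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/transcribe"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := transcribe.New(cfg)

	// Supply the word list inline via Words; to load it from S3 instead,
	// set VocabularyFilterFileUri and omit Words (the two are exclusive).
	req := svc.CreateVocabularyFilterRequest(&transcribe.CreateVocabularyFilterInput{
		VocabularyFilterName: aws.String("profanity-filter"),
		LanguageCode:         transcribe.LanguageCodeEnUs,
		Words:                []string{"badword1", "badword2"},
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created filter:", aws.StringValue(resp.VocabularyFilterName))
}
```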
diff --git a/models/apis/transfer/2018-11-05/docs-2.json b/models/apis/transfer/2018-11-05/docs-2.json
--- a/models/apis/transfer/2018-11-05/docs-2.json
+++ b/models/apis/transfer/2018-11-05/docs-2.json
"DeleteServer": "Deletes the Secure File Transfer Protocol (SFTP) server that you specify. No response returns from this operation.",
"DeleteSshPublicKey": "Deletes a user's Secure Shell (SSH) public key. No response is returned from this operation.",
"DeleteUser": "Deletes the user belonging to the server you specify. No response returns from this operation. When you delete a user from a server, the user's information is lost.",
- "DescribeServer": "Describes the server that you specify by passing the ServerId parameter. The response contains a description of the server's properties.",
+ "DescribeServer": "Describes the server that you specify by passing the ServerId parameter. The response contains a description of the server's properties. When you set EndpointType to VPC, the response will contain the EndpointDetails.",
"DescribeUser": "Describes the user assigned to a specific server, as identified by its ServerId property. The response from this call returns the properties of the user associated with the ServerId value that was specified.",
"ImportSshPublicKey": "Adds a Secure Shell (SSH) public key to a user account identified by a UserName value assigned to a specific server, identified by ServerId. The response returns the UserName value, the ServerId value, and the name of the SshPublicKeyId.",
"ListServers": "Lists the Secure File Transfer Protocol (SFTP) servers that are associated with your AWS account.",
@@ -22,6 +22,18 @@
"UpdateUser": "Assigns new properties to a user. Parameters you pass modify any or all of the following: the home directory, role, and policy for the UserName and ServerId you specify. The response returns the ServerId and the UserName for the updated user.",
+ "AddressAllocationIds": { + "base": null, + "refs": { + "EndpointDetails$AddressAllocationIds": "A list of address allocation IDs that are required to attach an Elastic IP address to your SFTP server's endpoint. This is only valid in the UpdateServer API. This property can only be used when EndpointType is set to VPC." + } + },
"Arn": { "base": null, "refs": { "UntagResourceRequest$Arn": "This is the value of the resource that will have the tag removed. An Amazon Resource Name (ARN) is an identifier for a specific AWS resource, such as a server, user, or role." } }, + "ConflictException": { + "base": "This exception is thrown when the UpdateServer operation is called for a server that has VPC as the endpoint type and the server's VpcEndpointID is not in the available state.", + "refs": { } + }, "EndpointDetails": { - "base": "The configuration settings for the virtual private cloud (VPC) endpoint for your SFTP server.
", + "base": "The virtual private cloud (VPC) endpoint settings that are configured for your SFTP server. With a VPC endpoint, you can restrict access to your SFTP server and resources only within your VPC. To control incoming internet traffic, invoke the UpdateServer
API and attach an Elastic IP to your server's endpoint.
The virtual private cloud (VPC) endpoint settings that you want to configure for your SFTP server. This parameter is required when you specify a value for the EndpointType
parameter.
The virtual private cloud (VPC) endpoint settings that are configured for your SFTP server. With a VPC endpoint, you can restrict access to your SFTP server to resources only within your VPC. To control incoming internet traffic, you will need to invoke the UpdateServer
API and attach an Elastic IP to your server's endpoint.
The virtual private cloud (VPC) endpoint settings that you configured for your SFTP server.
", - "UpdateServerRequest$EndpointDetails": "The virtual private cloud (VPC) endpoint settings that are configured for your SFTP server. With a VPC endpoint, your SFTP server isn't accessible over the public internet.
" + "UpdateServerRequest$EndpointDetails": "The virtual private cloud (VPC) endpoint settings that are configured for your SFTP server. With a VPC endpoint, you can restrict access to your SFTP server to resources only within your VPC. To control incoming internet traffic, you will need to associate one or more Elastic IP addresses with your server's endpoint.
" } }, "EndpointType": { "base": null, "refs": { - "CreateServerRequest$EndpointType": "The type of VPC endpoint that you want your SFTP server to connect to. If you connect to a VPC endpoint, your SFTP server isn't accessible over the public internet.
", + "CreateServerRequest$EndpointType": "The type of VPC endpoint that you want your SFTP server to connect to. You can choose to connect to the public internet or a virtual private cloud (VPC) endpoint. With a VPC endpoint, you can restrict access to your SFTP server and resources only within your VPC.
", "DescribedServer$EndpointType": "The type of endpoint that your SFTP server is connected to. If your SFTP server is connected to a VPC endpoint, your server isn't accessible over the public internet.
", "ListedServer$EndpointType": "The type of VPC endpoint that your SFTP server is connected to. If your SFTP server is connected to a VPC endpoint, your server isn't accessible over the public internet.
", "UpdateServerRequest$EndpointType": "The type of endpoint that you want your SFTP server to connect to. You can choose to connect to the public internet or a virtual private cloud (VPC) endpoint. With a VPC endpoint, your SFTP server isn't accessible over the public internet.
" @@ -143,9 +160,9 @@ "HomeDirectoryMappings": { "base": null, "refs": { - "CreateUserRequest$HomeDirectoryMappings": "Logical directory mappings that specify what S3 paths and keys should be visible to your user and how you want to make them visible. You will need to specify the \"Entry
\" and \"Target
\" pair, where Entry
shows how the path is made visible and Target
is the actual S3 path. If you only specify a target, it will be displayed as is. You will need to also make sure that your AWS IAM Role provides access to paths in Target
. The following is an example.
'[ \"/bucket2/documentation\", { \"Entry\": \"your-personal-report.pdf\", \"Target\": \"/bucket3/customized-reports/${transfer:UserName}.pdf\" } ]'
In most cases, you can use this value instead of the scope down policy to lock your user down to the designated home directory (\"chroot\"). To do this, you can set Entry
to '/' and set Target
to the HomeDirectory parameter value.
", + "CreateUserRequest$HomeDirectoryMappings": "Logical directory mappings that specify what S3 paths and keys should be visible to your user and how you want to make them visible. You will need to specify the \"Entry\" and \"Target\" pair, where Entry shows how the path is made visible and Target is the actual S3 path. If you only specify a target, it will be displayed as is. You will need to also make sure that your AWS IAM Role provides access to paths in Target. The following is an example. '[ \"/bucket2/documentation\", { \"Entry\": \"your-personal-report.pdf\", \"Target\": \"/bucket3/customized-reports/${transfer:UserName}.pdf\" } ]' In most cases, you can use this value instead of the scope down policy to lock your user down to the designated home directory (\"chroot\"). To do this, you can set Entry to '/' and set Target to the HomeDirectory parameter value. If the target of a logical directory entry does not exist in S3, the entry will be ignored. As a workaround, you can use the S3 API to create 0-byte objects as placeholders for your directory. If using the CLI, use the s3api call instead of s3 so you can use the put-object operation. For example, you can use the following: aws s3api put-object --bucket bucketname --key path/to/folder/. Make sure that the end of the key name ends in a / for it to be considered a folder.
", "DescribedUser$HomeDirectoryMappings": "Logical directory mappings that you specified for what S3 paths and keys should be visible to your user and how you want to make them visible. You will need to specify the \"Entry\" and \"Target\" pair, where Entry shows how the path is made visible and Target is the actual S3 path. If you only specify a target, it will be displayed as is. You will need to also make sure that your AWS IAM Role provides access to paths in Target. In most cases, you can use this value instead of the scope down policy to lock your user down to the designated home directory (\"chroot\"). To do this, you can set Entry to '/' and set Target to the HomeDirectory parameter value.
", - "UpdateUserRequest$HomeDirectoryMappings": "Logical directory mappings that specify what S3 paths and keys should be visible to your user and how you want to make them visible. You will need to specify the \"Entry\" and \"Target\" pair, where Entry shows how the path is made visible and Target is the actual S3 path. If you only specify a target, it will be displayed as is. You will need to also make sure that your AWS IAM Role provides access to paths in Target. The following is an example. '[ \"/bucket2/documentation\", { \"Entry\": \"your-personal-report.pdf\", \"Target\": \"/bucket3/customized-reports/${transfer:UserName}.pdf\" } ]' In most cases, you can use this value instead of the scope down policy to lock your user down to the designated home directory (\"chroot\"). To do this, you can set Entry to '/' and set Target to the HomeDirectory parameter value.
" + "UpdateUserRequest$HomeDirectoryMappings": "
Logical directory mappings that specify what S3 paths and keys should be visible to your user and how you want to make them visible. You will need to specify the \"Entry\" and \"Target\" pair, where Entry shows how the path is made visible and Target is the actual S3 path. If you only specify a target, it will be displayed as is. You will need to also make sure that your AWS IAM Role provides access to paths in Target. The following is an example. '[ \"/bucket2/documentation\", { \"Entry\": \"your-personal-report.pdf\", \"Target\": \"/bucket3/customized-reports/${transfer:UserName}.pdf\" } ]' In most cases, you can use this value instead of the scope down policy to lock your user down to the designated home directory (\"chroot\"). To do this, you can set Entry to '/' and set Target to the HomeDirectory parameter value. If the target of a logical directory entry does not exist in S3, the entry will be ignored. As a workaround, you can use the S3 API to create 0-byte objects as placeholders for your directory. If using the CLI, use the s3api call instead of s3 so you can use the put-object operation. For example, you can use the following: aws s3api put-object --bucket bucketname --key path/to/folder/. Make sure that the end of the key name ends in a / for it to be considered a folder.
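The "chroot" pattern described above (Entry "/" mapped onto the user's home directory) looks roughly like this in Go (hedged sketch; the HomeDirectoryMapEntry type name, the HomeDirectoryTypeLogical constant, and the ARNs/IDs are assumptions and placeholders):

```go
package main

import (
	"context"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/transfer"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := transfer.New(cfg)

	// Map the virtual root "/" onto the user's own S3 prefix so the user
	// sees only their own tree ("chroot").
	req := svc.CreateUserRequest(&transfer.CreateUserInput{
		ServerId: aws.String("s-01234567890abcdef"),
		UserName: aws.String("alice"),
		// The role must grant access to every Target path in the mappings.
		Role:              aws.String("arn:aws:iam::123456789012:role/sftp-access"),
		HomeDirectoryType: transfer.HomeDirectoryTypeLogical,
		HomeDirectoryMappings: []transfer.HomeDirectoryMapEntry{
			{
				Entry:  aws.String("/"),
				Target: aws.String("/bucket1/home/${transfer:UserName}"),
			},
		},
	})
	if _, err := req.Send(context.Background()); err != nil {
		log.Fatal(err)
	}
}
```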
" } }, + "SubnetIds": { + "base": null, + "refs": { + "EndpointDetails$SubnetIds": "A list of subnet IDs that are required to host your SFTP server endpoint in your VPC.
" + } + }, "Tag": { "base": "Creates a key-value pair for a specific resource. Tags are metadata that you can use to search for and group a resource for various purposes. You can apply tags to servers, users, and roles. A tag key can take more than one value. For example, to group servers for accounting purposes, you might create a tag called Group
and assign the values Research
and Accounting
to that group.
", "refs": { "Tags$member": null } }, "VpcEndpointId": { "base": null, "refs": { "EndpointDetails$VpcEndpointId": "The ID of the VPC endpoint.
" } + }, + "VpcId": { + "base": null, + "refs": { + "EndpointDetails$VpcId": "The VPC ID of the virtual private cloud in which the SFTP server's endpoint will be hosted.
" + } } } } diff --git a/models/apis/translate/2017-07-01/api-2.json b/models/apis/translate/2017-07-01/api-2.json index 6ee4aada910..ba57a1863dd 100644 --- a/models/apis/translate/2017-07-01/api-2.json +++ b/models/apis/translate/2017-07-01/api-2.json @@ -26,6 +26,20 @@ {"shape":"InternalServerException"} ] }, + "DescribeTextTranslationJob":{ + "name":"DescribeTextTranslationJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"DescribeTextTranslationJobRequest"}, + "output":{"shape":"DescribeTextTranslationJobResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalServerException"} + ] + }, "GetTerminology":{ "name":"GetTerminology", "http":{ @@ -70,6 +84,51 @@ {"shape":"InternalServerException"} ] }, + "ListTextTranslationJobs":{ + "name":"ListTextTranslationJobs", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"ListTextTranslationJobsRequest"}, + "output":{"shape":"ListTextTranslationJobsResponse"}, + "errors":[ + {"shape":"InvalidRequestException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InvalidFilterException"}, + {"shape":"InternalServerException"} + ] + }, + "StartTextTranslationJob":{ + "name":"StartTextTranslationJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StartTextTranslationJobRequest"}, + "output":{"shape":"StartTextTranslationJobResponse"}, + "errors":[ + {"shape":"TooManyRequestsException"}, + {"shape":"UnsupportedLanguagePairException"}, + {"shape":"InvalidRequestException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InternalServerException"} + ] + }, + "StopTextTranslationJob":{ + "name":"StopTextTranslationJob", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"StopTextTranslationJobRequest"}, + "output":{"shape":"StopTextTranslationJobResponse"}, + "errors":[ + {"shape":"ResourceNotFoundException"}, + {"shape":"TooManyRequestsException"}, + {"shape":"InternalServerException"} + ] + }, "TranslateText":{ "name":"TranslateText", "http":{ @@ -108,6 +167,17 @@ "min":1, "pattern":"[\\P{M}\\p{M}]{1,5000}" }, + "ClientTokenString":{ + "type":"string", + "max":64, + "min":1, + "pattern":"^[a-zA-Z0-9-]+$" + }, + "ContentType":{ + "type":"string", + "max":256, + "pattern":"^[-\\w.]+\\/[-\\w.+]+$" + }, "DeleteTerminologyRequest":{ "type":"structure", "required":["Name"], @@ -115,6 +185,19 @@ "Name":{"shape":"ResourceName"} } }, + "DescribeTextTranslationJobRequest":{ + "type":"structure", + "required":["JobId"], + "members":{ + "JobId":{"shape":"JobId"} + } + }, + "DescribeTextTranslationJobResponse":{ + "type":"structure", + "members":{ + "TextTranslationJobProperties":{"shape":"TextTranslationJobProperties"} + } + }, "Description":{ "type":"string", "max":256, @@ -143,7 +226,7 @@ "type":"string", "max":400, "min":1, - "pattern":"(arn:aws((-us-gov)|(-cn))?:kms:)?([a-z]{2}-[a-z]+-\\d:)?(\\d{12}:)?(((key/)?[a-zA-Z0-9-_]+)|(alias/[a-zA-Z0-9:/_-]+))" + "pattern":"(arn:aws((-us-gov)|(-iso)|(-iso-b)|(-cn))?:kms:)?([a-z]{2}-[a-z]+(-[a-z]+)?-\\d:)?(\\d{12}:)?(((key/)?[a-zA-Z0-9-_]+)|(alias/[a-zA-Z0-9:/_-]+))" }, "EncryptionKeyType":{ "type":"string", @@ -167,6 +250,12 @@ "TerminologyDataLocation":{"shape":"TerminologyDataLocation"} } }, + "IamRoleArn":{ + "type":"string", + "max":2048, + "min":20, + "pattern":"arn:aws(-[^:]+)?:iam::[0-9]{12}:role/.+" + }, "ImportTerminologyRequest":{ "type":"structure", "required":[ @@ -188,6 +277,17 @@ 
"TerminologyProperties":{"shape":"TerminologyProperties"} } }, + "InputDataConfig":{ + "type":"structure", + "required":[ + "S3Uri", + "ContentType" + ], + "members":{ + "S3Uri":{"shape":"S3Uri"}, + "ContentType":{"shape":"ContentType"} + } + }, "Integer":{"type":"integer"}, "InternalServerException":{ "type":"structure", @@ -197,6 +297,13 @@ "exception":true, "fault":true }, + "InvalidFilterException":{ + "type":"structure", + "members":{ + "Message":{"shape":"String"} + }, + "exception":true + }, "InvalidParameterValueException":{ "type":"structure", "members":{ @@ -211,6 +318,38 @@ }, "exception":true }, + "JobDetails":{ + "type":"structure", + "members":{ + "TranslatedDocumentsCount":{"shape":"Integer"}, + "DocumentsWithErrorsCount":{"shape":"Integer"}, + "InputDocumentsCount":{"shape":"Integer"} + } + }, + "JobId":{ + "type":"string", + "max":32, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-%@]*)$" + }, + "JobName":{ + "type":"string", + "max":256, + "min":1, + "pattern":"^([\\p{L}\\p{Z}\\p{N}_.:/=+\\-%@]*)$" + }, + "JobStatus":{ + "type":"string", + "enum":[ + "SUBMITTED", + "IN_PROGRESS", + "COMPLETED", + "COMPLETED_WITH_ERROR", + "FAILED", + "STOP_REQUESTED", + "STOPPED" + ] + }, "LanguageCodeString":{ "type":"string", "max":5, @@ -241,6 +380,21 @@ "NextToken":{"shape":"NextToken"} } }, + "ListTextTranslationJobsRequest":{ + "type":"structure", + "members":{ + "Filter":{"shape":"TextTranslationJobFilter"}, + "NextToken":{"shape":"NextToken"}, + "MaxResults":{"shape":"MaxResultsInteger"} + } + }, + "ListTextTranslationJobsResponse":{ + "type":"structure", + "members":{ + "TextTranslationJobPropertiesList":{"shape":"TextTranslationJobPropertiesList"}, + "NextToken":{"shape":"NextToken"} + } + }, "MaxResultsInteger":{ "type":"integer", "max":500, @@ -255,6 +409,13 @@ "max":8192, "pattern":"\\p{ASCII}{0,8192}" }, + "OutputDataConfig":{ + "type":"structure", + "required":["S3Uri"], + "members":{ + "S3Uri":{"shape":"S3Uri"} + } + }, "ResourceName":{ "type":"string", "max":256, @@ -272,18 +433,75 @@ }, "exception":true }, + "S3Uri":{ + "type":"string", + "max":1024, + "pattern":"s3://[a-z0-9][\\.\\-a-z0-9]{1,61}[a-z0-9](/.*)?" 
+ }, "ServiceUnavailableException":{ "type":"structure", "members":{ "Message":{"shape":"String"} }, - "exception":true + "exception":true, + "fault":true + }, + "StartTextTranslationJobRequest":{ + "type":"structure", + "required":[ + "InputDataConfig", + "OutputDataConfig", + "DataAccessRoleArn", + "SourceLanguageCode", + "TargetLanguageCodes", + "ClientToken" + ], + "members":{ + "JobName":{"shape":"JobName"}, + "InputDataConfig":{"shape":"InputDataConfig"}, + "OutputDataConfig":{"shape":"OutputDataConfig"}, + "DataAccessRoleArn":{"shape":"IamRoleArn"}, + "SourceLanguageCode":{"shape":"LanguageCodeString"}, + "TargetLanguageCodes":{"shape":"TargetLanguageCodeStringList"}, + "TerminologyNames":{"shape":"ResourceNameList"}, + "ClientToken":{ + "shape":"ClientTokenString", + "idempotencyToken":true + } + } + }, + "StartTextTranslationJobResponse":{ + "type":"structure", + "members":{ + "JobId":{"shape":"JobId"}, + "JobStatus":{"shape":"JobStatus"} + } + }, + "StopTextTranslationJobRequest":{ + "type":"structure", + "required":["JobId"], + "members":{ + "JobId":{"shape":"JobId"} + } + }, + "StopTextTranslationJobResponse":{ + "type":"structure", + "members":{ + "JobId":{"shape":"JobId"}, + "JobStatus":{"shape":"JobStatus"} + } }, "String":{ "type":"string", "max":10000, "pattern":"[\\P{M}\\p{M}]{0,10000}" }, + "TargetLanguageCodeStringList":{ + "type":"list", + "member":{"shape":"LanguageCodeString"}, + "max":1, + "min":1 + }, "Term":{ "type":"structure", "members":{ @@ -297,7 +515,7 @@ }, "TerminologyArn":{ "type":"string", - "pattern":"^arn:aws((-us-gov)|(-cn))?:translate:[a-zA-Z0-9-]+:[0-9]{12}:terminology/.+?/.+?$" + "pattern":"^arn:aws((-us-gov)|(-iso)|(-iso-b)|(-cn))?:translate:[a-zA-Z0-9-]+:[0-9]{12}:terminology/.+?/.+?$" }, "TerminologyData":{ "type":"structure", @@ -359,6 +577,37 @@ }, "exception":true }, + "TextTranslationJobFilter":{ + "type":"structure", + "members":{ + "JobName":{"shape":"JobName"}, + "JobStatus":{"shape":"JobStatus"}, + "SubmittedBeforeTime":{"shape":"Timestamp"}, + "SubmittedAfterTime":{"shape":"Timestamp"} + } + }, + "TextTranslationJobProperties":{ + "type":"structure", + "members":{ + "JobId":{"shape":"JobId"}, + "JobName":{"shape":"JobName"}, + "JobStatus":{"shape":"JobStatus"}, + "JobDetails":{"shape":"JobDetails"}, + "SourceLanguageCode":{"shape":"LanguageCodeString"}, + "TargetLanguageCodes":{"shape":"TargetLanguageCodeStringList"}, + "TerminologyNames":{"shape":"ResourceNameList"}, + "Message":{"shape":"UnboundedLengthString"}, + "SubmittedTime":{"shape":"Timestamp"}, + "EndTime":{"shape":"Timestamp"}, + "InputDataConfig":{"shape":"InputDataConfig"}, + "OutputDataConfig":{"shape":"OutputDataConfig"}, + "DataAccessRoleArn":{"shape":"IamRoleArn"} + } + }, + "TextTranslationJobPropertiesList":{ + "type":"list", + "member":{"shape":"TextTranslationJobProperties"} + }, "Timestamp":{"type":"timestamp"}, "TooManyRequestsException":{ "type":"structure", @@ -395,6 +644,7 @@ "AppliedTerminologies":{"shape":"AppliedTerminologyList"} } }, + "UnboundedLengthString":{"type":"string"}, "UnsupportedLanguagePairException":{ "type":"structure", "members":{ diff --git a/models/apis/translate/2017-07-01/docs-2.json b/models/apis/translate/2017-07-01/docs-2.json index ecfaf025554..2359c36b068 100644 --- a/models/apis/translate/2017-07-01/docs-2.json +++ b/models/apis/translate/2017-07-01/docs-2.json @@ -3,10 +3,14 @@ "service": "Provides translation between one source language and another of the same set of languages.
", "operations": { "DeleteTerminology": "A synchronous action that deletes a custom terminology.
", + "DescribeTextTranslationJob": "Gets the properties associated with an asycnhronous batch translation job including name, ID, status, source and target languages, input/output S3 buckets, and so on.
", "GetTerminology": "Retrieves a custom terminology.
", "ImportTerminology": "Creates or updates a custom terminology, depending on whether or not one already exists for the given terminology name. Importing a terminology with the same name as an existing one will merge the terminologies based on the chosen merge strategy. Currently, the only supported merge strategy is OVERWRITE, and so the imported terminology will overwrite an existing terminology of the same name.
If you import a terminology that overwrites an existing one, the new terminology can take up to 10 minutes to fully propagate and be available for use in a translation due to cache policies with the DataPlane service that performs the translations.
", "ListTerminologies": "Provides a list of custom terminologies associated with your account.
", - "TranslateText": "Translates input text from the source language to the target language. It is not necessary to use English (en) as either the source or the target language but not all language combinations are supported by Amazon Translate. For more information, see Supported Language Pairs.
Arabic (ar)
Chinese (Simplified) (zh)
Chinese (Traditional) (zh-TW)
Czech (cs)
Danish (da)
Dutch (nl)
English (en)
Finnish (fi)
French (fr)
German (de)
Hebrew (he)
Indonesian (id)
Italian (it)
Japanese (ja)
Korean (ko)
Polish (pl)
Portuguese (pt)
Russian (ru)
Spanish (es)
Swedish (sv)
Turkish (tr)
To have Amazon Translate determine the source language of your text, you can specify auto
in the SourceLanguageCode
field. If you specify auto
, Amazon Translate will call Amazon Comprehend to determine the source language.
", + "ListTextTranslationJobs": "Gets a list of the batch translation jobs that you have submitted.
", + "StartTextTranslationJob": "Starts an asynchronous batch translation job. Batch translation jobs can be used to translate large volumes of text across multiple documents at once. For more information, see async.
Batch translation jobs can be described with the DescribeTextTranslationJob operation, listed with the ListTextTranslationJobs operation, and stopped with the StopTextTranslationJob operation.
Amazon Translate does not support batch translation of multiple source languages at once.
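The new batch entry point lines up as follows in Go (hedged sketch using the same preview client pattern; bucket names, role ARN, and job name are placeholders; ClientToken is required by the model but flagged as an idempotency token, so the SDK can fill it in):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/translate"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := translate.New(cfg)

	// One source language per job; TargetLanguageCodes is capped at a
	// single entry by the model (max 1).
	req := svc.StartTextTranslationJobRequest(&translate.StartTextTranslationJobInput{
		JobName:             aws.String("docs-batch-1"),
		SourceLanguageCode:  aws.String("en"),
		TargetLanguageCodes: []string{"es"},
		InputDataConfig: &translate.InputDataConfig{
			S3Uri:       aws.String("s3://example-input-bucket/docs/"),
			ContentType: aws.String("text/plain"), // or "text/html"
		},
		OutputDataConfig: &translate.OutputDataConfig{
			S3Uri: aws.String("s3://example-output-bucket/translated/"),
		},
		// Role that lets Amazon Translate read the input and write the output.
		DataAccessRoleArn: aws.String("arn:aws:iam::123456789012:role/translate-batch"),
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("job:", aws.StringValue(resp.JobId), "status:", string(resp.JobStatus))
}
```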
", + "StopTextTranslationJob": "Stops an asynchronous batch translation job that is in progress. If the job's state is IN_PROGRESS, the job will be marked for termination and put into the STOP_REQUESTED state. If the job completes before it can be stopped, it is put into the COMPLETED state. Otherwise, the job is put into the STOPPED state. Asynchronous batch translation jobs are started with the StartTextTranslationJob operation. You can use the DescribeTextTranslationJob or ListTextTranslationJobs operations to get a batch translation job's JobId.
", + "TranslateText": "Translates input text from the source language to the target language. For a list of available languages and language codes, see what-is-languages.
" }, "shapes": { "AppliedTerminology": { @@ -27,11 +31,33 @@ "TranslateTextRequest$Text": "The text to translate. The text string can be a maximum of 5,000 bytes long. Depending on your character set, this may be fewer than 5,000 characters.
" } }, + "ClientTokenString": { + "base": null, + "refs": { + "StartTextTranslationJobRequest$ClientToken": "The client token of the EC2 instance calling the request. This token is auto-generated when using the Amazon Translate SDK. Otherwise, use the DescribeInstances EC2 operation to retreive an instance's client token. For more information, see Client Tokens in the EC2 User Guide.
" + } + }, + "ContentType": { + "base": null, + "refs": { + "InputDataConfig$ContentType": "The multipurpose internet mail extension (MIME) type of the input files. Valid values are text/plain
for plaintext files and text/html
for HTML files.
The Amazon Resource Name (ARN) of an AWS Identity Access and Management (IAM) role that grants Amazon Translate read access to your input data. For more nformation, see identity-and-access-management.
", + "TextTranslationJobProperties$DataAccessRoleArn": "The Amazon Resource Name (ARN) of an AWS Identity Access and Management (IAM) role that granted Amazon Translate read access to the job's input data.
" + } + }, "ImportTerminologyRequest": { "base": null, "refs": { @@ -83,15 +116,30 @@ "refs": { } }, + "InputDataConfig": { + "base": "The input configuration properties for requesting a batch translation job.
", + "refs": { + "StartTextTranslationJobRequest$InputDataConfig": "Specifies the format and S3 location of the input documents for the translation job.
", + "TextTranslationJobProperties$InputDataConfig": "The input configuration properties that were specified when the job was requested.
" + } + }, "Integer": { "base": null, "refs": { + "JobDetails$TranslatedDocumentsCount": "The number of documents successfully processed during a translation job.
", + "JobDetails$DocumentsWithErrorsCount": "The number of documents that could not be processed during a translation job.
", + "JobDetails$InputDocumentsCount": "The number of documents used as input in a translation job.
", "TerminologyProperties$SizeBytes": "The size of the file used when importing a custom terminology.
", "TerminologyProperties$TermCount": "The number of terms included in the custom terminology.
" } }, "InternalServerException": { - "base": "An internal server error occurred. Retry your request.
", + "base": "An internal server error occurred. Retry your request.
", + "refs": { + } + }, + "InvalidFilterException": { + "base": "The filter specified for the operation is invalid. Specify a different filter.
", "refs": { } }, @@ -105,15 +153,51 @@ "refs": { } }, + "JobDetails": { + "base": "The number of documents successfully and unsuccessfully processed during a translation job.
", + "refs": { + "TextTranslationJobProperties$JobDetails": "The number of documents successfully and unsuccessfully processed during the translation job.
" + } + }, + "JobId": { + "base": null, + "refs": { + "DescribeTextTranslationJobRequest$JobId": "The identifier that Amazon Translate generated for the job. The StartTextTranslationJob operation returns this identifier in its response.
", + "StartTextTranslationJobResponse$JobId": "The identifier generated for the job. To get the status of a job, use this ID with the DescribeTextTranslationJob operation.
", + "StopTextTranslationJobRequest$JobId": "The job ID of the job to be stopped.
", + "StopTextTranslationJobResponse$JobId": "The job ID of the stopped batch translation job.
", + "TextTranslationJobProperties$JobId": "The ID of the translation job.
" + } + }, + "JobName": { + "base": null, + "refs": { + "StartTextTranslationJobRequest$JobName": "The name of the batch translation job to be performed.
", + "TextTranslationJobFilter$JobName": "Filters the list of jobs by name.
", + "TextTranslationJobProperties$JobName": "The user-defined name of the translation job.
" + } + }, + "JobStatus": { + "base": null, + "refs": { + "StartTextTranslationJobResponse$JobStatus": "The status of the job. Possible values include:
SUBMITTED
- The job has been received and is queued for processing.
IN_PROGRESS
- Amazon Translate is processing the job.
COMPLETED
- The job was successfully completed and the output is available.
COMPLETED_WITH_ERRORS
- The job was completed with errors. The errors can be analyzed in the job's output.
FAILED
- The job did not complete. To get details, use the DescribeTextTranslationJob operation.
STOP_REQUESTED
- The user who started the job has requested that it be stopped.
STOPPED
- The job has been stopped.
The status of the designated job. Upon successful completion, the job's status will be STOPPED
.
Filters the list of jobs based by job status.
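A job that is SUBMITTED or IN_PROGRESS eventually settles into one of the terminal states above, so a caller typically polls DescribeTextTranslationJob, roughly as in this hedged Go sketch (the JobStatusXxx constant names are assumed to follow the generator's convention; the job ID and poll interval are placeholders):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/translate"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := translate.New(cfg)

	for {
		req := svc.DescribeTextTranslationJobRequest(&translate.DescribeTextTranslationJobInput{
			JobId: aws.String("example-job-id"),
		})
		resp, err := req.Send(context.Background())
		if err != nil {
			log.Fatal(err)
		}
		status := resp.TextTranslationJobProperties.JobStatus
		fmt.Println("status:", string(status))
		// Stop polling once the job leaves the queued/processing states.
		if status != translate.JobStatusSubmitted && status != translate.JobStatusInProgress {
			break
		}
		time.Sleep(30 * time.Second)
	}
}
```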
", + "TextTranslationJobProperties$JobStatus": "The status of the translation job.
" + } + }, "LanguageCodeString": { "base": null, "refs": { "DetectedLanguageLowConfidenceException$DetectedLanguageCode": "The language code of the auto-detected language from Amazon Comprehend.
", "LanguageCodeStringList$member": null, + "StartTextTranslationJobRequest$SourceLanguageCode": "The language code of the input language. For a list of language codes, see what-is-languages.
Amazon Translate does not automatically detect a source language during batch translation jobs.
", + "TargetLanguageCodeStringList$member": null, "TerminologyProperties$SourceLanguageCode": "The language code for the source text of the translation request for which the custom terminology is being used.
", - "TranslateTextRequest$SourceLanguageCode": "The language code for the language of the source text. The language must be a language supported by Amazon Translate.
To have Amazon Translate determine the source language of your text, you can specify auto
in the SourceLanguageCode
field. If you specify auto
, Amazon Translate will call Amazon Comprehend to determine the source language.
The language code of the language of the source text. The language must be a language supported by Amazon Translate.
", + "TranslateTextRequest$SourceLanguageCode": "The language code for the language of the source text. The language must be a language supported by Amazon Translate. For a list of language codes, see what-is-languages.
To have Amazon Translate determine the source language of your text, you can specify auto
in the SourceLanguageCode
field. If you specify auto
, Amazon Translate will call Amazon Comprehend to determine the source language.
The language code requested for the language of the target text. The language must be a language supported by Amazon Translate.
", - "TranslateTextResponse$SourceLanguageCode": "The language code for the language of the source text.
", + "TranslateTextResponse$SourceLanguageCode": "The language code for the language of the source text.
", "TranslateTextResponse$TargetLanguageCode": "The language code for the language of the target text.
", "UnsupportedLanguagePairException$SourceLanguageCode": "The language code for the language of the input text.
", "UnsupportedLanguagePairException$TargetLanguageCode": "The language code for the language of the translated text.
" @@ -140,10 +224,21 @@ "refs": { } }, + "ListTextTranslationJobsRequest": { + "base": null, + "refs": { + } + }, + "ListTextTranslationJobsResponse": { + "base": null, + "refs": { + } + }, "MaxResultsInteger": { "base": null, "refs": { - "ListTerminologiesRequest$MaxResults": "The maximum number of custom terminologies returned per list request.
" + "ListTerminologiesRequest$MaxResults": "The maximum number of custom terminologies returned per list request.
", + "ListTextTranslationJobsRequest$MaxResults": "The maximum number of results to return in each page. The default value is 100.
" } }, "MergeStrategy": { @@ -156,7 +251,16 @@ "base": null, "refs": { "ListTerminologiesRequest$NextToken": "If the result of the request to ListTerminologies was truncated, include the NextToken to fetch the next group of custom terminologies.
", - "ListTerminologiesResponse$NextToken": "If the response to the ListTerminologies was truncated, the NextToken fetches the next group of custom terminologies.
" + "ListTerminologiesResponse$NextToken": "If the response to the ListTerminologies was truncated, the NextToken fetches the next group of custom terminologies.
", + "ListTextTranslationJobsRequest$NextToken": "The token to request the next page of results.
", + "ListTextTranslationJobsResponse$NextToken": "The token to use to retreive the next page of results. This value is null
when there are no more results to return.
The output configuration properties for a batch translation job.
", + "refs": { + "StartTextTranslationJobRequest$OutputDataConfig": "Specifies the S3 folder to which your job output will be saved.
", + "TextTranslationJobProperties$OutputDataConfig": "The output configuration properties that were specified when the job was requested.
" } }, "ResourceName": { @@ -173,12 +277,21 @@ "ResourceNameList": { "base": null, "refs": { - "TranslateTextRequest$TerminologyNames": "The TerminologyNames list that is taken as input to the TranslateText request. This has a minimum length of 0 and a maximum length of 1.
" + "StartTextTranslationJobRequest$TerminologyNames": "The name of the terminology to use in the batch translation job. For a list of available terminologies, use the ListTerminologies operation.
", + "TextTranslationJobProperties$TerminologyNames": "A list containing the names of the terminologies applied to a translation job. Only one terminology can be applied per StartTextTranslationJob request at this time.
", + "TranslateTextRequest$TerminologyNames": "The name of the terminology list file to be used in the TranslateText request. You can use 1 terminology list at most in a TranslateText
request. Terminology lists can contain a maximum of 256 terms.
" } }, "ResourceNotFoundException": { - "base": "The resource you are looking for has not been found. Review the resource you're looking for and see if a different resource will accomplish your needs before retrying the revised request. .
", + "base": "The resource you are looking for has not been found. Review the resource you're looking for and see if a different resource will accomplish your needs before retrying the revised request.
", + "refs": { + } + }, + "S3Uri": { + "base": null, "refs": { + "InputDataConfig$S3Uri": "The URI of the AWS S3 folder that contains the input file. The folder must be in the same Region as the API endpoint you are calling.
", + "OutputDataConfig$S3Uri": "The URI of the S3 folder that contains a translation job's output file. The folder must be in the same Region as the API endpoint that you are calling.
" } }, "ServiceUnavailableException": { @@ -186,11 +299,32 @@ "refs": { } }, + "StartTextTranslationJobRequest": { + "base": null, + "refs": { + } + }, + "StartTextTranslationJobResponse": { + "base": null, + "refs": { + } + }, + "StopTextTranslationJobRequest": { + "base": null, + "refs": { + } + }, + "StopTextTranslationJobResponse": { + "base": null, + "refs": { + } + }, "String": { "base": null, "refs": { "DetectedLanguageLowConfidenceException$Message": null, "InternalServerException$Message": null, + "InvalidFilterException$Message": null, "InvalidParameterValueException$Message": null, "InvalidRequestException$Message": null, "LimitExceededException$Message": null, @@ -202,10 +336,17 @@ "TerminologyDataLocation$Location": "The location of the custom terminology data.
", "TextSizeLimitExceededException$Message": null, "TooManyRequestsException$Message": null, - "TranslateTextResponse$TranslatedText": "The the translated text. The maximum length of this text is 5kb.
", + "TranslateTextResponse$TranslatedText": "The translated text.
", "UnsupportedLanguagePairException$Message": null } }, + "TargetLanguageCodeStringList": { + "base": null, + "refs": { + "StartTextTranslationJobRequest$TargetLanguageCodes": "The language code of the output language.
", + "TextTranslationJobProperties$TargetLanguageCodes": "The language code of the language of the target text. The language must be a language supported by Amazon Translate.
" + } + }, "Term": { "base": "The term being translated by the custom terminology.
", "refs": { @@ -246,7 +387,7 @@ "TerminologyFile": { "base": null, "refs": { - "TerminologyData$File": "The file containing the custom terminology data.
" + "TerminologyData$File": "The file containing the custom terminology data. Your version of the AWS SDK performs a Base64-encoding on this field before sending a request to the AWS service. Users of the SDK should not perform Base64-encoding themselves.
" } }, "TerminologyProperties": { @@ -268,11 +409,34 @@ "refs": { } }, + "TextTranslationJobFilter": { + "base": "Provides information for filtering a list of translation jobs. For more information, see ListTextTranslationJobs.
", + "refs": { + "ListTextTranslationJobsRequest$Filter": "The parameters that specify which batch translation jobs to retrieve. Filters include job name, job status, and submission time. You can only set one filter at a time.
" + } + }, + "TextTranslationJobProperties": { + "base": "Provides information about a translation job.
", + "refs": { + "DescribeTextTranslationJobResponse$TextTranslationJobProperties": "An object that contains the properties associated with an asynchronous batch translation job.
", + "TextTranslationJobPropertiesList$member": null + } + }, + "TextTranslationJobPropertiesList": { + "base": null, + "refs": { + "ListTextTranslationJobsResponse$TextTranslationJobPropertiesList": "A list containing the properties of each job that is returned.
" + } + }, "Timestamp": { "base": null, "refs": { "TerminologyProperties$CreatedAt": "The time at which the custom terminology was created, based on the timestamp.
", - "TerminologyProperties$LastUpdatedAt": "The time at which the custom terminology was last update, based on the timestamp.
" + "TerminologyProperties$LastUpdatedAt": "The time at which the custom terminology was last update, based on the timestamp.
", + "TextTranslationJobFilter$SubmittedBeforeTime": "Filters the list of jobs based on the time that the job was submitted for processing and returns only the jobs submitted before the specified time. Jobs are returned in ascending order, oldest to newest.
", + "TextTranslationJobFilter$SubmittedAfterTime": "Filters the list of jobs based on the time that the job was submitted for processing and returns only the jobs submitted after the specified time. Jobs are returned in descending order, newest to oldest.
", + "TextTranslationJobProperties$SubmittedTime": "The time at which the translation job was submitted.
", + "TextTranslationJobProperties$EndTime": "The time at which the translation job ended.
" } }, "TooManyRequestsException": { @@ -290,6 +454,12 @@ "refs": { } }, + "UnboundedLengthString": { + "base": null, + "refs": { + "TextTranslationJobProperties$Message": "An explanation of any errors that may have occured during the translation job.
" + } + }, "UnsupportedLanguagePairException": { "base": "Amazon Translate does not support translation from the language of the source text into the requested target language. For more information, see how-to-error-msg.
", "refs": { diff --git a/models/apis/translate/2017-07-01/paginators-1.json b/models/apis/translate/2017-07-01/paginators-1.json index 5677bd8e4a2..67f44714aca 100644 --- a/models/apis/translate/2017-07-01/paginators-1.json +++ b/models/apis/translate/2017-07-01/paginators-1.json @@ -1,4 +1,14 @@ { "pagination": { + "ListTerminologies": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + }, + "ListTextTranslationJobs": { + "input_token": "NextToken", + "limit_key": "MaxResults", + "output_token": "NextToken" + } } -} +} \ No newline at end of file diff --git a/models/apis/workspaces/2015-04-08/api-2.json b/models/apis/workspaces/2015-04-08/api-2.json index e4df4d15c39..7878c1552bf 100644 --- a/models/apis/workspaces/2015-04-08/api-2.json +++ b/models/apis/workspaces/2015-04-08/api-2.json @@ -347,6 +347,23 @@ {"shape":"AccessDeniedException"} ] }, + "MigrateWorkspace":{ + "name":"MigrateWorkspace", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"MigrateWorkspaceRequest"}, + "output":{"shape":"MigrateWorkspaceResult"}, + "errors":[ + {"shape":"InvalidParameterValuesException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"AccessDeniedException"}, + {"shape":"OperationNotSupportedException"}, + {"shape":"OperationInProgressException"}, + {"shape":"ResourceUnavailableException"} + ] + }, "ModifyAccount":{ "name":"ModifyAccount", "http":{ @@ -1180,6 +1197,24 @@ "max":5, "min":1 }, + "MigrateWorkspaceRequest":{ + "type":"structure", + "required":[ + "SourceWorkspaceId", + "BundleId" + ], + "members":{ + "SourceWorkspaceId":{"shape":"WorkspaceId"}, + "BundleId":{"shape":"BundleId"} + } + }, + "MigrateWorkspaceResult":{ + "type":"structure", + "members":{ + "SourceWorkspaceId":{"shape":"WorkspaceId"}, + "TargetWorkspaceId":{"shape":"WorkspaceId"} + } + }, "ModificationResourceEnum":{ "type":"string", "enum":[ diff --git a/models/apis/workspaces/2015-04-08/docs-2.json b/models/apis/workspaces/2015-04-08/docs-2.json index 794f3f8cfc5..bfd3190dd96 100644 --- a/models/apis/workspaces/2015-04-08/docs-2.json +++ b/models/apis/workspaces/2015-04-08/docs-2.json @@ -26,17 +26,18 @@ "DisassociateIpGroups": "Disassociates the specified IP access control group from the specified directory.
", "ImportWorkspaceImage": "Imports the specified Windows 7 or Windows 10 Bring Your Own License (BYOL) image into Amazon WorkSpaces. The image must be an already licensed EC2 image that is in your AWS account, and you must own the image.
", "ListAvailableManagementCidrRanges": "Retrieves a list of IP address ranges, specified as IPv4 CIDR blocks, that you can use for the network management interface when you enable Bring Your Own License (BYOL).
The management network interface is connected to a secure Amazon WorkSpaces management network. It is used for interactive streaming of the WorkSpace desktop to Amazon WorkSpaces clients, and to allow Amazon WorkSpaces to manage the WorkSpace.
", + "MigrateWorkspace": "Migrates a WorkSpace from one operating system or bundle type to another, while retaining the data on the user volume.
The migration process recreates the WorkSpace by using a new root volume from the target bundle image and the user volume from the last available snapshot of the original WorkSpace. During migration, the original D:\\Users\\%USERNAME%
user profile folder is renamed to D:\\Users\\%USERNAME%MMddyyTHHmmss%.NotMigrated
. A new D:\\Users\\%USERNAME%\\
folder is generated by the new OS. Certain files in the old user profile are moved to the new user profile.
For available migration scenarios, details about what happens during migration, and best practices, see Migrate a WorkSpace.
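In Go the new operation reduces to a single call, roughly as below (hedged sketch; both identifiers are placeholders, and on failure the WorkSpace keeps its original ID, as noted in the shape docs further down):

```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/workspaces"
)

func main() {
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		log.Fatal(err)
	}
	svc := workspaces.New(cfg)

	// Rebuilds the root volume from the target bundle; the user volume
	// comes from the last available snapshot of the source WorkSpace.
	req := svc.MigrateWorkspaceRequest(&workspaces.MigrateWorkspaceInput{
		SourceWorkspaceId: aws.String("ws-0123456789a"),
		BundleId:          aws.String("wsb-0123456789a"),
	})
	resp, err := req.Send(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	// On success the migrated WorkSpace receives a new identifier.
	fmt.Println("target WorkSpace:", aws.StringValue(resp.TargetWorkspaceId))
}
```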
", "ModifyAccount": "Modifies the configuration of Bring Your Own License (BYOL) for the specified account.
", "ModifyClientProperties": "Modifies the properties of the specified Amazon WorkSpaces clients.
", "ModifySelfservicePermissions": "Modifies the self-service WorkSpace management capabilities for your users. For more information, see Enable Self-Service WorkSpace Management Capabilities for Your Users.
", - "ModifyWorkspaceAccessProperties": "Specifies which devices and operating systems users can use to access their Workspaces. For more information, see Control Device Access.
", + "ModifyWorkspaceAccessProperties": "Specifies which devices and operating systems users can use to access their WorkSpaces. For more information, see Control Device Access.
", "ModifyWorkspaceCreationProperties": "Modify the default properties used to create WorkSpaces.
", "ModifyWorkspaceProperties": "Modifies the specified WorkSpace properties.
", "ModifyWorkspaceState": "Sets the state of the specified WorkSpace.
To maintain a WorkSpace without being interrupted, set the WorkSpace state to ADMIN_MAINTENANCE
. WorkSpaces in this state do not respond to requests to reboot, stop, start, rebuild, or restore. An AutoStop WorkSpace in this state is not stopped. Users cannot log into a WorkSpace in the ADMIN_MAINTENANCE
state.
", "RebootWorkspaces": "Reboots the specified WorkSpaces. You cannot reboot a WorkSpace unless its state is AVAILABLE or UNHEALTHY. This operation is asynchronous and returns before the WorkSpaces have rebooted.
", - "RebuildWorkspaces": "Rebuilds the specified WorkSpace.
You cannot rebuild a WorkSpace unless its state is AVAILABLE
, ERROR
, or UNHEALTHY
.
Rebuilding a WorkSpace is a potentially destructive action that can result in the loss of data. For more information, see Rebuild a WorkSpace.
This operation is asynchronous and returns before the WorkSpaces have been completely rebuilt.
", + "RebuildWorkspaces": "Rebuilds the specified WorkSpace.
You cannot rebuild a WorkSpace unless its state is AVAILABLE
, ERROR
, UNHEALTHY
, or STOPPED
.
Rebuilding a WorkSpace is a potentially destructive action that can result in the loss of data. For more information, see Rebuild a WorkSpace.
This operation is asynchronous and returns before the WorkSpaces have been completely rebuilt.
", "RegisterWorkspaceDirectory": "Registers the specified directory. This operation is asynchronous and returns before the WorkSpace directory is registered. If this is the first time you are registering a directory, you will need to create the workspaces_DefaultRole role before you can register a directory. For more information, see Creating the workspaces_DefaultRole Role.
", - "RestoreWorkspace": "Restores the specified WorkSpace to its last known healthy state.
You cannot restore a WorkSpace unless its state is AVAILABLE
, ERROR
, or UNHEALTHY
.
Restoring a WorkSpace is a potentially destructive action that can result in the loss of data. For more information, see Restore a WorkSpace.
This operation is asynchronous and returns before the WorkSpace is completely restored.
", + "RestoreWorkspace": "Restores the specified WorkSpace to its last known healthy state.
You cannot restore a WorkSpace unless its state is AVAILABLE
, ERROR
, UNHEALTHY
, or STOPPED
.
Restoring a WorkSpace is a potentially destructive action that can result in the loss of data. For more information, see Restore a WorkSpace.
This operation is asynchronous and returns before the WorkSpace is completely restored.
", "RevokeIpRules": "Removes one or more rules from the specified IP access control group.
", "StartWorkspaces": "Starts the specified WorkSpaces.
You cannot start a WorkSpace unless it has a running mode of AutoStop
and a state of STOPPED
.
Stops the specified WorkSpaces.
You cannot stop a WorkSpace unless it has a running mode of AutoStop
and a state of AVAILABLE
, IMPAIRED
, UNHEALTHY
, or ERROR
.
The identifier of the bundle. All WorkSpaces that are created from this bundle are retrieved. You cannot combine this parameter with any other filter.
+        "MigrateWorkspaceRequest$BundleId": "The identifier of the target bundle type to migrate the WorkSpace to.",
         "Workspace$BundleId": "The identifier of the bundle used to create the WorkSpace.",
         "WorkspaceBundle$BundleId": "The bundle identifier.",
         "WorkspaceRequest$BundleId": "The identifier of the bundle for the WorkSpace. You can use DescribeWorkspaceBundles to list the available bundles."
@@ -706,6 +708,16 @@
         "ListAvailableManagementCidrRangesRequest$MaxResults": "The maximum number of items to return."
       }
     },
+    "MigrateWorkspaceRequest": {
+      "base": null,
+      "refs": {
+      }
+    },
+    "MigrateWorkspaceResult": {
+      "base": null,
+      "refs": {
+      }
+    },
     "ModificationResourceEnum": {
       "base": null,
       "refs": {
@@ -1126,7 +1138,7 @@
       "refs": {
         "CopyWorkspaceImageRequest$Tags": "The tags for the image.",
         "CreateIpGroupRequest$Tags": "The tags. Each WorkSpaces resource can have a maximum of 50 tags.",
-        "CreateTagsRequest$Tags": "The tags. Each WorkSpaces resource can have a maximum of 50 tags.",
+        "CreateTagsRequest$Tags": "The tags. Each WorkSpaces resource can have a maximum of 50 tags. If you want to add new tags to a set of existing tags, you must submit all of the existing tags along with the new ones.",
         "DescribeTagsResult$TagList": "The tags.",
         "ImportWorkspaceImageRequest$Tags": "The tags. Each WorkSpaces resource can have a maximum of 50 tags.",
         "RegisterWorkspaceDirectoryRequest$Tags": "The tags associated with the directory.",
@@ -1228,8 +1240,8 @@
     "VolumeEncryptionKey": {
       "base": null,
       "refs": {
-        "Workspace$VolumeEncryptionKey": "The KMS key used to encrypt data stored on your WorkSpace.",
-        "WorkspaceRequest$VolumeEncryptionKey": "The KMS key used to encrypt data stored on your WorkSpace."
+        "Workspace$VolumeEncryptionKey": "The symmetric AWS KMS customer master key (CMK) used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric CMKs.",
+        "WorkspaceRequest$VolumeEncryptionKey": "The symmetric AWS KMS customer master key (CMK) used to encrypt data stored on your WorkSpace. Amazon WorkSpaces does not support asymmetric CMKs."
       }
     },
     "Workspace": {
@@ -1242,7 +1254,7 @@
       "base": "The device types and operating systems that can be used to access a WorkSpace. For more information, see Amazon WorkSpaces Client Network Requirements.",
       "refs": {
         "ModifyWorkspaceAccessPropertiesRequest$WorkspaceAccessProperties": "The device types and operating systems to enable or disable for access.",
-        "WorkspaceDirectory$WorkspaceAccessProperties": "The devices and operating systems that users can use to access Workspaces."
+        "WorkspaceDirectory$WorkspaceAccessProperties": "The devices and operating systems that users can use to access WorkSpaces."
       }
     },
     "WorkspaceBundle": {
@@ -1299,6 +1311,9 @@
       "refs": {
         "DescribeWorkspaceSnapshotsRequest$WorkspaceId": "The identifier of the WorkSpace.",
         "FailedWorkspaceChangeRequest$WorkspaceId": "The identifier of the WorkSpace.",
+        "MigrateWorkspaceRequest$SourceWorkspaceId": "The identifier of the WorkSpace to migrate from.",
+        "MigrateWorkspaceResult$SourceWorkspaceId": "The original identifier of the WorkSpace that is being migrated.",
+        "MigrateWorkspaceResult$TargetWorkspaceId": "The new identifier of the WorkSpace that is being migrated. If the migration does not succeed, the target WorkSpace ID will not be used, and the WorkSpace will still have the original WorkSpace ID.",
         "ModifyWorkspacePropertiesRequest$WorkspaceId": "The identifier of the WorkSpace.",
         "ModifyWorkspaceStateRequest$WorkspaceId": "The identifier of the WorkSpace.",
         "RebootRequest$WorkspaceId": "The identifier of the WorkSpace.",
", "GetSamplingStatisticSummaries": "Retrieves information about recent sampling results for all sampling rules.
", "GetSamplingTargets": "Requests a sampling quota for rules that the service is using to sample requests.
", - "GetServiceGraph": "Retrieves a document that describes services that process incoming requests, and downstream services that they call as a result. Root services process incoming requests and make calls to downstream services. Root services are applications that use the AWS X-Ray SDK. Downstream services can be other applications, AWS resources, HTTP web APIs, or SQL databases.
", + "GetServiceGraph": "Retrieves a document that describes services that process incoming requests, and downstream services that they call as a result. Root services process incoming requests and make calls to downstream services. Root services are applications that use the AWS X-Ray SDK. Downstream services can be other applications, AWS resources, HTTP web APIs, or SQL databases.
", "GetTimeSeriesServiceStatistics": "Get an aggregation of service statistics defined by a specific time range.
", "GetTraceGraph": "Retrieves a service graph for one or more specific trace IDs.
", - "GetTraceSummaries": "Retrieves IDs and metadata for traces available for a specified time frame using an optional filter. To get the full traces, pass the trace IDs to BatchGetTraces
.
A filter expression can target traced requests that hit specific service nodes or edges, have errors, or come from a known user. For example, the following filter expression targets traces that pass through api.example.com
:
service(\"api.example.com\")
This filter expression finds traces that have an annotation named account
with the value 12345
:
annotation.account = \"12345\"
For a full list of indexed fields and keywords that you can use in filter expressions, see Using Filter Expressions in the AWS X-Ray Developer Guide.
", + "GetTraceSummaries": "Retrieves IDs and annotations for traces available for a specified time frame using an optional filter. To get the full traces, pass the trace IDs to BatchGetTraces
.
A filter expression can target traced requests that hit specific service nodes or edges, have errors, or come from a known user. For example, the following filter expression targets traces that pass through api.example.com
:
service(\"api.example.com\")
This filter expression finds traces that have an annotation named account
with the value 12345
:
annotation.account = \"12345\"
For a full list of indexed fields and keywords that you can use in filter expressions, see Using Filter Expressions in the AWS X-Ray Developer Guide.
", "PutEncryptionConfig": "Updates the encryption configuration for X-Ray data.
", "PutTelemetryRecords": "Used by the AWS X-Ray daemon to upload telemetry.
", - "PutTraceSegments": "Uploads segment documents to AWS X-Ray. The X-Ray SDK generates segment documents and sends them to the X-Ray daemon, which uploads them in batches. A segment document can be a completed segment, an in-progress segment, or an array of subsegments.
Segments must include the following fields. For the full segment document schema, see AWS X-Ray Segment Documents in the AWS X-Ray Developer Guide.
Required Segment Document Fields
name
- The name of the service that handled the request.
id
- A 64-bit identifier for the segment, unique among segments in the same trace, in 16 hexadecimal digits.
trace_id
- A unique identifier that connects all segments and subsegments originating from a single client request.
start_time
- Time the segment or subsegment was created, in floating point seconds in epoch time, accurate to milliseconds. For example, 1480615200.010
or 1.480615200010E9
.
end_time
- Time the segment or subsegment was closed. For example, 1480615200.090
or 1.480615200090E9
. Specify either an end_time
or in_progress
.
in_progress
- Set to true
instead of specifying an end_time
to record that a segment has been started, but is not complete. Send an in progress segment when your application receives a request that will take a long time to serve, to trace the fact that the request was received. When the response is sent, send the complete segment to overwrite the in-progress segment.
A trace_id
consists of three numbers separated by hyphens. For example, 1-58406520-a006649127e371903a2de979. This includes:
Trace ID Format
The version number, i.e. 1
.
The time of the original request, in Unix epoch time, in 8 hexadecimal digits. For example, 10:00AM December 2nd, 2016 PST in epoch time is 1480615200
seconds, or 58406520
in hexadecimal.
A 96-bit identifier for the trace, globally unique, in 24 hexadecimal digits.
Uploads segment documents to AWS X-Ray. The X-Ray SDK generates segment documents and sends them to the X-Ray daemon, which uploads them in batches. A segment document can be a completed segment, an in-progress segment, or an array of subsegments.
Segments must include the following fields. For the full segment document schema, see AWS X-Ray Segment Documents in the AWS X-Ray Developer Guide.
Required Segment Document Fields
name
- The name of the service that handled the request.
id
- A 64-bit identifier for the segment, unique among segments in the same trace, in 16 hexadecimal digits.
trace_id
- A unique identifier that connects all segments and subsegments originating from a single client request.
start_time
- Time the segment or subsegment was created, in floating point seconds in epoch time, accurate to milliseconds. For example, 1480615200.010
or 1.480615200010E9
.
end_time
- Time the segment or subsegment was closed. For example, 1480615200.090
or 1.480615200090E9
. Specify either an end_time
or in_progress
.
in_progress
- Set to true
instead of specifying an end_time
to record that a segment has been started, but is not complete. Send an in progress segment when your application receives a request that will take a long time to serve, to trace the fact that the request was received. When the response is sent, send the complete segment to overwrite the in-progress segment.
A trace_id
consists of three numbers separated by hyphens. For example, 1-58406520-a006649127e371903a2de979. This includes:
Trace ID Format
The version number, i.e. 1
.
The time of the original request, in Unix epoch time, in 8 hexadecimal digits. For example, 10:00AM December 2nd, 2016 PST in epoch time is 1480615200
seconds, or 58406520
in hexadecimal.
A 96-bit identifier for the trace, globally unique, in 24 hexadecimal digits.
Updates a group resource.
", "UpdateSamplingRule": "Modifies a sampling rule's configuration.
" }, @@ -202,7 +202,7 @@ "EncryptionKeyId": { "base": null, "refs": { - "PutEncryptionConfigRequest$KeyId": "An AWS KMS customer master key (CMK) in one of the following formats:
Alias - The name of the key. For example, alias/MyKey
.
Key ID - The KMS key ID of the key. For example, ae4aa6d49-a4d8-9df9-a475-4ff6d7898456
.
ARN - The full Amazon Resource Name of the key ID or alias. For example, arn:aws:kms:us-east-2:123456789012:key/ae4aa6d49-a4d8-9df9-a475-4ff6d7898456
. Use this format to specify a key in a different account.
Omit this key if you set Type
to NONE
.
An AWS KMS customer master key (CMK) in one of the following formats:
Alias - The name of the key. For example, alias/MyKey
.
Key ID - The KMS key ID of the key. For example, ae4aa6d49-a4d8-9df9-a475-4ff6d7898456
. AWS X-Ray does not support asymmetric CMKs.
ARN - The full Amazon Resource Name of the key ID or alias. For example, arn:aws:kms:us-east-2:123456789012:key/ae4aa6d49-a4d8-9df9-a475-4ff6d7898456
. Use this format to specify a key in a different account.
Omit this key if you set Type
to NONE
.
Pagination token. Not used.
" + "GetGroupsRequest$NextToken": "Pagination token.
" } }, "GetGroupsRequest": { @@ -553,8 +553,8 @@ "ResponseTimeRootCauseEntity$Remote": "A flag that denotes a remote subsegment.
", "ResponseTimeRootCauseService$Inferred": "A Boolean value indicating if the service is inferred from the trace.
", "Service$Root": "Indicates that the service was the first service to process a request.
", - "TraceSummary$HasFault": "One or more of the segment documents has a 500 series error.
", - "TraceSummary$HasError": "One or more of the segment documents has a 400 series error.
", + "TraceSummary$HasFault": "The root segment document has a 500 series error.
", + "TraceSummary$HasError": "The root segment document has a 400 series error.
", "TraceSummary$HasThrottle": "One or more of the segment documents has a 429 throttling error.
", "TraceSummary$IsPartial": "One or more of the segment documents is in progress.
" } @@ -908,8 +908,8 @@ "AliasNames$member": null, "AnnotationValue$StringValue": "Value for a String annotation.
", "AvailabilityZoneDetail$Name": "The name of a corresponding availability zone.
", - "BatchGetTracesRequest$NextToken": "Pagination token. Not used.
", - "BatchGetTracesResult$NextToken": "Pagination token. Not used.
", + "BatchGetTracesRequest$NextToken": "Pagination token.
", + "BatchGetTracesResult$NextToken": "Pagination token.
", "DeleteSamplingRuleRequest$RuleName": "The name of the sampling rule. Specify a rule by either name or ARN, but not both.
", "DeleteSamplingRuleRequest$RuleARN": "The ARN of the sampling rule. Specify a rule by either name or ARN, but not both.
", "EncryptionConfig$KeyId": "The ID of the customer master key (CMK) used for encryption, if applicable.
", @@ -921,17 +921,17 @@ "FaultRootCauseService$Name": "The service name.
", "FaultRootCauseService$Type": "The type associated to the service.
", "FaultRootCauseService$AccountId": "The account ID associated to the service.
", - "GetGroupsResult$NextToken": "Pagination token. Not used.
", - "GetSamplingRulesRequest$NextToken": "Pagination token. Not used.
", - "GetSamplingRulesResult$NextToken": "Pagination token. Not used.
", - "GetSamplingStatisticSummariesRequest$NextToken": "Pagination token. Not used.
", - "GetSamplingStatisticSummariesResult$NextToken": "Pagination token. Not used.
", - "GetServiceGraphRequest$NextToken": "Pagination token. Not used.
", - "GetServiceGraphResult$NextToken": "Pagination token. Not used.
", - "GetTimeSeriesServiceStatisticsRequest$NextToken": "Pagination token. Not used.
", - "GetTimeSeriesServiceStatisticsResult$NextToken": "Pagination token. Not used.
", - "GetTraceGraphRequest$NextToken": "Pagination token. Not used.
", - "GetTraceGraphResult$NextToken": "Pagination token. Not used.
", + "GetGroupsResult$NextToken": "Pagination token.
", + "GetSamplingRulesRequest$NextToken": "Pagination token.
", + "GetSamplingRulesResult$NextToken": "Pagination token.
", + "GetSamplingStatisticSummariesRequest$NextToken": "Pagination token.
", + "GetSamplingStatisticSummariesResult$NextToken": "Pagination token.
", + "GetServiceGraphRequest$NextToken": "Pagination token.
", + "GetServiceGraphResult$NextToken": "Pagination token.
", + "GetTimeSeriesServiceStatisticsRequest$NextToken": "Pagination token.
", + "GetTimeSeriesServiceStatisticsResult$NextToken": "Pagination token.
", + "GetTraceGraphRequest$NextToken": "Pagination token.
", + "GetTraceGraphResult$NextToken": "Pagination token.
", "GetTraceSummariesRequest$NextToken": "Specify the pagination token returned by a previous request to retrieve the next page of results.
", "GetTraceSummariesResult$NextToken": "If the requested time frame contained more than one page of results, you can use this token to retrieve the next page. The first page contains the most most recent results, closest to the end of the time frame.
", "Group$GroupName": "The unique case-sensitive name of the group.
", @@ -1102,7 +1102,7 @@ "TraceSummaryList": { "base": null, "refs": { - "GetTraceSummariesResult$TraceSummaries": "Trace IDs and metadata for traces that were found in the specified time frame.
" + "GetTraceSummariesResult$TraceSummaries": "Trace IDs and annotations for traces that were found in the specified time frame.
" } }, "TraceUser": { diff --git a/models/endpoints/endpoints.json b/models/endpoints/endpoints.json index d075bfa9980..286ce357f61 100644 --- a/models/endpoints/endpoints.json +++ b/models/endpoints/endpoints.json @@ -2060,6 +2060,7 @@ "eu-west-1" : { }, "eu-west-2" : { }, "eu-west-3" : { }, + "me-south-1" : { }, "sa-east-1" : { }, "us-east-1" : { }, "us-east-2" : { }, @@ -4273,6 +4274,12 @@ "cn-northwest-1" : { } } }, + "elasticfilesystem" : { + "endpoints" : { + "cn-north-1" : { }, + "cn-northwest-1" : { } + } + }, "elasticloadbalancing" : { "defaults" : { "protocols" : [ "https" ] @@ -4713,6 +4720,15 @@ } } }, + "autoscaling-plans" : { + "defaults" : { + "protocols" : [ "http", "https" ] + }, + "endpoints" : { + "us-gov-east-1" : { }, + "us-gov-west-1" : { } + } + }, "clouddirectory" : { "endpoints" : { "us-gov-west-1" : { } diff --git a/service/alexaforbusiness/api_types.go b/service/alexaforbusiness/api_types.go index 2bce6d8adf0..108dc0d76c4 100644 --- a/service/alexaforbusiness/api_types.go +++ b/service/alexaforbusiness/api_types.go @@ -634,6 +634,9 @@ func (s Device) String() string { type DeviceData struct { _ struct{} `type:"structure"` + // The time (in epoch) when the device data was created. + CreatedTime *time.Time `type:"timestamp"` + // The ARN of a device. DeviceArn *string `type:"string"` @@ -737,6 +740,9 @@ type DeviceStatusInfo struct { // The latest available information about the connection status of a device. ConnectionStatus ConnectionStatus `type:"string" enum:"true"` + // The time (in epoch) when the device connection status changed. + ConnectionStatusUpdatedTime *time.Time `type:"timestamp"` + // One or more device status detail descriptions. DeviceStatusDetails []DeviceStatusDetail `type:"list"` } diff --git a/service/applicationdiscoveryservice/api_doc.go b/service/applicationdiscoveryservice/api_doc.go index 3f575c2673c..01f106bc1fa 100644 --- a/service/applicationdiscoveryservice/api_doc.go +++ b/service/applicationdiscoveryservice/api_doc.go @@ -3,51 +3,47 @@ // Package applicationdiscoveryservice provides the client and types for making API // requests to AWS Application Discovery Service. // -// AWS Application Discovery Service helps you plan application migration projects -// by automatically identifying servers, virtual machines (VMs), software, and -// software dependencies running in your on-premises data centers. Application -// Discovery Service also collects application performance data, which can help -// you assess the outcome of your migration. The data collected by Application -// Discovery Service is securely retained in an AWS-hosted and managed database -// in the cloud. You can export the data as a CSV or XML file into your preferred -// visualization tool or cloud-migration solution to plan your migration. For -// more information, see AWS Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). -// -// Application Discovery Service offers two modes of operation: -// -// * Agentless discovery mode is recommended for environments that use VMware +// AWS Application Discovery Service helps you plan application migration projects. +// It automatically identifies servers, virtual machines (VMs), and network +// dependencies in your on-premises data centers. For more information, see +// the AWS Application Discovery Service FAQ (http://aws.amazon.com/application-discovery/faqs/). 
+// Application Discovery Service offers three ways of performing discovery and +// collecting data about your on-premises servers: +// +// * Agentless discovery is recommended for environments that use VMware // vCenter Server. This mode doesn't require you to install an agent on each -// host. Agentless discovery gathers server information regardless of the -// operating systems, which minimizes the time required for initial on-premises -// infrastructure assessment. Agentless discovery doesn't collect information -// about software and software dependencies. It also doesn't work in non-VMware -// environments. -// -// * Agent-based discovery mode collects a richer set of data than agentless -// discovery by using the AWS Application Discovery Agent, which you install -// on one or more hosts in your data center. The agent captures infrastructure -// and application information, including an inventory of installed software -// applications, system and process performance, resource utilization, and -// network dependencies between workloads. The information collected by agents -// is secured at rest and in transit to the Application Discovery Service -// database in the cloud. -// -// We recommend that you use agent-based discovery for non-VMware environments -// and to collect information about software and software dependencies. You -// can also run agent-based and agentless discovery simultaneously. Use agentless -// discovery to quickly complete the initial infrastructure assessment and then -// install agents on select hosts. -// -// Application Discovery Service integrates with application discovery solutions -// from AWS Partner Network (APN) partners. Third-party application discovery -// tools can query Application Discovery Service and write to the Application -// Discovery Service database using a public API. You can then import the data -// into either a visualization tool or cloud-migration solution. -// -// Application Discovery Service doesn't gather sensitive information. All data -// is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). -// You can operate Application Discovery Service offline to inspect collected -// data before it is shared with the service. +// host. It does not work in non-VMware environments. Agentless discovery +// gathers server information regardless of the operating systems, which +// minimizes the time required for initial on-premises infrastructure assessment. +// Agentless discovery doesn't collect information about network dependencies, +// only agent-based discovery collects that information. +// +// * Agent-based discovery collects a richer set of data than agentless discovery +// by using the AWS Application Discovery Agent, which you install on one +// or more hosts in your data center. The agent captures infrastructure and +// application information, including an inventory of running processes, +// system performance information, resource utilization, and network dependencies. +// The information collected by agents is secured at rest and in transit +// to the Application Discovery Service database in the cloud. +// +// * AWS Partner Network (APN) solutions integrate with Application Discovery +// Service, enabling you to import details of your on-premises environment +// directly into Migration Hub without using the discovery connector or discovery +// agent. 
Third-party application discovery tools can query AWS Application +// Discovery Service, and they can write to the Application Discovery Service +// database using the public API. In this way, you can import data into Migration +// Hub and view it, so that you can associate applications with servers and +// track migrations. +// +// Recommendations +// +// We recommend that you use agent-based discovery for non-VMware environments, +// and whenever you want to collect information about network dependencies. +// You can run agent-based and agentless discovery simultaneously. Use agentless +// discovery to complete the initial infrastructure assessment quickly, and +// then install agents on select hosts to collect additional information. +// +// Working With This Guide // // This API reference provides descriptions, syntax, and usage examples for // each of the actions and data types for Application Discovery Service. The @@ -56,12 +52,29 @@ // to the programming language or platform that you're using. For more information, // see AWS SDKs (http://aws.amazon.com/tools/#SDKs). // +// * Remember that you must set your Migration Hub home region before you +// call any of these APIs. +// +// * You must make API calls for write actions (create, notify, associate, +// disassociate, import, or put) while in your home region, or a HomeRegionNotSetException +// error is returned. +// +// * API calls for read actions (list, describe, stop, and delete) are permitted +// outside of your home region. +// +// * Although it is unlikely, the Migration Hub home region could change. +// If you call APIs outside the home region, an InvalidInputException is +// returned. +// +// * You must call GetHomeRegion to obtain the latest Migration Hub home +// region. +// // This guide is intended for use with the AWS Application Discovery Service // User Guide (http://docs.aws.amazon.com/application-discovery/latest/userguide/). // -// Remember that you must set your AWS Migration Hub home region before you -// call any of these APIs, or a HomeRegionNotSetException error will be returned. -// Also, you must make the API calls while in your home region. +// All data is handled according to the AWS Privacy Policy (http://aws.amazon.com/privacy/). +// You can operate Application Discovery Service offline to inspect collected +// data before it is shared with the service. // // See https://docs.aws.amazon.com/goto/WebAPI/discovery-2015-11-01 for more information on this service. // diff --git a/service/applicationdiscoveryservice/api_op_DescribeConfigurations.go b/service/applicationdiscoveryservice/api_op_DescribeConfigurations.go index 88c7f1a3be6..ed42dd19fe3 100644 --- a/service/applicationdiscoveryservice/api_op_DescribeConfigurations.go +++ b/service/applicationdiscoveryservice/api_op_DescribeConfigurations.go @@ -72,7 +72,8 @@ const opDescribeConfigurations = "DescribeConfigurations" // etc. // // For a complete list of outputs for each asset type, see Using the DescribeConfigurations -// Action (http://docs.aws.amazon.com/application-discovery/latest/APIReference/discovery-api-queries.html#DescribeConfigurations). +// Action (https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-api-queries.html#DescribeConfigurations) +// in the AWS Application Discovery Service User Guide. // // // Example sending a request using DescribeConfigurationsRequest. 
// req := client.DescribeConfigurationsRequest(params) diff --git a/service/applicationdiscoveryservice/api_op_ListConfigurations.go b/service/applicationdiscoveryservice/api_op_ListConfigurations.go index a8be0c58a57..e77c34a340d 100644 --- a/service/applicationdiscoveryservice/api_op_ListConfigurations.go +++ b/service/applicationdiscoveryservice/api_op_ListConfigurations.go @@ -24,7 +24,8 @@ type ListConfigurationsInput struct { // {"key": "serverType", "value": "webServer"} // // For a complete list of filter options and guidance about using them with - // this action, see Querying Discovered Configuration Items (http://docs.aws.amazon.com/application-discovery/latest/APIReference/discovery-api-queries.html#ListConfigurations). + // this action, see Using the ListConfigurations Action (https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-api-queries.html#ListConfigurations) + // in the AWS Application Discovery Service User Guide. Filters []Filter `locationName:"filters" type:"list"` // The total number of items to return. The maximum value is 100. @@ -38,7 +39,8 @@ type ListConfigurationsInput struct { // Certain filter criteria return output that can be sorted in ascending or // descending order. For a list of output characteristics for each filter, see - // Using the ListConfigurations Action (http://docs.aws.amazon.com/application-discovery/latest/APIReference/discovery-api-queries.html#ListConfigurations). + // Using the ListConfigurations Action (https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-api-queries.html#ListConfigurations) + // in the AWS Application Discovery Service User Guide. OrderBy []OrderByElement `locationName:"orderBy" type:"list"` } @@ -99,7 +101,7 @@ const opListConfigurations = "ListConfigurations" // AWS Application Discovery Service. // // Retrieves a list of configuration items as specified by the value passed -// to the required paramater configurationType. Optional filtering may be applied +// to the required parameter configurationType. Optional filtering may be applied // to refine search results. // // // Example sending a request using ListConfigurationsRequest. diff --git a/service/applicationdiscoveryservice/api_op_StartImportTask.go b/service/applicationdiscoveryservice/api_op_StartImportTask.go index 193a1b68ef0..04df62322c4 100644 --- a/service/applicationdiscoveryservice/api_op_StartImportTask.go +++ b/service/applicationdiscoveryservice/api_op_StartImportTask.go @@ -89,11 +89,11 @@ const opStartImportTask = "StartImportTask" // AWS Application Discovery Service. // // Starts an import task, which allows you to import details of your on-premises -// environment directly into AWS without having to use the Application Discovery -// Service (ADS) tools such as the Discovery Connector or Discovery Agent. This -// gives you the option to perform migration assessment and planning directly -// from your imported data, including the ability to group your devices as applications -// and track their migration status. +// environment directly into AWS Migration Hub without having to use the Application +// Discovery Service (ADS) tools such as the Discovery Connector or Discovery +// Agent. This gives you the option to perform migration assessment and planning +// directly from your imported data, including the ability to group your devices +// as applications and track their migration status. 
// // To start an import request, do this: // diff --git a/service/applicationdiscoveryservice/api_types.go b/service/applicationdiscoveryservice/api_types.go index 5fde39e4c4e..e1fc7e1c398 100644 --- a/service/applicationdiscoveryservice/api_types.go +++ b/service/applicationdiscoveryservice/api_types.go @@ -429,7 +429,8 @@ func (s ExportInfo) String() string { // A filter that can use conditional operators. // // For more information about filters, see Querying Discovered Configuration -// Items (http://docs.aws.amazon.com/application-discovery/latest/APIReference/discovery-api-queries.html). +// Items (https://docs.aws.amazon.com/application-discovery/latest/userguide/discovery-api-queries.html) +// in the AWS Application Discovery Service User Guide. type Filter struct { _ struct{} `type:"structure"` diff --git a/service/applicationinsights/api_enums.go b/service/applicationinsights/api_enums.go index a647f07c981..4feee704199 100644 --- a/service/applicationinsights/api_enums.go +++ b/service/applicationinsights/api_enums.go @@ -2,6 +2,42 @@ package applicationinsights +type ConfigurationEventResourceType string + +// Enum values for ConfigurationEventResourceType +const ( + ConfigurationEventResourceTypeCloudwatchAlarm ConfigurationEventResourceType = "CLOUDWATCH_ALARM" + ConfigurationEventResourceTypeCloudformation ConfigurationEventResourceType = "CLOUDFORMATION" + ConfigurationEventResourceTypeSsmAssociation ConfigurationEventResourceType = "SSM_ASSOCIATION" +) + +func (enum ConfigurationEventResourceType) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum ConfigurationEventResourceType) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + +type ConfigurationEventStatus string + +// Enum values for ConfigurationEventStatus +const ( + ConfigurationEventStatusInfo ConfigurationEventStatus = "INFO" + ConfigurationEventStatusWarn ConfigurationEventStatus = "WARN" + ConfigurationEventStatusError ConfigurationEventStatus = "ERROR" +) + +func (enum ConfigurationEventStatus) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum ConfigurationEventStatus) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type FeedbackKey string // Enum values for FeedbackKey diff --git a/service/applicationinsights/api_op_ListConfigurationHistory.go b/service/applicationinsights/api_op_ListConfigurationHistory.go new file mode 100644 index 00000000000..4310e4cb75a --- /dev/null +++ b/service/applicationinsights/api_op_ListConfigurationHistory.go @@ -0,0 +1,211 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package applicationinsights + +import ( + "context" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/internal/awsutil" +) + +type ListConfigurationHistoryInput struct { + _ struct{} `type:"structure"` + + // The end time of the event. + EndTime *time.Time `type:"timestamp"` + + // The status of the configuration update event. Possible values include INFO, + // WARN, and ERROR. + EventStatus ConfigurationEventStatus `type:"string" enum:"true"` + + // The maximum number of results returned by ListConfigurationHistory in paginated + // output. When this parameter is used, ListConfigurationHistory returns only + // MaxResults in a single page along with a NextToken response element. 
The + // remaining results of the initial request can be seen by sending another ListConfigurationHistory + // request with the returned NextToken value. If this parameter is not used, + // then ListConfigurationHistory returns all results. + MaxResults *int64 `min:"1" type:"integer"` + + // The NextToken value returned from a previous paginated ListConfigurationHistory + // request where MaxResults was used and the results exceeded the value of that + // parameter. Pagination continues from the end of the previous results that + // returned the NextToken value. This value is null when there are no more results + // to return. + NextToken *string `type:"string"` + + // Resource group to which the application belongs. + ResourceGroupName *string `min:"1" type:"string"` + + // The start time of the event. + StartTime *time.Time `type:"timestamp"` +} + +// String returns the string representation +func (s ListConfigurationHistoryInput) String() string { + return awsutil.Prettify(s) +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListConfigurationHistoryInput) Validate() error { + invalidParams := aws.ErrInvalidParams{Context: "ListConfigurationHistoryInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(aws.NewErrParamMinValue("MaxResults", 1)) + } + if s.ResourceGroupName != nil && len(*s.ResourceGroupName) < 1 { + invalidParams.Add(aws.NewErrParamMinLen("ResourceGroupName", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +type ListConfigurationHistoryOutput struct { + _ struct{} `type:"structure"` + + // The list of configuration events and their corresponding details. + EventList []ConfigurationEvent `type:"list"` + + // The NextToken value to include in a future ListConfigurationHistory request. + // When the results of a ListConfigurationHistory request exceed MaxResults, + // this value can be used to retrieve the next page of results. This value is + // null when there are no more results to return. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListConfigurationHistoryOutput) String() string { + return awsutil.Prettify(s) +} + +const opListConfigurationHistory = "ListConfigurationHistory" + +// ListConfigurationHistoryRequest returns a request value for making API operation for +// Amazon CloudWatch Application Insights. +// +// Lists the INFO, WARN, and ERROR events for periodic configuration updates +// performed by Application Insights. Examples of events represented are: +// +// * INFO: creating a new alarm or updating an alarm threshold. +// +// * WARN: alarm not created due to insufficient data points used to predict +// thresholds. +// +// * ERROR: alarm not created due to permission errors or exceeding quotas. +// +// // Example sending a request using ListConfigurationHistoryRequest. 
+// req := client.ListConfigurationHistoryRequest(params) +// resp, err := req.Send(context.TODO()) +// if err == nil { +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/application-insights-2018-11-25/ListConfigurationHistory +func (c *Client) ListConfigurationHistoryRequest(input *ListConfigurationHistoryInput) ListConfigurationHistoryRequest { + op := &aws.Operation{ + Name: opListConfigurationHistory, + HTTPMethod: "POST", + HTTPPath: "/", + Paginator: &aws.Paginator{ + InputTokens: []string{"NextToken"}, + OutputTokens: []string{"NextToken"}, + LimitToken: "MaxResults", + TruncationToken: "", + }, + } + + if input == nil { + input = &ListConfigurationHistoryInput{} + } + + req := c.newRequest(op, input, &ListConfigurationHistoryOutput{}) + return ListConfigurationHistoryRequest{Request: req, Input: input, Copy: c.ListConfigurationHistoryRequest} +} + +// ListConfigurationHistoryRequest is the request type for the +// ListConfigurationHistory API operation. +type ListConfigurationHistoryRequest struct { + *aws.Request + Input *ListConfigurationHistoryInput + Copy func(*ListConfigurationHistoryInput) ListConfigurationHistoryRequest +} + +// Send marshals and sends the ListConfigurationHistory API request. +func (r ListConfigurationHistoryRequest) Send(ctx context.Context) (*ListConfigurationHistoryResponse, error) { + r.Request.SetContext(ctx) + err := r.Request.Send() + if err != nil { + return nil, err + } + + resp := &ListConfigurationHistoryResponse{ + ListConfigurationHistoryOutput: r.Request.Data.(*ListConfigurationHistoryOutput), + response: &aws.Response{Request: r.Request}, + } + + return resp, nil +} + +// NewListConfigurationHistoryRequestPaginator returns a paginator for ListConfigurationHistory. +// Use Next method to get the next page, and CurrentPage to get the current +// response page from the paginator. Next will return false, if there are +// no more pages, or an error was encountered. +// +// Note: This operation can generate multiple requests to a service. +// +// // Example iterating over pages. +// req := client.ListConfigurationHistoryRequest(input) +// p := applicationinsights.NewListConfigurationHistoryRequestPaginator(req) +// +// for p.Next(context.TODO()) { +// page := p.CurrentPage() +// } +// +// if err := p.Err(); err != nil { +// return err +// } +// +func NewListConfigurationHistoryPaginator(req ListConfigurationHistoryRequest) ListConfigurationHistoryPaginator { + return ListConfigurationHistoryPaginator{ + Pager: aws.Pager{ + NewRequest: func(ctx context.Context) (*aws.Request, error) { + var inCpy *ListConfigurationHistoryInput + if req.Input != nil { + tmp := *req.Input + inCpy = &tmp + } + + newReq := req.Copy(inCpy) + newReq.SetContext(ctx) + return newReq.Request, nil + }, + }, + } +} + +// ListConfigurationHistoryPaginator is used to paginate the request. This can be done by +// calling Next and CurrentPage. +type ListConfigurationHistoryPaginator struct { + aws.Pager +} + +func (p *ListConfigurationHistoryPaginator) CurrentPage() *ListConfigurationHistoryOutput { + return p.Pager.CurrentPage().(*ListConfigurationHistoryOutput) +} + +// ListConfigurationHistoryResponse is the response type for the +// ListConfigurationHistory API operation. +type ListConfigurationHistoryResponse struct { + *ListConfigurationHistoryOutput + + response *aws.Response +} + +// SDKResponseMetdata returns the response metadata for the +// ListConfigurationHistory request. 
+func (r *ListConfigurationHistoryResponse) SDKResponseMetdata() *aws.Response { + return r.response +} diff --git a/service/applicationinsights/api_types.go b/service/applicationinsights/api_types.go index 46defac311f..1b26a4155f6 100644 --- a/service/applicationinsights/api_types.go +++ b/service/applicationinsights/api_types.go @@ -52,7 +52,11 @@ type ApplicationInfo struct { OpsItemSNSTopicArn *string `min:"20" type:"string"` // The issues on the user side that block Application Insights from successfully - // monitoring an application. + // monitoring an application. Example remarks include: + // + // * “Configuring application, detected 1 Errors, 3 Warnings” + // + // * “Configuring application, detected 1 Unconfigured Components” Remarks *string `type:"string"` // The name of the resource group used for the application. @@ -64,6 +68,36 @@ func (s ApplicationInfo) String() string { return awsutil.Prettify(s) } +// The event information. +type ConfigurationEvent struct { + _ struct{} `type:"structure"` + + // The details of the event in plain text. + EventDetail *string `type:"string"` + + // The name of the resource Application Insights attempted to configure. + EventResourceName *string `type:"string"` + + // The resource type that Application Insights attempted to configure, for example, + // CLOUDWATCH_ALARM. + EventResourceType ConfigurationEventResourceType `type:"string" enum:"true"` + + // The status of the configuration update event. Possible values include INFO, + // WARN, and ERROR. + EventStatus ConfigurationEventStatus `type:"string" enum:"true"` + + // The timestamp of the event. + EventTime *time.Time `type:"timestamp"` + + // The resource monitored by Application Insights. + MonitoredResourceARN *string `type:"string"` +} + +// String returns the string representation +func (s ConfigurationEvent) String() string { + return awsutil.Prettify(s) +} + // An object that defines the log patterns that belongs to a LogPatternSet. 
type LogPattern struct { _ struct{} `type:"structure"` diff --git a/service/applicationinsights/applicationinsightsiface/interface.go b/service/applicationinsights/applicationinsightsiface/interface.go index 04d2e65c760..de77082570c 100644 --- a/service/applicationinsights/applicationinsightsiface/interface.go +++ b/service/applicationinsights/applicationinsightsiface/interface.go @@ -93,6 +93,8 @@ type ClientAPI interface { ListComponentsRequest(*applicationinsights.ListComponentsInput) applicationinsights.ListComponentsRequest + ListConfigurationHistoryRequest(*applicationinsights.ListConfigurationHistoryInput) applicationinsights.ListConfigurationHistoryRequest + ListLogPatternSetsRequest(*applicationinsights.ListLogPatternSetsInput) applicationinsights.ListLogPatternSetsRequest ListLogPatternsRequest(*applicationinsights.ListLogPatternsInput) applicationinsights.ListLogPatternsRequest diff --git a/service/backup/api_enums.go b/service/backup/api_enums.go index 8494310040e..50fc7c583d1 100644 --- a/service/backup/api_enums.go +++ b/service/backup/api_enums.go @@ -31,8 +31,16 @@ type BackupVaultEvent string const ( BackupVaultEventBackupJobStarted BackupVaultEvent = "BACKUP_JOB_STARTED" BackupVaultEventBackupJobCompleted BackupVaultEvent = "BACKUP_JOB_COMPLETED" + BackupVaultEventBackupJobSuccessful BackupVaultEvent = "BACKUP_JOB_SUCCESSFUL" + BackupVaultEventBackupJobFailed BackupVaultEvent = "BACKUP_JOB_FAILED" + BackupVaultEventBackupJobExpired BackupVaultEvent = "BACKUP_JOB_EXPIRED" BackupVaultEventRestoreJobStarted BackupVaultEvent = "RESTORE_JOB_STARTED" BackupVaultEventRestoreJobCompleted BackupVaultEvent = "RESTORE_JOB_COMPLETED" + BackupVaultEventRestoreJobSuccessful BackupVaultEvent = "RESTORE_JOB_SUCCESSFUL" + BackupVaultEventRestoreJobFailed BackupVaultEvent = "RESTORE_JOB_FAILED" + BackupVaultEventCopyJobStarted BackupVaultEvent = "COPY_JOB_STARTED" + BackupVaultEventCopyJobSuccessful BackupVaultEvent = "COPY_JOB_SUCCESSFUL" + BackupVaultEventCopyJobFailed BackupVaultEvent = "COPY_JOB_FAILED" BackupVaultEventRecoveryPointModified BackupVaultEvent = "RECOVERY_POINT_MODIFIED" BackupVaultEventBackupPlanCreated BackupVaultEvent = "BACKUP_PLAN_CREATED" BackupVaultEventBackupPlanModified BackupVaultEvent = "BACKUP_PLAN_MODIFIED" @@ -63,6 +71,25 @@ func (enum ConditionType) MarshalValueBuf(b []byte) ([]byte, error) { return append(b, enum...), nil } +type CopyJobState string + +// Enum values for CopyJobState +const ( + CopyJobStateCreated CopyJobState = "CREATED" + CopyJobStateRunning CopyJobState = "RUNNING" + CopyJobStateCompleted CopyJobState = "COMPLETED" + CopyJobStateFailed CopyJobState = "FAILED" +) + +func (enum CopyJobState) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum CopyJobState) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type RecoveryPointStatus string // Enum values for RecoveryPointStatus diff --git a/service/backup/api_op_CreateBackupPlan.go b/service/backup/api_op_CreateBackupPlan.go index e7f7eb9f2e9..279908f0388 100644 --- a/service/backup/api_op_CreateBackupPlan.go +++ b/service/backup/api_op_CreateBackupPlan.go @@ -151,8 +151,8 @@ const opCreateBackupPlan = "CreateBackupPlan" // Backup plans are documents that contain information that AWS Backup uses // to schedule tasks that create recovery points of resources. // -// If you call CreateBackupPlan with a plan that already exists, the existing -// backupPlanId is returned. 
+// If you call CreateBackupPlan with a plan that already exists, an AlreadyExistsException +// is returned. // // // Example sending a request using CreateBackupPlanRequest. // req := client.CreateBackupPlanRequest(params) diff --git a/service/backup/api_op_CreateBackupSelection.go b/service/backup/api_op_CreateBackupSelection.go index 20e8c6bc008..1e3c8f23ffc 100644 --- a/service/backup/api_op_CreateBackupSelection.go +++ b/service/backup/api_op_CreateBackupSelection.go @@ -23,10 +23,6 @@ type CreateBackupSelectionInput struct { // Specifies the body of a request to assign a set of resources to a backup // plan. // - // It includes an array of resources, an optional array of patterns to exclude - // resources, an optional role to provide access to the AWS service the resource - // belongs to, and an optional array of tags used to identify a set of resources. - // // BackupSelection is a required field BackupSelection *BackupSelection `type:"structure" required:"true"` @@ -147,9 +143,9 @@ const opCreateBackupSelection = "CreateBackupSelection" // // * Resources: "arn:aws:ec2:region:account-id:volume/volume-id" // -// * ConditionKey:"department" ConditionValue:"finance" ConditionType:"StringEquals" +// * ConditionKey:"department" ConditionValue:"finance" ConditionType:"STRINGEQUALS" // -// * ConditionKey:"importance" ConditionValue:"critical" ConditionType:"StringEquals" +// * ConditionKey:"importance" ConditionValue:"critical" ConditionType:"STRINGEQUALS" // // Using these patterns would back up all Amazon Elastic Block Store (Amazon // EBS) volumes that are tagged as "department=finance", "importance=critical", diff --git a/service/backup/api_op_DescribeCopyJob.go b/service/backup/api_op_DescribeCopyJob.go new file mode 100644 index 00000000000..2ae4765848b --- /dev/null +++ b/service/backup/api_op_DescribeCopyJob.go @@ -0,0 +1,143 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package backup + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/internal/awsutil" + "github.com/aws/aws-sdk-go-v2/private/protocol" +) + +type DescribeCopyJobInput struct { + _ struct{} `type:"structure"` + + // Uniquely identifies a request to AWS Backup to copy a resource. + // + // CopyJobId is a required field + CopyJobId *string `location:"uri" locationName:"copyJobId" type:"string" required:"true"` +} + +// String returns the string representation +func (s DescribeCopyJobInput) String() string { + return awsutil.Prettify(s) +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DescribeCopyJobInput) Validate() error { + invalidParams := aws.ErrInvalidParams{Context: "DescribeCopyJobInput"} + + if s.CopyJobId == nil { + invalidParams.Add(aws.NewErrParamRequired("CopyJobId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s DescribeCopyJobInput) MarshalFields(e protocol.FieldEncoder) error { + e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{}) + + if s.CopyJobId != nil { + v := *s.CopyJobId + + metadata := protocol.Metadata{} + e.SetValue(protocol.PathTarget, "copyJobId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + +type DescribeCopyJobOutput struct { + _ struct{} `type:"structure"` + + // Contains detailed information about a copy job. 
+ CopyJob *CopyJob `type:"structure"` +} + +// String returns the string representation +func (s DescribeCopyJobOutput) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s DescribeCopyJobOutput) MarshalFields(e protocol.FieldEncoder) error { + if s.CopyJob != nil { + v := s.CopyJob + + metadata := protocol.Metadata{} + e.SetFields(protocol.BodyTarget, "CopyJob", v, metadata) + } + return nil +} + +const opDescribeCopyJob = "DescribeCopyJob" + +// DescribeCopyJobRequest returns a request value for making API operation for +// AWS Backup. +// +// Returns metadata associated with creating a copy of a resource. +// +// // Example sending a request using DescribeCopyJobRequest. +// req := client.DescribeCopyJobRequest(params) +// resp, err := req.Send(context.TODO()) +// if err == nil { +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/DescribeCopyJob +func (c *Client) DescribeCopyJobRequest(input *DescribeCopyJobInput) DescribeCopyJobRequest { + op := &aws.Operation{ + Name: opDescribeCopyJob, + HTTPMethod: "GET", + HTTPPath: "/copy-jobs/{copyJobId}", + } + + if input == nil { + input = &DescribeCopyJobInput{} + } + + req := c.newRequest(op, input, &DescribeCopyJobOutput{}) + return DescribeCopyJobRequest{Request: req, Input: input, Copy: c.DescribeCopyJobRequest} +} + +// DescribeCopyJobRequest is the request type for the +// DescribeCopyJob API operation. +type DescribeCopyJobRequest struct { + *aws.Request + Input *DescribeCopyJobInput + Copy func(*DescribeCopyJobInput) DescribeCopyJobRequest +} + +// Send marshals and sends the DescribeCopyJob API request. +func (r DescribeCopyJobRequest) Send(ctx context.Context) (*DescribeCopyJobResponse, error) { + r.Request.SetContext(ctx) + err := r.Request.Send() + if err != nil { + return nil, err + } + + resp := &DescribeCopyJobResponse{ + DescribeCopyJobOutput: r.Request.Data.(*DescribeCopyJobOutput), + response: &aws.Response{Request: r.Request}, + } + + return resp, nil +} + +// DescribeCopyJobResponse is the response type for the +// DescribeCopyJob API operation. +type DescribeCopyJobResponse struct { + *DescribeCopyJobOutput + + response *aws.Response +} + +// SDKResponseMetdata returns the response metadata for the +// DescribeCopyJob request. +func (r *DescribeCopyJobResponse) SDKResponseMetdata() *aws.Response { + return r.response +} diff --git a/service/backup/api_op_GetBackupSelection.go b/service/backup/api_op_GetBackupSelection.go index 131919a4bf2..a589de53020 100644 --- a/service/backup/api_op_GetBackupSelection.go +++ b/service/backup/api_op_GetBackupSelection.go @@ -76,11 +76,6 @@ type GetBackupSelectionOutput struct { // Specifies the body of a request to assign a set of resources to a backup // plan. - // - // It includes an array of resources, an optional array of patterns to exclude - // resources, an optional role to provide access to the AWS service that the - // resource belongs to, and an optional array of tags used to identify a set - // of resources. 
BackupSelection *BackupSelection `type:"structure"` // The date and time a backup selection is created, in Unix format and Coordinated diff --git a/service/backup/api_op_GetRecoveryPointRestoreMetadata.go b/service/backup/api_op_GetRecoveryPointRestoreMetadata.go index 184be9c54d5..3f0a6396e09 100644 --- a/service/backup/api_op_GetRecoveryPointRestoreMetadata.go +++ b/service/backup/api_op_GetRecoveryPointRestoreMetadata.go @@ -79,9 +79,10 @@ type GetRecoveryPointRestoreMetadataOutput struct { // An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45. RecoveryPointArn *string `type:"string"` - // A set of metadata key-value pairs that lists the metadata key-value pairs - // that are required to restore the recovery point. - RestoreMetadata map[string]string `type:"map"` + // The set of metadata key-value pairs that describes the original configuration + // of the backed-up resource. These values vary depending on the service that + // is being restored. + RestoreMetadata map[string]string `type:"map" sensitive:"true"` } // String returns the string representation @@ -123,14 +124,7 @@ const opGetRecoveryPointRestoreMetadata = "GetRecoveryPointRestoreMetadata" // GetRecoveryPointRestoreMetadataRequest returns a request value for making API operation for // AWS Backup. // -// Returns two sets of metadata key-value pairs. The first set lists the metadata -// that the recovery point was created with. The second set lists the metadata -// key-value pairs that are required to restore the recovery point. -// -// These sets can be the same, or the restore metadata set can contain different -// values if the target service to be restored has changed since the recovery -// point was created and now requires additional or different information in -// order to be restored. +// Returns a set of metadata key-value pairs that were used to create the backup. // // // Example sending a request using GetRecoveryPointRestoreMetadataRequest. // req := client.GetRecoveryPointRestoreMetadataRequest(params) diff --git a/service/backup/api_op_GetSupportedResourceTypes.go b/service/backup/api_op_GetSupportedResourceTypes.go index 644e91a40e7..4e4a76b56c0 100644 --- a/service/backup/api_op_GetSupportedResourceTypes.go +++ b/service/backup/api_op_GetSupportedResourceTypes.go @@ -33,7 +33,7 @@ type GetSupportedResourceTypesOutput struct { // // * EBS for Amazon Elastic Block Store // - // * SGW for AWS Storage Gateway + // * Storage Gateway for AWS Storage Gateway // // * RDS for Amazon Relational Database Service // diff --git a/service/backup/api_op_ListBackupJobs.go b/service/backup/api_op_ListBackupJobs.go index 3fabef5071e..4954bd78c46 100644 --- a/service/backup/api_op_ListBackupJobs.go +++ b/service/backup/api_op_ListBackupJobs.go @@ -32,15 +32,15 @@ type ListBackupJobsInput struct { // Returns only backup jobs for the specified resources: // + // * DynamoDB for Amazon DynamoDB + // // * EBS for Amazon Elastic Block Store // - // * SGW for AWS Storage Gateway + // * EFS for Amazon Elastic File System // // * RDS for Amazon Relational Database Service // - // * DDB for Amazon DynamoDB - // - // * EFS for Amazon Elastic File System + // * Storage Gateway for AWS Storage Gateway ByResourceType *string `location:"querystring" locationName:"resourceType" type:"string"` // Returns only backup jobs that are in the specified state. 
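
Before the new copy-job operations below, a quick illustration of the resource-type filter whose values were just reordered and renamed. A minimal sketch, assuming the same config-loading and request/Send pattern as the earlier examples; only `ListBackupJobsInput` and `ByResourceType` are taken from this diff, the rest is assumed from the generated backup package:

    client := backup.New(cfg) // cfg from external.LoadDefaultAWSConfig()

    // Valid filter strings are documented above: "DynamoDB", "EBS", "EFS",
    // "RDS", and "Storage Gateway".
    req := client.ListBackupJobsRequest(&backup.ListBackupJobsInput{
        ByResourceType: aws.String("EBS"),
    })
    resp, err := req.Send(context.TODO())
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(resp) // backup jobs, plus a NextToken when more pages remain
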
diff --git a/service/backup/api_op_ListCopyJobs.go b/service/backup/api_op_ListCopyJobs.go new file mode 100644 index 00000000000..691a1a712bd --- /dev/null +++ b/service/backup/api_op_ListCopyJobs.go @@ -0,0 +1,293 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package backup + +import ( + "context" + "time" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/internal/awsutil" + "github.com/aws/aws-sdk-go-v2/private/protocol" +) + +type ListCopyJobsInput struct { + _ struct{} `type:"structure"` + + // Returns only copy jobs that were created after the specified date. + ByCreatedAfter *time.Time `location:"querystring" locationName:"createdAfter" type:"timestamp"` + + // Returns only copy jobs that were created before the specified date. + ByCreatedBefore *time.Time `location:"querystring" locationName:"createdBefore" type:"timestamp"` + + // An Amazon Resource Name (ARN) that uniquely identifies a source backup vault + // to copy from; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault. + ByDestinationVaultArn *string `location:"querystring" locationName:"destinationVaultArn" type:"string"` + + // Returns only copy jobs that match the specified resource Amazon Resource + // Name (ARN). + ByResourceArn *string `location:"querystring" locationName:"resourceArn" type:"string"` + + // Returns only backup jobs for the specified resources: + // + // * DynamoDB for Amazon DynamoDB + // + // * EBS for Amazon Elastic Block Store + // + // * EFS for Amazon Elastic File System + // + // * RDS for Amazon Relational Database Service + // + // * Storage Gateway for AWS Storage Gateway + ByResourceType *string `location:"querystring" locationName:"resourceType" type:"string"` + + // Returns only copy jobs that are in the specified state. + ByState CopyJobState `location:"querystring" locationName:"state" type:"string" enum:"true"` + + // The maximum number of items to be returned. + MaxResults *int64 `location:"querystring" locationName:"maxResults" min:"1" type:"integer"` + + // The next item following a partial list of returned items. For example, if + // a request is made to return maxResults number of items, NextToken allows + // you to return more items in your list starting at the location pointed to + // by the next token. + NextToken *string `location:"querystring" locationName:"nextToken" type:"string"` +} + +// String returns the string representation +func (s ListCopyJobsInput) String() string { + return awsutil.Prettify(s) +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *ListCopyJobsInput) Validate() error { + invalidParams := aws.ErrInvalidParams{Context: "ListCopyJobsInput"} + if s.MaxResults != nil && *s.MaxResults < 1 { + invalidParams.Add(aws.NewErrParamMinValue("MaxResults", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. 
+func (s ListCopyJobsInput) MarshalFields(e protocol.FieldEncoder) error { + e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{}) + + if s.ByCreatedAfter != nil { + v := *s.ByCreatedAfter + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "createdAfter", + protocol.TimeValue{V: v, Format: protocol.ISO8601TimeFormatName, QuotedFormatTime: false}, metadata) + } + if s.ByCreatedBefore != nil { + v := *s.ByCreatedBefore + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "createdBefore", + protocol.TimeValue{V: v, Format: protocol.ISO8601TimeFormatName, QuotedFormatTime: false}, metadata) + } + if s.ByDestinationVaultArn != nil { + v := *s.ByDestinationVaultArn + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "destinationVaultArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.ByResourceArn != nil { + v := *s.ByResourceArn + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "resourceArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.ByResourceType != nil { + v := *s.ByResourceType + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "resourceType", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if len(s.ByState) > 0 { + v := s.ByState + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "state", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } + if s.MaxResults != nil { + v := *s.MaxResults + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "maxResults", protocol.Int64Value(v), metadata) + } + if s.NextToken != nil { + v := *s.NextToken + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "nextToken", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + +type ListCopyJobsOutput struct { + _ struct{} `type:"structure"` + + // An array of structures containing metadata about your copy jobs returned + // in JSON format. + CopyJobs []CopyJob `type:"list"` + + // The next item following a partial list of returned items. For example, if + // a request is made to return maxResults number of items, NextToken allows + // you to return more items in your list starting at the location pointed to + // by the next token. + NextToken *string `type:"string"` +} + +// String returns the string representation +func (s ListCopyJobsOutput) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s ListCopyJobsOutput) MarshalFields(e protocol.FieldEncoder) error { + if s.CopyJobs != nil { + v := s.CopyJobs + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "CopyJobs", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddFields(v1) + } + ls0.End() + + } + if s.NextToken != nil { + v := *s.NextToken + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "NextToken", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + +const opListCopyJobs = "ListCopyJobs" + +// ListCopyJobsRequest returns a request value for making API operation for +// AWS Backup. +// +// Returns metadata about your copy jobs. +// +// // Example sending a request using ListCopyJobsRequest. 
+// req := client.ListCopyJobsRequest(params)
+// resp, err := req.Send(context.TODO())
+// if err == nil {
+//     fmt.Println(resp)
+// }
+//
+// Please also see https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/ListCopyJobs
+func (c *Client) ListCopyJobsRequest(input *ListCopyJobsInput) ListCopyJobsRequest {
+    op := &aws.Operation{
+        Name:       opListCopyJobs,
+        HTTPMethod: "GET",
+        HTTPPath:   "/copy-jobs/",
+        Paginator: &aws.Paginator{
+            InputTokens:     []string{"NextToken"},
+            OutputTokens:    []string{"NextToken"},
+            LimitToken:      "MaxResults",
+            TruncationToken: "",
+        },
+    }
+
+    if input == nil {
+        input = &ListCopyJobsInput{}
+    }
+
+    req := c.newRequest(op, input, &ListCopyJobsOutput{})
+    return ListCopyJobsRequest{Request: req, Input: input, Copy: c.ListCopyJobsRequest}
+}
+
+// ListCopyJobsRequest is the request type for the
+// ListCopyJobs API operation.
+type ListCopyJobsRequest struct {
+    *aws.Request
+    Input *ListCopyJobsInput
+    Copy  func(*ListCopyJobsInput) ListCopyJobsRequest
+}
+
+// Send marshals and sends the ListCopyJobs API request.
+func (r ListCopyJobsRequest) Send(ctx context.Context) (*ListCopyJobsResponse, error) {
+    r.Request.SetContext(ctx)
+    err := r.Request.Send()
+    if err != nil {
+        return nil, err
+    }
+
+    resp := &ListCopyJobsResponse{
+        ListCopyJobsOutput: r.Request.Data.(*ListCopyJobsOutput),
+        response:           &aws.Response{Request: r.Request},
+    }
+
+    return resp, nil
+}
+
+// NewListCopyJobsRequestPaginator returns a paginator for ListCopyJobs.
+// Use Next method to get the next page, and CurrentPage to get the current
+// response page from the paginator. Next will return false, if there are
+// no more pages, or an error was encountered.
+//
+// Note: This operation can generate multiple requests to a service.
+//
+//   // Example iterating over pages.
+//   req := client.ListCopyJobsRequest(input)
+//   p := backup.NewListCopyJobsRequestPaginator(req)
+//
+//   for p.Next(context.TODO()) {
+//       page := p.CurrentPage()
+//   }
+//
+//   if err := p.Err(); err != nil {
+//       return err
+//   }
+//
+func NewListCopyJobsRequestPaginator(req ListCopyJobsRequest) ListCopyJobsPaginator {
+    return ListCopyJobsPaginator{
+        Pager: aws.Pager{
+            NewRequest: func(ctx context.Context) (*aws.Request, error) {
+                var inCpy *ListCopyJobsInput
+                if req.Input != nil {
+                    tmp := *req.Input
+                    inCpy = &tmp
+                }
+
+                newReq := req.Copy(inCpy)
+                newReq.SetContext(ctx)
+                return newReq.Request, nil
+            },
+        },
+    }
+}
+
+// ListCopyJobsPaginator is used to paginate the request. This can be done by
+// calling Next and CurrentPage.
+type ListCopyJobsPaginator struct {
+    aws.Pager
+}
+
+func (p *ListCopyJobsPaginator) CurrentPage() *ListCopyJobsOutput {
+    return p.Pager.CurrentPage().(*ListCopyJobsOutput)
+}
+
+// ListCopyJobsResponse is the response type for the
+// ListCopyJobs API operation.
+type ListCopyJobsResponse struct {
+    *ListCopyJobsOutput
+
+    response *aws.Response
+}
+
+// SDKResponseMetdata returns the response metadata for the
+// ListCopyJobs request.
+func (r *ListCopyJobsResponse) SDKResponseMetdata() *aws.Response {
+    return r.response
+}
diff --git a/service/backup/api_op_StartBackupJob.go b/service/backup/api_op_StartBackupJob.go
index 991302f7236..350de3d8c56 100644
--- a/service/backup/api_op_StartBackupJob.go
+++ b/service/backup/api_op_StartBackupJob.go
@@ -33,11 +33,7 @@ type StartBackupJobInput struct {
 	IamRoleArn *string `type:"string" required:"true"`
 
 	// A customer chosen string that can be used to distinguish between calls to
-	// StartBackupJob. Idempotency tokens time out after one hour. Therefore, if
-	// you call StartBackupJob multiple times with the same idempotency token within
-	// one hour, AWS Backup recognizes that you are requesting only one backup job
-	// and initiates only one. If you change the idempotency token for each call,
-	// AWS Backup recognizes that you are requesting to start multiple backups.
+	// StartBackupJob.
 	IdempotencyToken *string `type:"string"`
 
 	// The lifecycle defines when a protected resource is transitioned to cold storage
diff --git a/service/backup/api_op_StartCopyJob.go b/service/backup/api_op_StartCopyJob.go
new file mode 100644
index 00000000000..4fdb8ef2351
--- /dev/null
+++ b/service/backup/api_op_StartCopyJob.go
@@ -0,0 +1,234 @@
+// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT.
+
+package backup
+
+import (
+    "context"
+    "time"
+
+    "github.com/aws/aws-sdk-go-v2/aws"
+    "github.com/aws/aws-sdk-go-v2/internal/awsutil"
+    "github.com/aws/aws-sdk-go-v2/private/protocol"
+)
+
+type StartCopyJobInput struct {
+    _ struct{} `type:"structure"`
+
+    // An Amazon Resource Name (ARN) that uniquely identifies a destination backup
+    // vault to copy to; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.
+    //
+    // DestinationBackupVaultArn is a required field
+    DestinationBackupVaultArn *string `type:"string" required:"true"`
+
+    // Specifies the IAM role ARN used to copy the target recovery point; for example,
+    // arn:aws:iam::123456789012:role/S3Access.
+    //
+    // IamRoleArn is a required field
+    IamRoleArn *string `type:"string" required:"true"`
+
+    // A customer chosen string that can be used to distinguish between calls to
+    // StartCopyJob.
+    IdempotencyToken *string `type:"string"`
+
+    // Contains an array of Transition objects specifying how long in days before
+    // a recovery point transitions to cold storage or is deleted.
+    //
+    // Backups transitioned to cold storage must be stored in cold storage for a
+    // minimum of 90 days. Therefore, on the console, the “expire after days”
+    // setting must be 90 days greater than the “transition to cold after days”
+    // setting. The “transition to cold after days” setting cannot be changed
+    // after a backup has been transitioned to cold.
+    Lifecycle *Lifecycle `type:"structure"`
+
+    // An ARN that uniquely identifies a recovery point to use for the copy job;
+    // for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+    //
+    // RecoveryPointArn is a required field
+    RecoveryPointArn *string `type:"string" required:"true"`
+
+    // The name of a logical source container where backups are stored. Backup vaults
+    // are identified by names that are unique to the account used to create them
+    // and the AWS Region where they are created. They consist of lowercase letters,
+    // numbers, and hyphens.
+    //
+    // SourceBackupVaultName is a required field
+    SourceBackupVaultName *string `type:"string" required:"true"`
+}
+
+// String returns the string representation
+func (s StartCopyJobInput) String() string {
+    return awsutil.Prettify(s)
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *StartCopyJobInput) Validate() error {
+    invalidParams := aws.ErrInvalidParams{Context: "StartCopyJobInput"}
+
+    if s.DestinationBackupVaultArn == nil {
+        invalidParams.Add(aws.NewErrParamRequired("DestinationBackupVaultArn"))
+    }
+
+    if s.IamRoleArn == nil {
+        invalidParams.Add(aws.NewErrParamRequired("IamRoleArn"))
+    }
+
+    if s.RecoveryPointArn == nil {
+        invalidParams.Add(aws.NewErrParamRequired("RecoveryPointArn"))
+    }
+
+    if s.SourceBackupVaultName == nil {
+        invalidParams.Add(aws.NewErrParamRequired("SourceBackupVaultName"))
+    }
+
+    if invalidParams.Len() > 0 {
+        return invalidParams
+    }
+    return nil
+}
+
+// MarshalFields encodes the AWS API shape using the passed in protocol encoder.
+func (s StartCopyJobInput) MarshalFields(e protocol.FieldEncoder) error {
+    e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{})
+
+    if s.DestinationBackupVaultArn != nil {
+        v := *s.DestinationBackupVaultArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "DestinationBackupVaultArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.IamRoleArn != nil {
+        v := *s.IamRoleArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "IamRoleArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.IdempotencyToken != nil {
+        v := *s.IdempotencyToken
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "IdempotencyToken", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.Lifecycle != nil {
+        v := s.Lifecycle
+
+        metadata := protocol.Metadata{}
+        e.SetFields(protocol.BodyTarget, "Lifecycle", v, metadata)
+    }
+    if s.RecoveryPointArn != nil {
+        v := *s.RecoveryPointArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "RecoveryPointArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.SourceBackupVaultName != nil {
+        v := *s.SourceBackupVaultName
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "SourceBackupVaultName", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    return nil
+}
+
+type StartCopyJobOutput struct {
+    _ struct{} `type:"structure"`
+
+    // Uniquely identifies a request to AWS Backup to copy a resource.
+    CopyJobId *string `type:"string"`
+
+    // The date and time that a backup job is started, in Unix format and Coordinated
+    // Universal Time (UTC). The value of CreationDate is accurate to milliseconds.
+    // For example, the value 1516925490.087 represents Friday, January 26, 2018
+    // 12:11:30.087 AM.
+    CreationDate *time.Time `type:"timestamp"`
+}
+
+// String returns the string representation
+func (s StartCopyJobOutput) String() string {
+    return awsutil.Prettify(s)
+}
+
+// MarshalFields encodes the AWS API shape using the passed in protocol encoder.
+func (s StartCopyJobOutput) MarshalFields(e protocol.FieldEncoder) error { + if s.CopyJobId != nil { + v := *s.CopyJobId + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "CopyJobId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.CreationDate != nil { + v := *s.CreationDate + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "CreationDate", + protocol.TimeValue{V: v, Format: protocol.UnixTimeFormatName, QuotedFormatTime: true}, metadata) + } + return nil +} + +const opStartCopyJob = "StartCopyJob" + +// StartCopyJobRequest returns a request value for making API operation for +// AWS Backup. +// +// Starts a job to create a one-time copy of the specified resource. +// +// // Example sending a request using StartCopyJobRequest. +// req := client.StartCopyJobRequest(params) +// resp, err := req.Send(context.TODO()) +// if err == nil { +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/backup-2018-11-15/StartCopyJob +func (c *Client) StartCopyJobRequest(input *StartCopyJobInput) StartCopyJobRequest { + op := &aws.Operation{ + Name: opStartCopyJob, + HTTPMethod: "PUT", + HTTPPath: "/copy-jobs", + } + + if input == nil { + input = &StartCopyJobInput{} + } + + req := c.newRequest(op, input, &StartCopyJobOutput{}) + return StartCopyJobRequest{Request: req, Input: input, Copy: c.StartCopyJobRequest} +} + +// StartCopyJobRequest is the request type for the +// StartCopyJob API operation. +type StartCopyJobRequest struct { + *aws.Request + Input *StartCopyJobInput + Copy func(*StartCopyJobInput) StartCopyJobRequest +} + +// Send marshals and sends the StartCopyJob API request. +func (r StartCopyJobRequest) Send(ctx context.Context) (*StartCopyJobResponse, error) { + r.Request.SetContext(ctx) + err := r.Request.Send() + if err != nil { + return nil, err + } + + resp := &StartCopyJobResponse{ + StartCopyJobOutput: r.Request.Data.(*StartCopyJobOutput), + response: &aws.Response{Request: r.Request}, + } + + return resp, nil +} + +// StartCopyJobResponse is the response type for the +// StartCopyJob API operation. +type StartCopyJobResponse struct { + *StartCopyJobOutput + + response *aws.Response +} + +// SDKResponseMetdata returns the response metadata for the +// StartCopyJob request. +func (r *StartCopyJobResponse) SDKResponseMetdata() *aws.Response { + return r.response +} diff --git a/service/backup/api_op_StartRestoreJob.go b/service/backup/api_op_StartRestoreJob.go index 5bf04b24ca2..127ab914729 100644 --- a/service/backup/api_op_StartRestoreJob.go +++ b/service/backup/api_op_StartRestoreJob.go @@ -20,18 +20,40 @@ type StartRestoreJobInput struct { IamRoleArn *string `type:"string" required:"true"` // A customer chosen string that can be used to distinguish between calls to - // StartRestoreJob. Idempotency tokens time out after one hour. Therefore, if - // you call StartRestoreJob multiple times with the same idempotency token within - // one hour, AWS Backup recognizes that you are requesting only one restore - // job and initiates only one. If you change the idempotency token for each - // call, AWS Backup recognizes that you are requesting to start multiple restores. + // StartRestoreJob. IdempotencyToken *string `type:"string"` - // A set of metadata key-value pairs. Lists the metadata that the recovery point - // was created with. + // A set of metadata key-value pairs. Contains information, such as a resource + // name, required to restore a recovery point. 
+	//
+	// You can get configuration metadata about a resource at the time it was backed
+	// up by calling GetRecoveryPointRestoreMetadata. However, values in addition to
+	// those provided by GetRecoveryPointRestoreMetadata might be required to restore
+	// a resource. For example, you might need to provide a new resource name if
+	// the original already exists.
+	//
+	// You need to specify specific metadata to restore an Amazon Elastic File System
+	// (Amazon EFS) instance:
+	//
+	//    * file-system-id: ID of the Amazon EFS file system that is backed up by
+	//    AWS Backup. Returned in GetRecoveryPointRestoreMetadata.
+	//
+	//    * Encrypted: A Boolean value that, if true, specifies that the file system
+	//    is encrypted. If KmsKeyId is specified, Encrypted must be set to true.
+	//
+	//    * KmsKeyId: Specifies the AWS KMS key that is used to encrypt the restored
+	//    file system.
+	//
+	//    * PerformanceMode: Specifies the throughput mode of the file system.
+	//
+	//    * CreationToken: A user-supplied value that ensures the uniqueness (idempotency)
+	//    of the request.
+	//
+	//    * newFileSystem: A Boolean value that, if true, specifies that the recovery
+	//    point is restored to a new Amazon EFS file system.
 	//
 	// Metadata is a required field
-	Metadata map[string]string `type:"map" required:"true"`
+	Metadata map[string]string `type:"map" required:"true" sensitive:"true"`
 
 	// An ARN that uniquely identifies a recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
 	//
@@ -42,7 +64,7 @@ type StartRestoreJobInput struct {
 	//
 	//    * EBS for Amazon Elastic Block Store
 	//
-	//    * SGW for AWS Storage Gateway
+	//    * Storage Gateway for AWS Storage Gateway
 	//
 	//    * RDS for Amazon Relational Database Service
 	//
diff --git a/service/backup/api_types.go b/service/backup/api_types.go
index d5a6b701c10..ad196476dfc 100644
--- a/service/backup/api_types.go
+++ b/service/backup/api_types.go
@@ -475,6 +475,9 @@ type BackupRule struct {
 	// be completed or it is canceled by AWS Backup. This value is optional.
 	CompletionWindowMinutes *int64 `type:"long"`
 
+	// An array of CopyAction objects, which contain the details of the copy operation.
+	CopyActions []CopyAction `type:"list"`
+
 	// The lifecycle defines when a protected resource is transitioned to cold storage
 	// and when it expires. AWS Backup transitions and expires backups automatically
 	// according to the lifecycle that you define.
@@ -528,6 +531,18 @@ func (s BackupRule) MarshalFields(e protocol.FieldEncoder) error {
 		metadata := protocol.Metadata{}
 		e.SetValue(protocol.BodyTarget, "CompletionWindowMinutes", protocol.Int64Value(v), metadata)
 	}
+	if s.CopyActions != nil {
+		v := s.CopyActions
+
+		metadata := protocol.Metadata{}
+		ls0 := e.List(protocol.BodyTarget, "CopyActions", metadata)
+		ls0.Start()
+		for _, v1 := range v {
+			ls0.ListAddFields(v1)
+		}
+		ls0.End()
+
+	}
 	if s.Lifecycle != nil {
 		v := s.Lifecycle
@@ -587,6 +602,9 @@ type BackupRuleInput struct {
 	// and returning an error.
 	CompletionWindowMinutes *int64 `type:"long"`
 
+	// An array of CopyAction objects, which contain the details of the copy operation.
+	CopyActions []CopyAction `type:"list"`
+
 	// The lifecycle defines when a protected resource is transitioned to cold storage
 	// and when it expires. AWS Backup will transition and expire backups automatically
 	// according to the lifecycle that you define.
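The new CopyActions field above is the hook through which a backup rule requests cross-vault copies. A minimal, untested sketch of populating it follows; RuleName is an assumed field of BackupRuleInput (it is not shown in these hunks), and the aws.String/aws.Int64 pointer helpers are taken from the released aws package:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/backup"
)

func main() {
    // A rule that backs up into primaryVault and copies each recovery point
    // to a second vault, moving copies to cold storage after 30 days.
    rule := backup.BackupRuleInput{
        RuleName:              aws.String("daily-with-copy"), // assumed field; not shown in this diff
        TargetBackupVaultName: aws.String("primaryVault"),
        CopyActions: []backup.CopyAction{{
            DestinationBackupVaultArn: aws.String("arn:aws:backup:us-west-2:123456789012:vault:secondaryVault"),
            Lifecycle: &backup.Lifecycle{
                // DeleteAfterDays must exceed MoveToColdStorageAfterDays by at least 90.
                MoveToColdStorageAfterDays: aws.Int64(30),
                DeleteAfterDays:            aws.Int64(120),
            },
        }},
    }

    // The nested validation added in this change reports a missing
    // DestinationBackupVaultArn under the CopyActions[0] error context.
    if err := rule.Validate(); err != nil {
        log.Fatal(err)
    }
    fmt.Println("rule is valid")
}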
@@ -638,6 +656,13 @@ func (s *BackupRuleInput) Validate() error { if s.TargetBackupVaultName == nil { invalidParams.Add(aws.NewErrParamRequired("TargetBackupVaultName")) } + if s.CopyActions != nil { + for i, v := range s.CopyActions { + if err := v.Validate(); err != nil { + invalidParams.AddNested(fmt.Sprintf("%s[%v]", "CopyActions", i), err.(aws.ErrInvalidParams)) + } + } + } if invalidParams.Len() > 0 { return invalidParams @@ -653,6 +678,18 @@ func (s BackupRuleInput) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "CompletionWindowMinutes", protocol.Int64Value(v), metadata) } + if s.CopyActions != nil { + v := s.CopyActions + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "CopyActions", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddFields(v1) + } + ls0.End() + + } if s.Lifecycle != nil { v := s.Lifecycle @@ -709,12 +746,11 @@ type BackupSelection struct { IamRoleArn *string `type:"string" required:"true"` // An array of conditions used to specify a set of resources to assign to a - // backup plan; for example, "StringEquals": {"ec2:ResourceTag/Department": + // backup plan; for example, "STRINGEQUALS": {"ec2:ResourceTag/Department": // "accounting". ListOfTags []Condition `type:"list"` - // An array of strings that either contain Amazon Resource Names (ARNs) or match - // patterns such as "arn:aws:ec2:us-east-1:123456789012:volume/*" of resources + // An array of strings that contain Amazon Resource Names (ARNs) of resources // to assign to a backup plan. Resources []string `type:"list"` @@ -995,7 +1031,7 @@ func (s CalculatedLifecycle) MarshalFields(e protocol.FieldEncoder) error { return nil } -// Contains an array of triplets made up of a condition type (such as StringEquals), +// Contains an array of triplets made up of a condition type (such as STRINGEQUALS), // a key, and a value. Conditions are used to filter resources in a selection // that is assigned to a backup plan. type Condition struct { @@ -1007,7 +1043,7 @@ type Condition struct { // ConditionKey is a required field ConditionKey *string `type:"string" required:"true"` - // An operation, such as StringEquals, that is applied to a key-value pair used + // An operation, such as STRINGEQUALS, that is applied to a key-value pair used // to filter resources in a selection. // // ConditionType is a required field @@ -1069,13 +1105,234 @@ func (s Condition) MarshalFields(e protocol.FieldEncoder) error { return nil } +// The details of the copy operation. +type CopyAction struct { + _ struct{} `type:"structure"` + + // An Amazon Resource Name (ARN) that uniquely identifies the destination backup + // vault for the copied backup. For example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault. + // + // DestinationBackupVaultArn is a required field + DestinationBackupVaultArn *string `type:"string" required:"true"` + + // Contains an array of Transition objects specifying how long in days before + // a recovery point transitions to cold storage or is deleted. + // + // Backups transitioned to cold storage must be stored in cold storage for a + // minimum of 90 days. Therefore, on the console, the “expire after days” + // setting must be 90 days greater than the “transition to cold after days” + // setting. The “transition to cold after days” setting cannot be changed + // after a backup has been transitioned to cold. 
+	Lifecycle *Lifecycle `type:"structure"`
+}
+
+// String returns the string representation
+func (s CopyAction) String() string {
+    return awsutil.Prettify(s)
+}
+
+// Validate inspects the fields of the type to determine if they are valid.
+func (s *CopyAction) Validate() error {
+    invalidParams := aws.ErrInvalidParams{Context: "CopyAction"}
+
+    if s.DestinationBackupVaultArn == nil {
+        invalidParams.Add(aws.NewErrParamRequired("DestinationBackupVaultArn"))
+    }
+
+    if invalidParams.Len() > 0 {
+        return invalidParams
+    }
+    return nil
+}
+
+// MarshalFields encodes the AWS API shape using the passed in protocol encoder.
+func (s CopyAction) MarshalFields(e protocol.FieldEncoder) error {
+    if s.DestinationBackupVaultArn != nil {
+        v := *s.DestinationBackupVaultArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "DestinationBackupVaultArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.Lifecycle != nil {
+        v := s.Lifecycle
+
+        metadata := protocol.Metadata{}
+        e.SetFields(protocol.BodyTarget, "Lifecycle", v, metadata)
+    }
+    return nil
+}
+
+// Contains detailed information about a copy job.
+type CopyJob struct {
+    _ struct{} `type:"structure"`
+
+    // The size, in bytes, of a copy job.
+    BackupSizeInBytes *int64 `type:"long"`
+
+    // The date and time a copy job is completed, in Unix format and Coordinated
+    // Universal Time (UTC). The value of CompletionDate is accurate to milliseconds.
+    // For example, the value 1516925490.087 represents Friday, January 26, 2018
+    // 12:11:30.087 AM.
+    CompletionDate *time.Time `type:"timestamp"`
+
+    // Uniquely identifies a request to AWS Backup to copy a resource.
+    CopyJobId *string `type:"string"`
+
+    // Contains information about the backup plan and rule that AWS Backup used
+    // to initiate the recovery point backup.
+    CreatedBy *RecoveryPointCreator `type:"structure"`
+
+    // The date and time a copy job is created, in Unix format and Coordinated Universal
+    // Time (UTC). The value of CreationDate is accurate to milliseconds. For example,
+    // the value 1516925490.087 represents Friday, January 26, 2018 12:11:30.087
+    // AM.
+    CreationDate *time.Time `type:"timestamp"`
+
+    // An Amazon Resource Name (ARN) that uniquely identifies a destination copy
+    // vault; for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.
+    DestinationBackupVaultArn *string `type:"string"`
+
+    // An ARN that uniquely identifies a destination recovery point; for example,
+    // arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+    DestinationRecoveryPointArn *string `type:"string"`
+
+    // Specifies the IAM role ARN used to copy the target recovery point; for example,
+    // arn:aws:iam::123456789012:role/S3Access.
+    IamRoleArn *string `type:"string"`
+
+    // An ARN that uniquely identifies the AWS resource to be copied; for example,
+    // an Amazon Elastic Block Store (Amazon EBS) volume or an Amazon Relational
+    // Database Service (Amazon RDS) database.
+    ResourceArn *string `type:"string"`
+
+    // The type of AWS resource to be copied; for example, an Amazon Elastic Block
+    // Store (Amazon EBS) volume or an Amazon Relational Database Service (Amazon
+    // RDS) database.
+    ResourceType *string `type:"string"`
+
+    // An Amazon Resource Name (ARN) that uniquely identifies a source copy vault;
+    // for example, arn:aws:backup:us-east-1:123456789012:vault:aBackupVault.
+    SourceBackupVaultArn *string `type:"string"`
+
+    // An ARN that uniquely identifies a source recovery point; for example, arn:aws:backup:us-east-1:123456789012:recovery-point:1EB3B5E7-9EB0-435A-A80B-108B488B0D45.
+    SourceRecoveryPointArn *string `type:"string"`
+
+    // The current state of a resource recovery point.
+    State CopyJobState `type:"string" enum:"true"`
+
+    // A detailed message explaining the status of the job to copy a resource.
+    StatusMessage *string `type:"string"`
+}
+
+// String returns the string representation
+func (s CopyJob) String() string {
+    return awsutil.Prettify(s)
+}
+
+// MarshalFields encodes the AWS API shape using the passed in protocol encoder.
+func (s CopyJob) MarshalFields(e protocol.FieldEncoder) error {
+    if s.BackupSizeInBytes != nil {
+        v := *s.BackupSizeInBytes
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "BackupSizeInBytes", protocol.Int64Value(v), metadata)
+    }
+    if s.CompletionDate != nil {
+        v := *s.CompletionDate
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "CompletionDate",
+            protocol.TimeValue{V: v, Format: protocol.UnixTimeFormatName, QuotedFormatTime: true}, metadata)
+    }
+    if s.CopyJobId != nil {
+        v := *s.CopyJobId
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "CopyJobId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.CreatedBy != nil {
+        v := s.CreatedBy
+
+        metadata := protocol.Metadata{}
+        e.SetFields(protocol.BodyTarget, "CreatedBy", v, metadata)
+    }
+    if s.CreationDate != nil {
+        v := *s.CreationDate
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "CreationDate",
+            protocol.TimeValue{V: v, Format: protocol.UnixTimeFormatName, QuotedFormatTime: true}, metadata)
+    }
+    if s.DestinationBackupVaultArn != nil {
+        v := *s.DestinationBackupVaultArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "DestinationBackupVaultArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.DestinationRecoveryPointArn != nil {
+        v := *s.DestinationRecoveryPointArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "DestinationRecoveryPointArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.IamRoleArn != nil {
+        v := *s.IamRoleArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "IamRoleArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.ResourceArn != nil {
+        v := *s.ResourceArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "ResourceArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.ResourceType != nil {
+        v := *s.ResourceType
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "ResourceType", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.SourceBackupVaultArn != nil {
+        v := *s.SourceBackupVaultArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "SourceBackupVaultArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if s.SourceRecoveryPointArn != nil {
+        v := *s.SourceRecoveryPointArn
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "SourceRecoveryPointArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata)
+    }
+    if len(s.State) > 0 {
+        v := s.State
+
+        metadata := protocol.Metadata{}
+        e.SetValue(protocol.BodyTarget, "State",
protocol.QuotedValue{ValueMarshaler: v}, metadata) + } + if s.StatusMessage != nil { + v := *s.StatusMessage + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "StatusMessage", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + // Contains an array of Transition objects specifying how long in days before // a recovery point transitions to cold storage or is deleted. +// +// Backups transitioned to cold storage must be stored in cold storage for a +// minimum of 90 days. Therefore, on the console, the “expire after days” +// setting must be 90 days greater than the “transition to cold after days” +// setting. The “transition to cold after days” setting cannot be changed +// after a backup has been transitioned to cold. type Lifecycle struct { _ struct{} `type:"structure"` // Specifies the number of days after creation that a recovery point is deleted. - // Must be greater than MoveToColdStorageAfterDays. + // Must be greater than 90 days plus MoveToColdStorageAfterDays. DeleteAfterDays *int64 `type:"long"` // Specifies the number of days after creation that a recovery point is moved diff --git a/service/backup/backupiface/interface.go b/service/backup/backupiface/interface.go index ec331643ab4..d013edef749 100644 --- a/service/backup/backupiface/interface.go +++ b/service/backup/backupiface/interface.go @@ -83,6 +83,8 @@ type ClientAPI interface { DescribeBackupVaultRequest(*backup.DescribeBackupVaultInput) backup.DescribeBackupVaultRequest + DescribeCopyJobRequest(*backup.DescribeCopyJobInput) backup.DescribeCopyJobRequest + DescribeProtectedResourceRequest(*backup.DescribeProtectedResourceInput) backup.DescribeProtectedResourceRequest DescribeRecoveryPointRequest(*backup.DescribeRecoveryPointInput) backup.DescribeRecoveryPointRequest @@ -119,6 +121,8 @@ type ClientAPI interface { ListBackupVaultsRequest(*backup.ListBackupVaultsInput) backup.ListBackupVaultsRequest + ListCopyJobsRequest(*backup.ListCopyJobsInput) backup.ListCopyJobsRequest + ListProtectedResourcesRequest(*backup.ListProtectedResourcesInput) backup.ListProtectedResourcesRequest ListRecoveryPointsByBackupVaultRequest(*backup.ListRecoveryPointsByBackupVaultInput) backup.ListRecoveryPointsByBackupVaultRequest @@ -135,6 +139,8 @@ type ClientAPI interface { StartBackupJobRequest(*backup.StartBackupJobInput) backup.StartBackupJobRequest + StartCopyJobRequest(*backup.StartCopyJobInput) backup.StartCopyJobRequest + StartRestoreJobRequest(*backup.StartRestoreJobInput) backup.StartRestoreJobRequest StopBackupJobRequest(*backup.StopBackupJobInput) backup.StopBackupJobRequest diff --git a/service/batch/api_op_DeregisterJobDefinition.go b/service/batch/api_op_DeregisterJobDefinition.go index 6f1b6b8b051..3dc7c86ce51 100644 --- a/service/batch/api_op_DeregisterJobDefinition.go +++ b/service/batch/api_op_DeregisterJobDefinition.go @@ -71,7 +71,8 @@ const opDeregisterJobDefinition = "DeregisterJobDefinition" // DeregisterJobDefinitionRequest returns a request value for making API operation for // AWS Batch. // -// Deregisters an AWS Batch job definition. +// Deregisters an AWS Batch job definition. Job definitions will be permanently +// deleted after 180 days. // // // Example sending a request using DeregisterJobDefinitionRequest. 
// req := client.DeregisterJobDefinitionRequest(params)
diff --git a/service/batch/api_op_SubmitJob.go b/service/batch/api_op_SubmitJob.go
index 2513c7ce650..eb56ef08f22 100644
--- a/service/batch/api_op_SubmitJob.go
+++ b/service/batch/api_op_SubmitJob.go
@@ -38,8 +38,9 @@ type SubmitJobInput struct {
 	// begin.
 	DependsOn []JobDependency `locationName:"dependsOn" type:"list"`
 
-	// The job definition used by this job. This value can be either a name:revision
-	// or the Amazon Resource Name (ARN) for the job definition.
+	// The job definition used by this job. This value can be one of name, name:revision,
+	// or the Amazon Resource Name (ARN) for the job definition. If name is specified
+	// without a revision then the latest active revision is used.
 	//
 	// JobDefinition is a required field
 	JobDefinition *string `locationName:"jobDefinition" type:"string" required:"true"`
diff --git a/service/batch/api_types.go b/service/batch/api_types.go
index cf06314d003..03ff6feafe1 100644
--- a/service/batch/api_types.go
+++ b/service/batch/api_types.go
@@ -442,13 +442,16 @@ type ComputeResource struct {
 	// limits (https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-resource-limits.html).
 	// If this is not specified, the default is BEST_FIT, which will use only the
 	// best fitting instance type, waiting for additional capacity if it's not available.
-	// This allocation strategy keeps costs lower but can limit scaling. BEST_FIT_PROGRESSIVE
-	// will select an additional instance type that is large enough to meet the
-	// requirements of the jobs in the queue, with a preference for an instance
-	// type with a lower cost. SPOT_CAPACITY_OPTIMIZED is only available for Spot
-	// Instance compute resources and will select an additional instance type that
-	// is large enough to meet the requirements of the jobs in the queue, with a
-	// preference for an instance type that is less likely to be interrupted.
+	// This allocation strategy keeps costs lower but can limit scaling. If you
+	// are using Spot Fleets with BEST_FIT then the Spot Fleet IAM Role must be
+	// specified. BEST_FIT_PROGRESSIVE will select additional instance types that
+	// are large enough to meet the requirements of the jobs in the queue, with
+	// a preference for instance types with a lower cost per vCPU. SPOT_CAPACITY_OPTIMIZED
+	// is only available for Spot Instance compute resources and will select additional
+	// instance types that are large enough to meet the requirements of the jobs
+	// in the queue, with a preference for instance types that are less likely to
+	// be interrupted. For more information, see Allocation Strategies (https://docs.aws.amazon.com/batch/latest/userguide/allocation-strategies.html)
+	// in the AWS Batch User Guide.
 	AllocationStrategy CRAllocationStrategy `locationName:"allocationStrategy" type:"string" enum:"true"`
 
 	// The maximum percentage that a Spot Instance price can be when compared with
@@ -525,8 +528,9 @@ type ComputeResource struct {
 	SecurityGroupIds []string `locationName:"securityGroupIds" type:"list"`
 
 	// The Amazon Resource Name (ARN) of the Amazon EC2 Spot Fleet IAM role applied
-	// to a SPOT compute environment. For more information, see Amazon EC2 Spot
-	// Fleet Role (https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html)
+	// to a SPOT compute environment. This role is required if the allocation strategy
+	// is set to BEST_FIT or if the allocation strategy is not specified.
For more + // information, see Amazon EC2 Spot Fleet Role (https://docs.aws.amazon.com/batch/latest/userguide/spot_fleet_IAM_role.html) // in the AWS Batch User Guide. SpotIamFleetRole *string `locationName:"spotIamFleetRole" type:"string"` @@ -1730,7 +1734,7 @@ type JobDetail struct { // state. CreatedAt *int64 `locationName:"createdAt" type:"long"` - // A list of job names or IDs on which this job depends. + // A list of job IDs on which this job depends. DependsOn []JobDependency `locationName:"dependsOn" type:"list"` // The job definition that is used by this job. diff --git a/service/chime/api_enums.go b/service/chime/api_enums.go index 256a2315fe2..eb15dee9c84 100644 --- a/service/chime/api_enums.go +++ b/service/chime/api_enums.go @@ -323,6 +323,23 @@ func (enum RoomMembershipRole) MarshalValueBuf(b []byte) ([]byte, error) { return append(b, enum...), nil } +type UserType string + +// Enum values for UserType +const ( + UserTypePrivateUser UserType = "PrivateUser" + UserTypeSharedDevice UserType = "SharedDevice" +) + +func (enum UserType) MarshalValue() (string, error) { + return string(enum), nil +} + +func (enum UserType) MarshalValueBuf(b []byte) ([]byte, error) { + b = b[0:0] + return append(b, enum...), nil +} + type VoiceConnectorAwsRegion string // Enum values for VoiceConnectorAwsRegion diff --git a/service/chime/api_op_AssociateSigninDelegateGroupsWithAccount.go b/service/chime/api_op_AssociateSigninDelegateGroupsWithAccount.go new file mode 100644 index 00000000000..6b4e53a4b3a --- /dev/null +++ b/service/chime/api_op_AssociateSigninDelegateGroupsWithAccount.go @@ -0,0 +1,156 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package chime + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/internal/awsutil" + "github.com/aws/aws-sdk-go-v2/private/protocol" +) + +type AssociateSigninDelegateGroupsWithAccountInput struct { + _ struct{} `type:"structure"` + + // The Amazon Chime account ID. + // + // AccountId is a required field + AccountId *string `location:"uri" locationName:"accountId" type:"string" required:"true"` + + // The sign-in delegate groups. + // + // SigninDelegateGroups is a required field + SigninDelegateGroups []SigninDelegateGroup `type:"list" required:"true"` +} + +// String returns the string representation +func (s AssociateSigninDelegateGroupsWithAccountInput) String() string { + return awsutil.Prettify(s) +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *AssociateSigninDelegateGroupsWithAccountInput) Validate() error { + invalidParams := aws.ErrInvalidParams{Context: "AssociateSigninDelegateGroupsWithAccountInput"} + + if s.AccountId == nil { + invalidParams.Add(aws.NewErrParamRequired("AccountId")) + } + + if s.SigninDelegateGroups == nil { + invalidParams.Add(aws.NewErrParamRequired("SigninDelegateGroups")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. 
+func (s AssociateSigninDelegateGroupsWithAccountInput) MarshalFields(e protocol.FieldEncoder) error { + e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{}) + + if s.SigninDelegateGroups != nil { + v := s.SigninDelegateGroups + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "SigninDelegateGroups", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddFields(v1) + } + ls0.End() + + } + if s.AccountId != nil { + v := *s.AccountId + + metadata := protocol.Metadata{} + e.SetValue(protocol.PathTarget, "accountId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + +type AssociateSigninDelegateGroupsWithAccountOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s AssociateSigninDelegateGroupsWithAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s AssociateSigninDelegateGroupsWithAccountOutput) MarshalFields(e protocol.FieldEncoder) error { + return nil +} + +const opAssociateSigninDelegateGroupsWithAccount = "AssociateSigninDelegateGroupsWithAccount" + +// AssociateSigninDelegateGroupsWithAccountRequest returns a request value for making API operation for +// Amazon Chime. +// +// Associates the specified sign-in delegate groups with the specified Amazon +// Chime account. +// +// // Example sending a request using AssociateSigninDelegateGroupsWithAccountRequest. +// req := client.AssociateSigninDelegateGroupsWithAccountRequest(params) +// resp, err := req.Send(context.TODO()) +// if err == nil { +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/chime-2018-05-01/AssociateSigninDelegateGroupsWithAccount +func (c *Client) AssociateSigninDelegateGroupsWithAccountRequest(input *AssociateSigninDelegateGroupsWithAccountInput) AssociateSigninDelegateGroupsWithAccountRequest { + op := &aws.Operation{ + Name: opAssociateSigninDelegateGroupsWithAccount, + HTTPMethod: "POST", + HTTPPath: "/accounts/{accountId}?operation=associate-signin-delegate-groups", + } + + if input == nil { + input = &AssociateSigninDelegateGroupsWithAccountInput{} + } + + req := c.newRequest(op, input, &AssociateSigninDelegateGroupsWithAccountOutput{}) + return AssociateSigninDelegateGroupsWithAccountRequest{Request: req, Input: input, Copy: c.AssociateSigninDelegateGroupsWithAccountRequest} +} + +// AssociateSigninDelegateGroupsWithAccountRequest is the request type for the +// AssociateSigninDelegateGroupsWithAccount API operation. +type AssociateSigninDelegateGroupsWithAccountRequest struct { + *aws.Request + Input *AssociateSigninDelegateGroupsWithAccountInput + Copy func(*AssociateSigninDelegateGroupsWithAccountInput) AssociateSigninDelegateGroupsWithAccountRequest +} + +// Send marshals and sends the AssociateSigninDelegateGroupsWithAccount API request. 
+func (r AssociateSigninDelegateGroupsWithAccountRequest) Send(ctx context.Context) (*AssociateSigninDelegateGroupsWithAccountResponse, error) { + r.Request.SetContext(ctx) + err := r.Request.Send() + if err != nil { + return nil, err + } + + resp := &AssociateSigninDelegateGroupsWithAccountResponse{ + AssociateSigninDelegateGroupsWithAccountOutput: r.Request.Data.(*AssociateSigninDelegateGroupsWithAccountOutput), + response: &aws.Response{Request: r.Request}, + } + + return resp, nil +} + +// AssociateSigninDelegateGroupsWithAccountResponse is the response type for the +// AssociateSigninDelegateGroupsWithAccount API operation. +type AssociateSigninDelegateGroupsWithAccountResponse struct { + *AssociateSigninDelegateGroupsWithAccountOutput + + response *aws.Response +} + +// SDKResponseMetdata returns the response metadata for the +// AssociateSigninDelegateGroupsWithAccount request. +func (r *AssociateSigninDelegateGroupsWithAccountResponse) SDKResponseMetdata() *aws.Response { + return r.response +} diff --git a/service/chime/api_op_BatchSuspendUser.go b/service/chime/api_op_BatchSuspendUser.go index 6247aeaad5e..faf1d0fd9d0 100644 --- a/service/chime/api_op_BatchSuspendUser.go +++ b/service/chime/api_op_BatchSuspendUser.go @@ -113,7 +113,7 @@ const opBatchSuspendUser = "BatchSuspendUser" // Chime Accounts (https://docs.aws.amazon.com/chime/latest/ag/manage-chime-account.html) // in the Amazon Chime Administration Guide. // -// Users suspended from a Team account are dissasociated from the account, but +// Users suspended from a Team account are disassociated from the account, but // they can continue to use Amazon Chime as free users. To remove the suspension // from suspended Team account users, invite them to the Team account again. // You can use the InviteUsers action to do so. diff --git a/service/chime/api_op_CreateMeeting.go b/service/chime/api_op_CreateMeeting.go index 4bc6344e70d..fd9ef7b80f3 100644 --- a/service/chime/api_op_CreateMeeting.go +++ b/service/chime/api_op_CreateMeeting.go @@ -19,7 +19,9 @@ type CreateMeetingInput struct { // ClientRequestToken is a required field ClientRequestToken *string `min:"2" type:"string" required:"true" idempotencyToken:"true" sensitive:"true"` - // The Region in which to create the meeting. Available values: us-east-1, us-west-2. + // The Region in which to create the meeting. Available values: ap-northeast-1, + // ap-southeast-1, ap-southeast-2, ca-central-1, eu-central-1, eu-north-1, eu-west-1, + // eu-west-2, eu-west-3, sa-east-1, us-east-1, us-east-2, us-west-1, us-west-2. MediaRegion *string `type:"string"` // Reserved. diff --git a/service/chime/api_op_CreateUser.go b/service/chime/api_op_CreateUser.go new file mode 100644 index 00000000000..c0561eba5fc --- /dev/null +++ b/service/chime/api_op_CreateUser.go @@ -0,0 +1,170 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package chime + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/internal/awsutil" + "github.com/aws/aws-sdk-go-v2/private/protocol" +) + +type CreateUserInput struct { + _ struct{} `type:"structure"` + + // The Amazon Chime account ID. + // + // AccountId is a required field + AccountId *string `location:"uri" locationName:"accountId" type:"string" required:"true"` + + // The user's email address. + Email *string `type:"string" sensitive:"true"` + + // The user type. + UserType UserType `type:"string" enum:"true"` + + // The user name. 
+ Username *string `type:"string"` +} + +// String returns the string representation +func (s CreateUserInput) String() string { + return awsutil.Prettify(s) +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *CreateUserInput) Validate() error { + invalidParams := aws.ErrInvalidParams{Context: "CreateUserInput"} + + if s.AccountId == nil { + invalidParams.Add(aws.NewErrParamRequired("AccountId")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s CreateUserInput) MarshalFields(e protocol.FieldEncoder) error { + e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{}) + + if s.Email != nil { + v := *s.Email + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "Email", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if len(s.UserType) > 0 { + v := s.UserType + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "UserType", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } + if s.Username != nil { + v := *s.Username + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "Username", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.AccountId != nil { + v := *s.AccountId + + metadata := protocol.Metadata{} + e.SetValue(protocol.PathTarget, "accountId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + +type CreateUserOutput struct { + _ struct{} `type:"structure"` + + // The user on the Amazon Chime account. + User *User `type:"structure"` +} + +// String returns the string representation +func (s CreateUserOutput) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s CreateUserOutput) MarshalFields(e protocol.FieldEncoder) error { + if s.User != nil { + v := s.User + + metadata := protocol.Metadata{} + e.SetFields(protocol.BodyTarget, "User", v, metadata) + } + return nil +} + +const opCreateUser = "CreateUser" + +// CreateUserRequest returns a request value for making API operation for +// Amazon Chime. +// +// Creates a user under the specified Amazon Chime account. +// +// // Example sending a request using CreateUserRequest. +// req := client.CreateUserRequest(params) +// resp, err := req.Send(context.TODO()) +// if err == nil { +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/chime-2018-05-01/CreateUser +func (c *Client) CreateUserRequest(input *CreateUserInput) CreateUserRequest { + op := &aws.Operation{ + Name: opCreateUser, + HTTPMethod: "POST", + HTTPPath: "/accounts/{accountId}/users?operation=create", + } + + if input == nil { + input = &CreateUserInput{} + } + + req := c.newRequest(op, input, &CreateUserOutput{}) + return CreateUserRequest{Request: req, Input: input, Copy: c.CreateUserRequest} +} + +// CreateUserRequest is the request type for the +// CreateUser API operation. +type CreateUserRequest struct { + *aws.Request + Input *CreateUserInput + Copy func(*CreateUserInput) CreateUserRequest +} + +// Send marshals and sends the CreateUser API request. 
+func (r CreateUserRequest) Send(ctx context.Context) (*CreateUserResponse, error) { + r.Request.SetContext(ctx) + err := r.Request.Send() + if err != nil { + return nil, err + } + + resp := &CreateUserResponse{ + CreateUserOutput: r.Request.Data.(*CreateUserOutput), + response: &aws.Response{Request: r.Request}, + } + + return resp, nil +} + +// CreateUserResponse is the response type for the +// CreateUser API operation. +type CreateUserResponse struct { + *CreateUserOutput + + response *aws.Response +} + +// SDKResponseMetdata returns the response metadata for the +// CreateUser request. +func (r *CreateUserResponse) SDKResponseMetdata() *aws.Response { + return r.response +} diff --git a/service/chime/api_op_DisassociateSigninDelegateGroupsFromAccount.go b/service/chime/api_op_DisassociateSigninDelegateGroupsFromAccount.go new file mode 100644 index 00000000000..79a706c7d9c --- /dev/null +++ b/service/chime/api_op_DisassociateSigninDelegateGroupsFromAccount.go @@ -0,0 +1,159 @@ +// Code generated by private/model/cli/gen-api/main.go. DO NOT EDIT. + +package chime + +import ( + "context" + + "github.com/aws/aws-sdk-go-v2/aws" + "github.com/aws/aws-sdk-go-v2/internal/awsutil" + "github.com/aws/aws-sdk-go-v2/private/protocol" +) + +type DisassociateSigninDelegateGroupsFromAccountInput struct { + _ struct{} `type:"structure"` + + // The Amazon Chime account ID. + // + // AccountId is a required field + AccountId *string `location:"uri" locationName:"accountId" type:"string" required:"true"` + + // The sign-in delegate group names. + // + // GroupNames is a required field + GroupNames []string `min:"1" type:"list" required:"true"` +} + +// String returns the string representation +func (s DisassociateSigninDelegateGroupsFromAccountInput) String() string { + return awsutil.Prettify(s) +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *DisassociateSigninDelegateGroupsFromAccountInput) Validate() error { + invalidParams := aws.ErrInvalidParams{Context: "DisassociateSigninDelegateGroupsFromAccountInput"} + + if s.AccountId == nil { + invalidParams.Add(aws.NewErrParamRequired("AccountId")) + } + + if s.GroupNames == nil { + invalidParams.Add(aws.NewErrParamRequired("GroupNames")) + } + if s.GroupNames != nil && len(s.GroupNames) < 1 { + invalidParams.Add(aws.NewErrParamMinLen("GroupNames", 1)) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. 
+func (s DisassociateSigninDelegateGroupsFromAccountInput) MarshalFields(e protocol.FieldEncoder) error { + e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{}) + + if s.GroupNames != nil { + v := s.GroupNames + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "GroupNames", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddValue(protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v1)}) + } + ls0.End() + + } + if s.AccountId != nil { + v := *s.AccountId + + metadata := protocol.Metadata{} + e.SetValue(protocol.PathTarget, "accountId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + +type DisassociateSigninDelegateGroupsFromAccountOutput struct { + _ struct{} `type:"structure"` +} + +// String returns the string representation +func (s DisassociateSigninDelegateGroupsFromAccountOutput) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s DisassociateSigninDelegateGroupsFromAccountOutput) MarshalFields(e protocol.FieldEncoder) error { + return nil +} + +const opDisassociateSigninDelegateGroupsFromAccount = "DisassociateSigninDelegateGroupsFromAccount" + +// DisassociateSigninDelegateGroupsFromAccountRequest returns a request value for making API operation for +// Amazon Chime. +// +// Disassociates the specified sign-in delegate groups from the specified Amazon +// Chime account. +// +// // Example sending a request using DisassociateSigninDelegateGroupsFromAccountRequest. +// req := client.DisassociateSigninDelegateGroupsFromAccountRequest(params) +// resp, err := req.Send(context.TODO()) +// if err == nil { +// fmt.Println(resp) +// } +// +// Please also see https://docs.aws.amazon.com/goto/WebAPI/chime-2018-05-01/DisassociateSigninDelegateGroupsFromAccount +func (c *Client) DisassociateSigninDelegateGroupsFromAccountRequest(input *DisassociateSigninDelegateGroupsFromAccountInput) DisassociateSigninDelegateGroupsFromAccountRequest { + op := &aws.Operation{ + Name: opDisassociateSigninDelegateGroupsFromAccount, + HTTPMethod: "POST", + HTTPPath: "/accounts/{accountId}?operation=disassociate-signin-delegate-groups", + } + + if input == nil { + input = &DisassociateSigninDelegateGroupsFromAccountInput{} + } + + req := c.newRequest(op, input, &DisassociateSigninDelegateGroupsFromAccountOutput{}) + return DisassociateSigninDelegateGroupsFromAccountRequest{Request: req, Input: input, Copy: c.DisassociateSigninDelegateGroupsFromAccountRequest} +} + +// DisassociateSigninDelegateGroupsFromAccountRequest is the request type for the +// DisassociateSigninDelegateGroupsFromAccount API operation. +type DisassociateSigninDelegateGroupsFromAccountRequest struct { + *aws.Request + Input *DisassociateSigninDelegateGroupsFromAccountInput + Copy func(*DisassociateSigninDelegateGroupsFromAccountInput) DisassociateSigninDelegateGroupsFromAccountRequest +} + +// Send marshals and sends the DisassociateSigninDelegateGroupsFromAccount API request. 
+func (r DisassociateSigninDelegateGroupsFromAccountRequest) Send(ctx context.Context) (*DisassociateSigninDelegateGroupsFromAccountResponse, error) { + r.Request.SetContext(ctx) + err := r.Request.Send() + if err != nil { + return nil, err + } + + resp := &DisassociateSigninDelegateGroupsFromAccountResponse{ + DisassociateSigninDelegateGroupsFromAccountOutput: r.Request.Data.(*DisassociateSigninDelegateGroupsFromAccountOutput), + response: &aws.Response{Request: r.Request}, + } + + return resp, nil +} + +// DisassociateSigninDelegateGroupsFromAccountResponse is the response type for the +// DisassociateSigninDelegateGroupsFromAccount API operation. +type DisassociateSigninDelegateGroupsFromAccountResponse struct { + *DisassociateSigninDelegateGroupsFromAccountOutput + + response *aws.Response +} + +// SDKResponseMetdata returns the response metadata for the +// DisassociateSigninDelegateGroupsFromAccount request. +func (r *DisassociateSigninDelegateGroupsFromAccountResponse) SDKResponseMetdata() *aws.Response { + return r.response +} diff --git a/service/chime/api_op_GetRoom.go b/service/chime/api_op_GetRoom.go index 76447cc6f0d..4928b2ca72f 100644 --- a/service/chime/api_op_GetRoom.go +++ b/service/chime/api_op_GetRoom.go @@ -94,7 +94,7 @@ const opGetRoom = "GetRoom" // GetRoomRequest returns a request value for making API operation for // Amazon Chime. // -// Retrieves room details, such as name. +// Retrieves room details, such as the room name. // // // Example sending a request using GetRoomRequest. // req := client.GetRoomRequest(params) diff --git a/service/chime/api_op_InviteUsers.go b/service/chime/api_op_InviteUsers.go index f8a713acdba..ae60d26fcc1 100644 --- a/service/chime/api_op_InviteUsers.go +++ b/service/chime/api_op_InviteUsers.go @@ -22,6 +22,9 @@ type InviteUsersInput struct { // // UserEmailList is a required field UserEmailList []string `type:"list" required:"true"` + + // The user type. + UserType UserType `type:"string" enum:"true"` } // String returns the string representation @@ -63,6 +66,12 @@ func (s InviteUsersInput) MarshalFields(e protocol.FieldEncoder) error { ls0.End() } + if len(s.UserType) > 0 { + v := s.UserType + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "UserType", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } if s.AccountId != nil { v := *s.AccountId diff --git a/service/chime/api_op_ListRoomMemberships.go b/service/chime/api_op_ListRoomMemberships.go index e8d01e29468..a5872dfd6d1 100644 --- a/service/chime/api_op_ListRoomMemberships.go +++ b/service/chime/api_op_ListRoomMemberships.go @@ -130,8 +130,8 @@ const opListRoomMemberships = "ListRoomMemberships" // ListRoomMembershipsRequest returns a request value for making API operation for // Amazon Chime. // -// Lists the membership details for the specified room, such as member IDs, -// member email addresses, and member names. +// Lists the membership details for the specified room, such as the members' +// IDs, email addresses, and names. // // // Example sending a request using ListRoomMembershipsRequest. // req := client.ListRoomMembershipsRequest(params) diff --git a/service/chime/api_op_ListUsers.go b/service/chime/api_op_ListUsers.go index 710c362567f..c8388bc6c8b 100644 --- a/service/chime/api_op_ListUsers.go +++ b/service/chime/api_op_ListUsers.go @@ -26,6 +26,9 @@ type ListUsersInput struct { // Optional. The user email address used to filter results. Maximum 1. 
UserEmail *string `location:"querystring" locationName:"user-email" type:"string" sensitive:"true"` + + // The user type. + UserType UserType `location:"querystring" locationName:"user-type" type:"string" enum:"true"` } // String returns the string representation @@ -78,6 +81,12 @@ func (s ListUsersInput) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.QueryTarget, "user-email", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if len(s.UserType) > 0 { + v := s.UserType + + metadata := protocol.Metadata{} + e.SetValue(protocol.QueryTarget, "user-type", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } return nil } diff --git a/service/chime/api_op_UpdateRoomMembership.go b/service/chime/api_op_UpdateRoomMembership.go index 0c0018b8714..40e01c4b592 100644 --- a/service/chime/api_op_UpdateRoomMembership.go +++ b/service/chime/api_op_UpdateRoomMembership.go @@ -118,9 +118,9 @@ const opUpdateRoomMembership = "UpdateRoomMembership" // UpdateRoomMembershipRequest returns a request value for making API operation for // Amazon Chime. // -// Updates room membership details, such as member role. The member role designates -// whether the member is a chat room administrator or a general chat room member. -// Member role can only be updated for user IDs. +// Updates room membership details, such as the member role. The member role +// designates whether the member is a chat room administrator or a general chat +// room member. The member role can be updated only for user IDs. // // // Example sending a request using UpdateRoomMembershipRequest. // req := client.UpdateRoomMembershipRequest(params) diff --git a/service/chime/api_op_UpdateUser.go b/service/chime/api_op_UpdateUser.go index 4b21da2f43c..65d3f70a544 100644 --- a/service/chime/api_op_UpdateUser.go +++ b/service/chime/api_op_UpdateUser.go @@ -18,6 +18,9 @@ type UpdateUserInput struct { // AccountId is a required field AccountId *string `location:"uri" locationName:"accountId" type:"string" required:"true"` + // The Alexa for Business metadata. + AlexaForBusinessMetadata *AlexaForBusinessMetadata `type:"structure"` + // The user license type to update. This must be a supported license type for // the Amazon Chime account that the user belongs to. LicenseType License `type:"string" enum:"true"` @@ -26,6 +29,9 @@ type UpdateUserInput struct { // // UserId is a required field UserId *string `location:"uri" locationName:"userId" type:"string" required:"true"` + + // The user type. 
+ UserType UserType `type:"string" enum:"true"` } // String returns the string representation @@ -55,12 +61,24 @@ func (s *UpdateUserInput) Validate() error { func (s UpdateUserInput) MarshalFields(e protocol.FieldEncoder) error { e.SetValue(protocol.HeaderTarget, "Content-Type", protocol.StringValue("application/json"), protocol.Metadata{}) + if s.AlexaForBusinessMetadata != nil { + v := s.AlexaForBusinessMetadata + + metadata := protocol.Metadata{} + e.SetFields(protocol.BodyTarget, "AlexaForBusinessMetadata", v, metadata) + } if len(s.LicenseType) > 0 { v := s.LicenseType metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "LicenseType", protocol.QuotedValue{ValueMarshaler: v}, metadata) } + if len(s.UserType) > 0 { + v := s.UserType + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "UserType", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } if s.AccountId != nil { v := *s.AccountId diff --git a/service/chime/api_types.go b/service/chime/api_types.go index 4140162ef09..e01edb0ccff 100644 --- a/service/chime/api_types.go +++ b/service/chime/api_types.go @@ -45,6 +45,9 @@ type Account struct { // Name is a required field Name *string `type:"string" required:"true"` + // The sign-in delegate groups associated with the account. + SigninDelegateGroups []SigninDelegateGroup `type:"list"` + // Supported licenses for the Amazon Chime account. SupportedLicenses []License `type:"list"` } @@ -93,6 +96,18 @@ func (s Account) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "Name", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.SigninDelegateGroups != nil { + v := s.SigninDelegateGroups + + metadata := protocol.Metadata{} + ls0 := e.List(protocol.BodyTarget, "SigninDelegateGroups", metadata) + ls0.Start() + for _, v1 := range v { + ls0.ListAddFields(v1) + } + ls0.End() + + } if s.SupportedLicenses != nil { v := s.SupportedLicenses @@ -147,6 +162,40 @@ func (s AccountSettings) MarshalFields(e protocol.FieldEncoder) error { return nil } +// The Alexa for Business metadata associated with an Amazon Chime user, used +// to integrate Alexa for Business with a device. +type AlexaForBusinessMetadata struct { + _ struct{} `type:"structure"` + + // The ARN of the room resource. + AlexaForBusinessRoomArn *string `type:"string" sensitive:"true"` + + // Starts or stops Alexa for Business. + IsAlexaForBusinessEnabled *bool `type:"boolean"` +} + +// String returns the string representation +func (s AlexaForBusinessMetadata) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s AlexaForBusinessMetadata) MarshalFields(e protocol.FieldEncoder) error { + if s.AlexaForBusinessRoomArn != nil { + v := *s.AlexaForBusinessRoomArn + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "AlexaForBusinessRoomArn", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + if s.IsAlexaForBusinessEnabled != nil { + v := *s.IsAlexaForBusinessEnabled + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "IsAlexaForBusinessEnabled", protocol.BoolValue(v), metadata) + } + return nil +} + // An Amazon Chime SDK meeting attendee. Includes a unique AttendeeId and JoinToken. // The JoinToken allows a client to authenticate and join as the specified attendee. // The JoinToken expires when the meeting ends or when DeleteAttendee is called. 
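The Chime hunks above add a `UserType` enum and an `AlexaForBusinessMetadata` structure to the user-facing inputs and types. Below is a minimal sketch of exercising the new fields through the request pattern this patch generates (`req := client.UpdateUserRequest(params)` followed by `req.Send(ctx)`); the account ID, user ID, room ARN, and the `PrivateUser` enum string are illustrative assumptions, not values taken from this patch:

```go
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/chime"
)

func main() {
	// Load region and credentials from the environment and shared config.
	cfg, err := external.LoadDefaultAWSConfig()
	if err != nil {
		panic(err)
	}
	client := chime.New(cfg)

	// Build an UpdateUser request using the UserType and
	// AlexaForBusinessMetadata input fields added in this change.
	// All identifiers below are placeholders.
	req := client.UpdateUserRequest(&chime.UpdateUserInput{
		AccountId: aws.String("example-account-id"),
		UserId:    aws.String("example-user-id"),
		UserType:  chime.UserType("PrivateUser"), // enum value assumed, not shown in this patch
		AlexaForBusinessMetadata: &chime.AlexaForBusinessMetadata{
			IsAlexaForBusinessEnabled: aws.Bool(true),
			AlexaForBusinessRoomArn:   aws.String("arn:aws:a4b:us-east-1:111122223333:room/example"),
		},
	})

	// Send takes a context, consistent with the context-aware operations
	// introduced in this release.
	resp, err := req.Send(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(resp)
}
```

The same two fields are also settable per entry via `UpdateUserRequestItem` for batch updates, as the hunks just below show.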
@@ -646,7 +695,9 @@ type Meeting struct { // The media placement for the meeting. MediaPlacement *MediaPlacement `type:"structure"` - // The Region in which to create the meeting. Available values: us-east-1, us-west-2. + // The Region in which to create the meeting. Available values: ap-northeast-1, + // ap-southeast-1, ap-southeast-2, ca-central-1, eu-central-1, eu-north-1, eu-west-1, + // eu-west-2, eu-west-3, sa-east-1, us-east-1, us-east-2, us-west-1, us-west-2. MediaRegion *string `type:"string"` // The Amazon Chime SDK meeting ID. @@ -1542,6 +1593,31 @@ func (s RoomMembership) MarshalFields(e protocol.FieldEncoder) error { return nil } +// An Active Directory (AD) group whose members are granted permission to act +// as delegates. +type SigninDelegateGroup struct { + _ struct{} `type:"structure"` + + // The group name. + GroupName *string `type:"string"` +} + +// String returns the string representation +func (s SigninDelegateGroup) String() string { + return awsutil.Prettify(s) +} + +// MarshalFields encodes the AWS API shape using the passed in protocol encoder. +func (s SigninDelegateGroup) MarshalFields(e protocol.FieldEncoder) error { + if s.GroupName != nil { + v := *s.GroupName + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "GroupName", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) + } + return nil +} + // The streaming configuration associated with an Amazon Chime Voice Connector. // Specifies whether media streaming is enabled for sending to Amazon Kinesis, // and shows the retention period for the Amazon Kinesis data, in hours. @@ -1849,6 +1925,9 @@ func (s UpdatePhoneNumberRequestItem) MarshalFields(e protocol.FieldEncoder) err type UpdateUserRequestItem struct { _ struct{} `type:"structure"` + // The Alexa for Business metadata. + AlexaForBusinessMetadata *AlexaForBusinessMetadata `type:"structure"` + // The user license type. LicenseType License `type:"string" enum:"true"` @@ -1856,6 +1935,9 @@ type UpdateUserRequestItem struct { // // UserId is a required field UserId *string `type:"string" required:"true"` + + // The user type. + UserType UserType `type:"string" enum:"true"` } // String returns the string representation @@ -1879,6 +1961,12 @@ func (s *UpdateUserRequestItem) Validate() error { // MarshalFields encodes the AWS API shape using the passed in protocol encoder. func (s UpdateUserRequestItem) MarshalFields(e protocol.FieldEncoder) error { + if s.AlexaForBusinessMetadata != nil { + v := s.AlexaForBusinessMetadata + + metadata := protocol.Metadata{} + e.SetFields(protocol.BodyTarget, "AlexaForBusinessMetadata", v, metadata) + } if len(s.LicenseType) > 0 { v := s.LicenseType @@ -1891,6 +1979,12 @@ func (s UpdateUserRequestItem) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "UserId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if len(s.UserType) > 0 { + v := s.UserType + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "UserType", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } return nil } @@ -1901,6 +1995,9 @@ type User struct { // The Amazon Chime account ID. AccountId *string `type:"string"` + // The Alexa for Business metadata. + AlexaForBusinessMetadata *AlexaForBusinessMetadata `type:"structure"` + // The display name of the user. DisplayName *string `type:"string" sensitive:"true"` @@ -1933,6 +2030,9 @@ type User struct { // The user registration status. 
UserRegistrationStatus RegistrationStatus `type:"string" enum:"true"` + + // The user type. + UserType UserType `type:"string" enum:"true"` } // String returns the string representation @@ -1948,6 +2048,12 @@ func (s User) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "AccountId", protocol.QuotedValue{ValueMarshaler: protocol.StringValue(v)}, metadata) } + if s.AlexaForBusinessMetadata != nil { + v := s.AlexaForBusinessMetadata + + metadata := protocol.Metadata{} + e.SetFields(protocol.BodyTarget, "AlexaForBusinessMetadata", v, metadata) + } if s.DisplayName != nil { v := *s.DisplayName @@ -2010,6 +2116,12 @@ func (s User) MarshalFields(e protocol.FieldEncoder) error { metadata := protocol.Metadata{} e.SetValue(protocol.BodyTarget, "UserRegistrationStatus", protocol.QuotedValue{ValueMarshaler: v}, metadata) } + if len(s.UserType) > 0 { + v := s.UserType + + metadata := protocol.Metadata{} + e.SetValue(protocol.BodyTarget, "UserType", protocol.QuotedValue{ValueMarshaler: v}, metadata) + } return nil } diff --git a/service/chime/chimeiface/interface.go b/service/chime/chimeiface/interface.go index 0b6f10ffcd0..153855664fc 100644 --- a/service/chime/chimeiface/interface.go +++ b/service/chime/chimeiface/interface.go @@ -67,6 +67,8 @@ type ClientAPI interface { AssociatePhoneNumbersWithVoiceConnectorGroupRequest(*chime.AssociatePhoneNumbersWithVoiceConnectorGroupInput) chime.AssociatePhoneNumbersWithVoiceConnectorGroupRequest + AssociateSigninDelegateGroupsWithAccountRequest(*chime.AssociateSigninDelegateGroupsWithAccountInput) chime.AssociateSigninDelegateGroupsWithAccountRequest + BatchCreateAttendeeRequest(*chime.BatchCreateAttendeeInput) chime.BatchCreateAttendeeRequest BatchCreateRoomMembershipRequest(*chime.BatchCreateRoomMembershipInput) chime.BatchCreateRoomMembershipRequest @@ -95,6 +97,8 @@ type ClientAPI interface { CreateRoomMembershipRequest(*chime.CreateRoomMembershipInput) chime.CreateRoomMembershipRequest + CreateUserRequest(*chime.CreateUserInput) chime.CreateUserRequest + CreateVoiceConnectorRequest(*chime.CreateVoiceConnectorInput) chime.CreateVoiceConnectorRequest CreateVoiceConnectorGroupRequest(*chime.CreateVoiceConnectorGroupInput) chime.CreateVoiceConnectorGroupRequest @@ -131,6 +135,8 @@ type ClientAPI interface { DisassociatePhoneNumbersFromVoiceConnectorGroupRequest(*chime.DisassociatePhoneNumbersFromVoiceConnectorGroupInput) chime.DisassociatePhoneNumbersFromVoiceConnectorGroupRequest + DisassociateSigninDelegateGroupsFromAccountRequest(*chime.DisassociateSigninDelegateGroupsFromAccountInput) chime.DisassociateSigninDelegateGroupsFromAccountRequest + GetAccountRequest(*chime.GetAccountInput) chime.GetAccountRequest GetAccountSettingsRequest(*chime.GetAccountSettingsInput) chime.GetAccountSettingsRequest diff --git a/service/cloudfront/api_errors.go b/service/cloudfront/api_errors.go index a22997474f3..ce87dc9e401 100644 --- a/service/cloudfront/api_errors.go +++ b/service/cloudfront/api_errors.go @@ -249,7 +249,11 @@ const ( // ErrCodeInvalidWebACLId for service response error code // "InvalidWebACLId". // - // A web ACL id specified in the response body is not valid. + // A web ACL ID specified in the response body is not valid. To specify a web + // ACL created using the latest version of AWS WAF, use the ACL ARN, for example + // arn:aws:wafv2:us-east-1:123456789012:global/webacl/ExampleWebACL/473e64fd-f30b-4765-81a0-62ad96dd167a. 
+	// To specify a web ACL created using AWS WAF Classic, use the ACL ID, for example
+	// 473e64fd-f30b-4765-81a0-62ad96dd167a.
 	ErrCodeInvalidWebACLId = "InvalidWebACLId"
 
 	// ErrCodeMissingBody for service response error code
diff --git a/service/cloudfront/api_types.go b/service/cloudfront/api_types.go
index a6bc1266cf6..410875afa86 100644
--- a/service/cloudfront/api_types.go
+++ b/service/cloudfront/api_types.go
@@ -110,8 +110,11 @@ type AliasICPRecordal struct {
 	//    * SUSPENDED indicates that the associated CNAME does not have a valid
 	//    ICP recordal number.
 	//
-	//    * PENDING indicates that at least one CNAME associated with the distribution
-	//    does not have a valid ICP recordal number.
+	//    * PENDING indicates that CloudFront can't determine the ICP recordal status
+	//    of the CNAME associated with the distribution because there was an error
+	//    in trying to determine the status. You can try again to see if the error
+	//    is resolved, in which case CloudFront returns an APPROVED or SUSPENDED
+	//    status.
 	ICPRecordalStatus ICPRecordalStatus `type:"string" enum:"true"`
 }
 
@@ -362,7 +365,8 @@ type CacheBehavior struct {
 	// or for the default cache behavior in your distribution.
 	FieldLevelEncryptionId *string `type:"string"`
 
-	// A complex type that specifies how CloudFront handles query strings and cookies.
+	// A complex type that specifies how CloudFront handles query strings, cookies,
+	// and HTTP headers.
 	//
 	// ForwardedValues is a required field
 	ForwardedValues *ForwardedValues `type:"structure" required:"true"`
@@ -1356,9 +1360,6 @@ type CustomErrorResponse struct {
 	// CloudFront queries your origin to see whether the problem that caused the
 	// error has been resolved and the requested object is now available.
 	//
-	// If you don't want to specify a value, include an empty element,