diff --git a/CHANGELOG.md b/CHANGELOG.md index 833ffe09b85..bb1c8fb31aa 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,18 @@ +Release v1.38.55 (2021-06-04) +=== + +### Service Client Updates +* `service/autoscaling`: Updates service documentation + * Documentation updates for Amazon EC2 Auto Scaling +* `service/cloudtrail`: Updates service documentation + * AWS CloudTrail supports data events on new service resources, including Amazon DynamoDB tables and S3 Object Lambda access points. +* `service/medialive`: Updates service API and documentation + * Add support for automatically setting the H.264 adaptive quantization and GOP B-frame fields. +* `service/pi`: Updates service API and documentation +* `service/qldb`: Updates service documentation +* `service/rds`: Updates service API, documentation, waiters, paginators, and examples + * Documentation updates for RDS: fixing an outdated link to the RDS documentation in DBInstance$DBInstanceStatus + Release v1.38.54 (2021-06-03) === diff --git a/aws/version.go b/aws/version.go index 572b480d25e..ba613df59b2 100644 --- a/aws/version.go +++ b/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.38.54" +const SDKVersion = "1.38.55" diff --git a/models/apis/autoscaling/2011-01-01/docs-2.json b/models/apis/autoscaling/2011-01-01/docs-2.json index ce911c88ca8..fc8cb8d0770 100644 --- a/models/apis/autoscaling/2011-01-01/docs-2.json +++ b/models/apis/autoscaling/2011-01-01/docs-2.json @@ -325,7 +325,7 @@ "BlockDeviceEbsThroughput": { "base": null, "refs": { - "Ebs$Throughput": "

The throughput to provision for a gp3 volume.

Valid Range: Minimum value of 125. Maximum value of 1000.

" + "Ebs$Throughput": "

The throughput (MiBps) to provision for a gp3 volume.

" } }, "BlockDeviceEbsVolumeSize": { diff --git a/models/apis/cloudtrail/2013-11-01/docs-2.json b/models/apis/cloudtrail/2013-11-01/docs-2.json index 04057b71e6f..661d8332854 100644 --- a/models/apis/cloudtrail/2013-11-01/docs-2.json +++ b/models/apis/cloudtrail/2013-11-01/docs-2.json @@ -6,7 +6,7 @@ "CreateTrail": "

Creates a trail that specifies the settings for delivery of log data to an Amazon S3 bucket.

", "DeleteTrail": "

Deletes a trail. This operation must be called from the region in which the trail was created. DeleteTrail cannot be called on the shadow trails (replicated trails in other regions) of a trail that is enabled in all regions.

", "DescribeTrails": "

Retrieves settings for one or more trails associated with the current region for your account.

", - "GetEventSelectors": "

Describes the settings for the event selectors that you configured for your trail. The information returned for your event selectors includes the following:

For more information, see Logging Data and Management Events for Trails in the AWS CloudTrail User Guide.

", + "GetEventSelectors": "

Describes the settings for the event selectors that you configured for your trail. The information returned for your event selectors includes the following:

For more information, see Logging Data and Management Events for Trails in the AWS CloudTrail User Guide.

", "GetInsightSelectors": "

Describes the settings for the Insights event selectors that you configured for your trail. GetInsightSelectors shows if CloudTrail Insights event logging is enabled on the trail, and if it is, which insight types are enabled. If you run GetInsightSelectors on a trail that does not have Insights events enabled, the operation throws the exception InsightNotEnabledException.

For more information, see Logging CloudTrail Insights Events for Trails in the AWS CloudTrail User Guide.

", "GetTrail": "

Returns settings information for a specified trail.

", "GetTrailStatus": "

Returns a JSON-formatted list of information about the specified trail. Fields include information on delivery errors, Amazon SNS and Amazon S3 errors, and start and stop logging times for each trail. This operation returns trail status from a single region. To return trail status from all regions, you must call the operation on each region.

", @@ -130,7 +130,7 @@ } }, "DataResource": { - "base": "

The Amazon S3 buckets or AWS Lambda functions that you specify in your event selectors for your trail to log data events. Data events provide information about the resource operations performed on or within a resource itself. These are also known as data plane operations. You can specify up to 250 data resources for a trail.

The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors.

If you are using advanced event selectors, the maximum total number of values for all conditions, across all advanced event selectors for the trail, is 500.

The following example demonstrates how logging works when you configure logging of all data events for an S3 bucket named bucket-1. In this example, the CloudTrail user specified an empty prefix, and the option to log both Read and Write data events.

  1. A user uploads an image file to bucket-1.

  2. The PutObject API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.

  3. A user uploads an object to an Amazon S3 bucket named arn:aws:s3:::bucket-2.

  4. The PutObject API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.

The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named MyLambdaFunction, but not for all AWS Lambda functions.

  1. A user runs a script that includes a call to the MyLambdaFunction function and the MyOtherLambdaFunction function.

  2. The Invoke API operation on MyLambdaFunction is an AWS Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for MyLambdaFunction, any invocations of that function are logged. The trail processes and logs the event.

  3. The Invoke API operation on MyOtherLambdaFunction is an AWS Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the Invoke operation for MyOtherLambdaFunction does not match the function specified for the trail. The trail doesn’t log the event.

", + "base": "

The Amazon S3 buckets, AWS Lambda functions, or Amazon DynamoDB tables that you specify in your event selectors for your trail to log data events. Data events provide information about the resource operations performed on or within a resource itself. These are also known as data plane operations. You can specify up to 250 data resources for a trail.

The total number of allowed data resources is 250. This number can be distributed between 1 and 5 event selectors, but the total cannot exceed 250 across all selectors.

If you are using advanced event selectors, the maximum total number of values for all conditions, across all advanced event selectors for the trail, is 500.

The following example demonstrates how logging works when you configure logging of all data events for an S3 bucket named bucket-1. In this example, the CloudTrail user specified an empty prefix, and the option to log both Read and Write data events.

  1. A user uploads an image file to bucket-1.

  2. The PutObject API operation is an Amazon S3 object-level API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified an S3 bucket with an empty prefix, events that occur on any object in that bucket are logged. The trail processes and logs the event.

  3. A user uploads an object to an Amazon S3 bucket named arn:aws:s3:::bucket-2.

  4. The PutObject API operation occurred for an object in an S3 bucket that the CloudTrail user didn't specify for the trail. The trail doesn’t log the event.

The following example demonstrates how logging works when you configure logging of AWS Lambda data events for a Lambda function named MyLambdaFunction, but not for all AWS Lambda functions.

  1. A user runs a script that includes a call to the MyLambdaFunction function and the MyOtherLambdaFunction function.

  2. The Invoke API operation on MyLambdaFunction is an AWS Lambda API. It is recorded as a data event in CloudTrail. Because the CloudTrail user specified logging data events for MyLambdaFunction, any invocations of that function are logged. The trail processes and logs the event.

  3. The Invoke API operation on MyOtherLambdaFunction is an AWS Lambda API. Because the CloudTrail user did not specify logging data events for all Lambda functions, the Invoke operation for MyOtherLambdaFunction does not match the function specified for the trail. The trail doesn’t log the event.

", "refs": { "DataResources$member": null } @@ -138,13 +138,13 @@ "DataResourceValues": { "base": null, "refs": { - "DataResource$Values": "

An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified objects.

" + "DataResource$Values": "

An array of Amazon Resource Name (ARN) strings or partial ARN strings for the specified objects.

" } }, "DataResources": { "base": null, "refs": { - "EventSelector$DataResources": "

CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda functions. You can specify up to 250 resources for an individual event selector, but the total number of data resources cannot exceed 250 across all event selectors in a trail. This limit does not apply if you configure resource logging for all data events.

For more information, see Data Events and Limits in AWS CloudTrail in the AWS CloudTrail User Guide.

" + "EventSelector$DataResources": "

CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda functions with basic event selectors. You can specify up to 250 resources for an individual event selector, but the total number of data resources cannot exceed 250 across all event selectors in a trail. This limit does not apply if you configure resource logging for all data events.

For more information, see Data Events and Limits in AWS CloudTrail in the AWS CloudTrail User Guide.

" } }, "Date": { @@ -409,7 +409,7 @@ } }, "KmsKeyNotFoundException": { - "base": "

This exception is thrown when the KMS key does not exist, when the S3 bucket and the KMS key are not in the same region, or when the KMS key associated with the SNS topic either does not exist or is not in the same region.

", + "base": "

This exception is thrown when the AWS KMS key does not exist, when the S3 bucket and the AWS KMS key are not in the same region, or when the AWS KMS key associated with the SNS topic either does not exist or is not in the same region.

", "refs": { } }, @@ -622,7 +622,7 @@ "SelectorField": { "base": null, "refs": { - "AdvancedFieldSelector$Field": "

A field in an event record on which to filter events to be logged. Supported fields include readOnly, eventCategory, eventSource (for management events), eventName, resources.type, and resources.ARN.

" + "AdvancedFieldSelector$Field": "

A field in an event record on which to filter events to be logged. Supported fields include readOnly, eventCategory, eventSource (for management events), eventName, resources.type, and resources.ARN.

" } }, "SelectorName": { @@ -671,7 +671,7 @@ "CreateTrailResponse$CloudWatchLogsLogGroupArn": "

Specifies the Amazon Resource Name (ARN) of the log group to which CloudTrail logs will be delivered.

", "CreateTrailResponse$CloudWatchLogsRoleArn": "

Specifies the role for the CloudWatch Logs endpoint to assume to write to a user's log group.

", "CreateTrailResponse$KmsKeyId": "

Specifies the KMS key ID that encrypts the logs delivered by CloudTrail. The value is a fully specified ARN to a KMS key in the format:

arn:aws:kms:us-east-2:123456789012:key/12345678-1234-1234-1234-123456789012

", - "DataResource$Type": "

The resource type in which you want to log data events. You can specify AWS::S3::Object or AWS::Lambda::Function resources.

The AWS::S3Outposts::Object resource type is not valid in basic event selectors. To log data events on this resource type, use advanced event selectors.

", + "DataResource$Type": "

The resource type in which you want to log data events. You can specify AWS::S3::Object, AWS::Lambda::Function, or AWS::DynamoDB::Table resources.

The AWS::S3Outposts::Object, AWS::ManagedBlockchain::Node, and AWS::S3ObjectLambda::AccessPoint resource types are not valid in basic event selectors. To log data events on these resource types, use advanced event selectors.

", "DataResourceValues$member": null, "DeleteTrailRequest$Name": "

Specifies the name or the CloudTrail ARN of the trail to be deleted. The format of a trail ARN is: arn:aws:cloudtrail:us-east-2:123456789012:trail/MyTrail

", "Event$EventId": "

The CloudTrail ID of the event returned.

", diff --git a/models/apis/medialive/2017-10-14/api-2.json b/models/apis/medialive/2017-10-14/api-2.json index fcaeed62c98..3f0942f8e0d 100644 --- a/models/apis/medialive/2017-10-14/api-2.json +++ b/models/apis/medialive/2017-10-14/api-2.json @@ -5600,10 +5600,6 @@ "shape": "EbuTtDFillLineGapControl", "locationName": "fillLineGap" }, - "CopyrightHolder": { - "shape": "__stringMax1000", - "locationName": "copyrightHolder" - }, "FontFamily": { "shape": "__string", "locationName": "fontFamily" @@ -6047,6 +6043,7 @@ "H264AdaptiveQuantization": { "type": "string", "enum": [ + "AUTO", "HIGH", "HIGHER", "LOW", @@ -6415,6 +6412,7 @@ "H265AdaptiveQuantization": { "type": "string", "enum": [ + "AUTO", "HIGH", "HIGHER", "LOW", diff --git a/models/apis/medialive/2017-10-14/docs-2.json b/models/apis/medialive/2017-10-14/docs-2.json index 6f6b026f118..5afc81fd371 100644 --- a/models/apis/medialive/2017-10-14/docs-2.json +++ b/models/apis/medialive/2017-10-14/docs-2.json @@ -1080,7 +1080,7 @@ "H264AdaptiveQuantization": { "base": "H264 Adaptive Quantization", "refs": { - "H264Settings$AdaptiveQuantization": "Adaptive quantization. Allows intra-frame quantizers to vary to improve visual quality." + "H264Settings$AdaptiveQuantization": "Enables or disables adaptive quantization, which is a technique MediaLive can apply to video on a frame-by-frame basis to produce more compression without losing quality. There are three types of adaptive quantization: flicker, spatial, and temporal. Set the field in one of these ways: Set to Auto. Recommended. For each type of AQ, MediaLive will determine if AQ is needed, and if so, the appropriate strength. Set a strength (a value other than Auto or Disable). This strength will apply to any of the AQ fields that you choose to enable. Set to Disabled to disable all types of adaptive quantization." 
} }, "H264ColorMetadata": { @@ -1110,7 +1110,7 @@ "H264FlickerAq": { "base": "H264 Flicker Aq", "refs": { - "H264Settings$FlickerAq": "If set to enabled, adjust quantization within each frame to reduce flicker or 'pop' on I-frames." + "H264Settings$FlickerAq": "Flicker AQ makes adjustments within each frame to reduce flicker or 'pop' on I-frames. The value to enter in this field depends on the value in the Adaptive quantization field: If you have set the Adaptive quantization field to Auto, MediaLive ignores any value in this field. MediaLive will determine if flicker AQ is appropriate and will apply the appropriate strength. If you have set the Adaptive quantization field to a strength, you can set this field to Enabled or Disabled. Enabled: MediaLive will apply flicker AQ using the specified strength. Disabled: MediaLive won't apply flicker AQ. If you have set the Adaptive quantization to Disabled, MediaLive ignores any value in this field and doesn't apply flicker AQ." } }, "H264ForceFieldPictures": { @@ -1194,7 +1194,7 @@ "H264SpatialAq": { "base": "H264 Spatial Aq", "refs": { - "H264Settings$SpatialAq": "If set to enabled, adjust quantization within each frame based on spatial variation of content complexity." + "H264Settings$SpatialAq": "Spatial AQ makes adjustments within each frame based on spatial variation of content complexity. The value to enter in this field depends on the value in the Adaptive quantization field: If you have set the Adaptive quantization field to Auto, MediaLive ignores any value in this field. MediaLive will determine if spatial AQ is appropriate and will apply the appropriate strength. If you have set the Adaptive quantization field to a strength, you can set this field to Enabled or Disabled. Enabled: MediaLive will apply spatial AQ using the specified strength. Disabled: MediaLive won't apply spatial AQ. If you have set the Adaptive quantization to Disabled, MediaLive ignores any value in this field and doesn't apply spatial AQ." 
} }, "H264SubGopLength": { @@ -1212,7 +1212,7 @@ "H264TemporalAq": { "base": "H264 Temporal Aq", "refs": { - "H264Settings$TemporalAq": "If set to enabled, adjust quantization within each frame based on temporal variation of content complexity." + "H264Settings$TemporalAq": "Temporal AQ makes adjustments within each frame based on temporal variation of content complexity. The value to enter in this field depends on the value in the Adaptive quantization field: If you have set the Adaptive quantization field to Auto, MediaLive ignores any value in this field. MediaLive will determine if temporal AQ is appropriate and will apply the appropriate strength. If you have set the Adaptive quantization field to a strength, you can set this field to Enabled or Disabled. Enabled: MediaLive will apply temporal AQ using the specified strength. Disabled: MediaLive won't apply temporal AQ. If you have set the Adaptive quantization to Disabled, MediaLive ignores any value in this field and doesn't apply temporal AQ." } }, "H264TimecodeInsertionBehavior": { @@ -3678,7 +3678,7 @@ "__integerMin1Max10": { "base": null, "refs": { - "H264Settings$QvbrQualityLevel": "Controls the target quality for the video encode. Applies only when the rate control mode is QVBR. Set values for the QVBR quality level field and Max bitrate field that suit your most important viewing devices. Recommended values are:\n- Primary screen: Quality level: 8 to 10. Max bitrate: 4M\n- PC or tablet: Quality level: 7. Max bitrate: 1.5M to 3M\n- Smartphone: Quality level: 6. Max bitrate: 1M to 1.5M", + "H264Settings$QvbrQualityLevel": "Controls the target quality for the video encode. Applies only when the rate control mode is QVBR. You can set a target quality or you can let MediaLive determine the best quality. To set a target quality, enter values in the QVBR quality level field and the Max bitrate field. Enter values that suit your most important viewing devices. 
Recommended values are:\n- Primary screen: Quality level: 8 to 10. Max bitrate: 4M\n- PC or tablet: Quality level: 7. Max bitrate: 1.5M to 3M\n- Smartphone: Quality level: 6. Max bitrate: 1M to 1.5M\nTo let MediaLive decide, leave the QVBR quality level field empty, and in Max bitrate enter the maximum rate you want in the video. For more information, see the section called \"Video - rate control mode\" in the MediaLive user guide", "H265Settings$QvbrQualityLevel": "Controls the target quality for the video encode. Applies only when the rate control mode is QVBR. Set values for the QVBR quality level field and Max bitrate field that suit your most important viewing devices. Recommended values are:\n- Primary screen: Quality level: 8 to 10. Max bitrate: 4M\n- PC or tablet: Quality level: 7. Max bitrate: 1.5M to 3M\n- Smartphone: Quality level: 6. Max bitrate: 1M to 1.5M" } }, diff --git a/models/apis/pi/2018-02-27/api-2.json b/models/apis/pi/2018-02-27/api-2.json index 8188b6ac3bf..8cb458830da 100644 --- a/models/apis/pi/2018-02-27/api-2.json +++ b/models/apis/pi/2018-02-27/api-2.json @@ -28,6 +28,20 @@ {"shape":"NotAuthorizedException"} ] }, + "GetDimensionKeyDetails":{ + "name":"GetDimensionKeyDetails", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"GetDimensionKeyDetailsRequest"}, + "output":{"shape":"GetDimensionKeyDetailsResponse"}, + "errors":[ + {"shape":"InvalidArgumentException"}, + {"shape":"InternalServiceError"}, + {"shape":"NotAuthorizedException"} + ] + }, "GetResourceMetrics":{ "name":"GetResourceMetrics", "http":{ @@ -71,16 +85,16 @@ ], "members":{ "ServiceType":{"shape":"ServiceType"}, - "Identifier":{"shape":"String"}, + "Identifier":{"shape":"RequestString"}, "StartTime":{"shape":"ISOTimestamp"}, "EndTime":{"shape":"ISOTimestamp"}, - "Metric":{"shape":"String"}, + "Metric":{"shape":"RequestString"}, "PeriodInSeconds":{"shape":"Integer"}, "GroupBy":{"shape":"DimensionGroup"}, "PartitionBy":{"shape":"DimensionGroup"}, 
"Filter":{"shape":"MetricQueryFilterMap"}, "MaxResults":{"shape":"MaxResults"}, - "NextToken":{"shape":"String"} + "NextToken":{"shape":"NextToken"} } }, "DescribeDimensionKeysResponse":{ @@ -90,15 +104,23 @@ "AlignedEndTime":{"shape":"ISOTimestamp"}, "PartitionKeys":{"shape":"ResponsePartitionKeyList"}, "Keys":{"shape":"DimensionKeyDescriptionList"}, - "NextToken":{"shape":"String"} + "NextToken":{"shape":"NextToken"} } }, + "DetailStatus":{ + "type":"string", + "enum":[ + "AVAILABLE", + "PROCESSING", + "UNAVAILABLE" + ] + }, "DimensionGroup":{ "type":"structure", "required":["Group"], "members":{ - "Group":{"shape":"String"}, - "Dimensions":{"shape":"StringList"}, + "Group":{"shape":"RequestString"}, + "Dimensions":{"shape":"RequestStringList"}, "Limit":{"shape":"Limit"} } }, @@ -114,12 +136,47 @@ "type":"list", "member":{"shape":"DimensionKeyDescription"} }, + "DimensionKeyDetail":{ + "type":"structure", + "members":{ + "Value":{"shape":"String"}, + "Dimension":{"shape":"String"}, + "Status":{"shape":"DetailStatus"} + } + }, + "DimensionKeyDetailList":{ + "type":"list", + "member":{"shape":"DimensionKeyDetail"} + }, "DimensionMap":{ "type":"map", - "key":{"shape":"String"}, - "value":{"shape":"String"} + "key":{"shape":"RequestString"}, + "value":{"shape":"RequestString"} }, "Double":{"type":"double"}, + "ErrorString":{"type":"string"}, + "GetDimensionKeyDetailsRequest":{ + "type":"structure", + "required":[ + "ServiceType", + "Identifier", + "Group", + "GroupIdentifier" + ], + "members":{ + "ServiceType":{"shape":"ServiceType"}, + "Identifier":{"shape":"IdentifierString"}, + "Group":{"shape":"RequestString"}, + "GroupIdentifier":{"shape":"RequestString"}, + "RequestedDimensions":{"shape":"RequestedDimensionList"} + } + }, + "GetDimensionKeyDetailsResponse":{ + "type":"structure", + "members":{ + "Dimensions":{"shape":"DimensionKeyDetailList"} + } + }, "GetResourceMetricsRequest":{ "type":"structure", "required":[ @@ -131,13 +188,13 @@ ], "members":{ 
"ServiceType":{"shape":"ServiceType"}, - "Identifier":{"shape":"String"}, + "Identifier":{"shape":"RequestString"}, "MetricQueries":{"shape":"MetricQueryList"}, "StartTime":{"shape":"ISOTimestamp"}, "EndTime":{"shape":"ISOTimestamp"}, "PeriodInSeconds":{"shape":"Integer"}, "MaxResults":{"shape":"MaxResults"}, - "NextToken":{"shape":"String"} + "NextToken":{"shape":"NextToken"} } }, "GetResourceMetricsResponse":{ @@ -147,15 +204,21 @@ "AlignedEndTime":{"shape":"ISOTimestamp"}, "Identifier":{"shape":"String"}, "MetricList":{"shape":"MetricKeyDataPointsList"}, - "NextToken":{"shape":"String"} + "NextToken":{"shape":"NextToken"} } }, "ISOTimestamp":{"type":"timestamp"}, + "IdentifierString":{ + "type":"string", + "max":256, + "min":0, + "pattern":"^db-[a-zA-Z0-9-]*$" + }, "Integer":{"type":"integer"}, "InternalServiceError":{ "type":"structure", "members":{ - "Message":{"shape":"String"} + "Message":{"shape":"ErrorString"} }, "exception":true, "fault":true @@ -163,7 +226,7 @@ "InvalidArgumentException":{ "type":"structure", "members":{ - "Message":{"shape":"String"} + "Message":{"shape":"ErrorString"} }, "exception":true }, @@ -192,15 +255,15 @@ "type":"structure", "required":["Metric"], "members":{ - "Metric":{"shape":"String"}, + "Metric":{"shape":"RequestString"}, "GroupBy":{"shape":"DimensionGroup"}, "Filter":{"shape":"MetricQueryFilterMap"} } }, "MetricQueryFilterMap":{ "type":"map", - "key":{"shape":"String"}, - "value":{"shape":"String"} + "key":{"shape":"RequestString"}, + "value":{"shape":"RequestString"} }, "MetricQueryList":{ "type":"list", @@ -212,13 +275,37 @@ "type":"list", "member":{"shape":"Double"} }, + "NextToken":{ + "type":"string", + "max":8192, + "min":1, + "pattern":"[\\s\\S]*" + }, "NotAuthorizedException":{ "type":"structure", "members":{ - "Message":{"shape":"String"} + "Message":{"shape":"ErrorString"} }, "exception":true }, + "RequestString":{ + "type":"string", + "max":256, + "min":0, + "pattern":".*\\S.*" + }, + "RequestStringList":{ + 
"type":"list", + "member":{"shape":"RequestString"}, + "max":10, + "min":1 + }, + "RequestedDimensionList":{ + "type":"list", + "member":{"shape":"RequestString"}, + "max":10, + "min":1 + }, "ResponsePartitionKey":{ "type":"structure", "required":["Dimensions"], @@ -242,12 +329,11 @@ "type":"string", "enum":["RDS"] }, - "String":{"type":"string"}, - "StringList":{ - "type":"list", - "member":{"shape":"String"}, - "max":10, - "min":1 + "String":{ + "type":"string", + "max":256, + "min":0, + "pattern":".*\\S.*" } } } diff --git a/models/apis/pi/2018-02-27/docs-2.json b/models/apis/pi/2018-02-27/docs-2.json index 9327f30fafc..c822018e527 100644 --- a/models/apis/pi/2018-02-27/docs-2.json +++ b/models/apis/pi/2018-02-27/docs-2.json @@ -3,6 +3,7 @@ "service": "Amazon RDS Performance Insights

Amazon RDS Performance Insights enables you to monitor and explore different dimensions of database load based on data captured from a running DB instance. The guide provides detailed information about Performance Insights data types, parameters, and errors.

When Performance Insights is enabled, the Amazon RDS Performance Insights API provides visibility into the performance of your DB instance. Amazon CloudWatch provides the authoritative source for AWS service-vended monitoring metrics. Performance Insights offers a domain-specific view of DB load.

DB load is measured as Average Active Sessions. Performance Insights provides the data to API consumers as a two-dimensional time-series dataset. The time dimension provides DB load data for each time point in the queried time range. Each time point decomposes overall load in relation to the requested dimensions, measured at that time point. Examples include SQL, Wait event, User, and Host.

", "operations": { "DescribeDimensionKeys": "

For a specific time period, retrieve the top N dimension keys for a metric.

Each response element returns a maximum of 500 bytes. For larger elements, such as SQL statements, only the first 500 bytes are returned.

", + "GetDimensionKeyDetails": "

Get the attributes of the specified dimension group for a DB instance or data source. For example, if you specify a SQL ID, GetDimensionKeyDetails retrieves the full text of the dimension db.sql.statement associated with this ID. This operation is useful because GetResourceMetrics and DescribeDimensionKeys don't support retrieval of large SQL statement text.

", "GetResourceMetrics": "

Retrieve Performance Insights metrics for a set of data sources, over a time period. You can provide specific dimension groups and dimensions, and provide aggregation and filtering criteria for each group.

Each response element returns a maximum of 500 bytes. For larger elements, such as SQL statements, only the first 500 bytes are returned.

" }, "shapes": { @@ -28,6 +29,12 @@ "refs": { } }, + "DetailStatus": { + "base": null, + "refs": { + "DimensionKeyDetail$Status": "

The status of the dimension detail data. Possible values include the following:

" + } + }, "DimensionGroup": { "base": "

A logical grouping of Performance Insights metrics for a related subject area. For example, the db.sql dimension group consists of the following dimensions: db.sql.id, db.sql.db_id, db.sql.statement, and db.sql.tokenized_id.

Each response element returns a maximum of 500 bytes. For larger elements, such as SQL statements, only the first 500 bytes are returned.

", "refs": { @@ -48,6 +55,18 @@ "DescribeDimensionKeysResponse$Keys": "

The dimension keys that were requested.

" } }, + "DimensionKeyDetail": { + "base": "

An object that describes the details for a specified dimension.

", + "refs": { + "DimensionKeyDetailList$member": null + } + }, + "DimensionKeyDetailList": { + "base": null, + "refs": { + "GetDimensionKeyDetailsResponse$Dimensions": "

The details for the requested dimensions.

" + } + }, "DimensionMap": { "base": null, "refs": { @@ -64,6 +83,24 @@ "MetricValuesList$member": null } }, + "ErrorString": { + "base": null, + "refs": { + "InternalServiceError$Message": null, + "InvalidArgumentException$Message": null, + "NotAuthorizedException$Message": null + } + }, + "GetDimensionKeyDetailsRequest": { + "base": null, + "refs": { + } + }, + "GetDimensionKeyDetailsResponse": { + "base": null, + "refs": { + } + }, "GetResourceMetricsRequest": { "base": null, "refs": { @@ -88,6 +125,12 @@ "GetResourceMetricsResponse$AlignedEndTime": "

The end time for the returned metrics, after alignment to a granular boundary (as specified by PeriodInSeconds). AlignedEndTime will be greater than or equal to the value of the user-specified EndTime.

" } }, + "IdentifierString": { + "base": null, + "refs": { + "GetDimensionKeyDetailsRequest$Identifier": "

The ID for a data source from which to gather dimension data. This ID must be immutable and unique within an AWS Region. When a DB instance is the data source, specify its DbiResourceId value. For example, specify db-ABCDEFGHIJKLMNOPQRSTU1VW2X.

" + } + }, "Integer": { "base": null, "refs": { @@ -155,11 +198,50 @@ "DimensionKeyDescription$Partitions": "

If PartitionBy was specified, PartitionKeys contains the dimensions that were requested.

" } }, + "NextToken": { + "base": null, + "refs": { + "DescribeDimensionKeysRequest$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

", + "DescribeDimensionKeysResponse$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

", + "GetResourceMetricsRequest$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

", + "GetResourceMetricsResponse$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

" + } + }, "NotAuthorizedException": { "base": "

The user is not authorized to perform this request.

", "refs": { } }, + "RequestString": { + "base": null, + "refs": { + "DescribeDimensionKeysRequest$Identifier": "

An immutable, AWS Region-unique identifier for a data source. Performance Insights gathers metrics from this data source.

To use an Amazon RDS instance as a data source, you specify its DbiResourceId value. For example, specify db-FAIHNTYBKTGAUSUZQYPDS2GW4A.

", + "DescribeDimensionKeysRequest$Metric": "

The name of a Performance Insights metric to be measured.

Valid values for Metric are:

If the number of active sessions is less than an internal Performance Insights threshold, db.load.avg and db.sampledload.avg are the same value. If the number of active sessions is greater than the internal threshold, Performance Insights samples the active sessions, with db.load.avg showing the scaled values, db.sampledload.avg showing the raw values, and db.sampledload.avg less than db.load.avg. For most use cases, you can query db.load.avg only.

", + "DimensionGroup$Group": "

The name of the dimension group. Valid values are:

", + "DimensionMap$key": null, + "DimensionMap$value": null, + "GetDimensionKeyDetailsRequest$Group": "

The name of the dimension group. The only valid value is db.sql. Performance Insights searches the specified group for the dimension group ID.

", + "GetDimensionKeyDetailsRequest$GroupIdentifier": "

The ID of the dimension group from which to retrieve dimension details. For dimension group db.sql, the group ID is db.sql.id.

", + "GetResourceMetricsRequest$Identifier": "

An immutable, AWS Region-unique identifier for a data source. Performance Insights gathers metrics from this data source.

To use a DB instance as a data source, specify its DbiResourceId value. For example, specify db-FAIHNTYBKTGAUSUZQYPDS2GW4A.

", + "MetricQuery$Metric": "

The name of a Performance Insights metric to be measured.

Valid values for Metric are:

If the number of active sessions is less than an internal Performance Insights threshold, db.load.avg and db.sampledload.avg are the same value. If the number of active sessions is greater than the internal threshold, Performance Insights samples the active sessions, with db.load.avg showing the scaled values, db.sampledload.avg showing the raw values, and db.sampledload.avg less than db.load.avg. For most use cases, you can query db.load.avg only.

", + "MetricQueryFilterMap$key": null, + "MetricQueryFilterMap$value": null, + "RequestStringList$member": null, + "RequestedDimensionList$member": null + } + }, + "RequestStringList": { + "base": null, + "refs": { + "DimensionGroup$Dimensions": "

A list of specific dimensions from a dimension group. If this parameter is not present, then it signifies that all of the dimensions in the group were requested, or are present in the response.

Valid values for elements in the Dimensions array are:

" + } + }, + "RequestedDimensionList": { + "base": null, + "refs": { + "GetDimensionKeyDetailsRequest$RequestedDimensions": "

A list of dimensions to retrieve the detail data for within the given dimension group. For the dimension group db.sql, specify either the full dimension name db.sql.statement or the short dimension name statement. If you don't specify this parameter, Performance Insights returns all dimension data within the specified dimension group.

" + } + }, "ResponsePartitionKey": { "base": "

If PartitionBy was specified in a DescribeDimensionKeys request, the dimensions are returned in an array. Each element in the array specifies one dimension.

", "refs": { @@ -182,37 +264,17 @@ "base": null, "refs": { "DescribeDimensionKeysRequest$ServiceType": "

The AWS service for which Performance Insights will return metrics. The only valid value for ServiceType is RDS.

", + "GetDimensionKeyDetailsRequest$ServiceType": "

The AWS service for which Performance Insights returns data. The only valid value is RDS.

", "GetResourceMetricsRequest$ServiceType": "

The AWS service for which Performance Insights returns metrics. The only valid value for ServiceType is RDS.

" } }, "String": { "base": null, "refs": { - "DescribeDimensionKeysRequest$Identifier": "

An immutable, AWS Region-unique identifier for a data source. Performance Insights gathers metrics from this data source.

To use an Amazon RDS instance as a data source, you specify its DbiResourceId value. For example, specify db-FAIHNTYBKTGAUSUZQYPDS2GW4A

", - "DescribeDimensionKeysRequest$Metric": "

The name of a Performance Insights metric to be measured.

Valid values for Metric are:

If the number of active sessions is less than an internal Performance Insights threshold, db.load.avg and db.sampledload.avg are the same value. If the number of active sessions is greater than the internal threshold, Performance Insights samples the active sessions, with db.load.avg showing the scaled values, db.sampledload.avg showing the raw values, and db.sampledload.avg less than db.load.avg. For most use cases, you can query db.load.avg only.

", - "DescribeDimensionKeysRequest$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

", - "DescribeDimensionKeysResponse$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

", - "DimensionGroup$Group": "

The name of the dimension group. Valid values are:

", - "DimensionMap$key": null, - "DimensionMap$value": null, - "GetResourceMetricsRequest$Identifier": "

An immutable, AWS Region-unique identifier for a data source. Performance Insights gathers metrics from this data source.

To use a DB instance as a data source, specify its DbiResourceId value. For example, specify db-FAIHNTYBKTGAUSUZQYPDS2GW4A.

", - "GetResourceMetricsRequest$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

", + "DimensionKeyDetail$Value": "

The value of the dimension detail data. For the db.sql.statement dimension, this value is either the full or truncated SQL query, depending on the return status.

", + "DimensionKeyDetail$Dimension": "

The full name of the dimension. The full name includes the group name and key name. The only valid value is db.sql.statement.

", "GetResourceMetricsResponse$Identifier": "

An immutable, AWS Region-unique identifier for a data source. Performance Insights gathers metrics from this data source.

To use a DB instance as a data source, you specify its DbiResourceId value - for example: db-FAIHNTYBKTGAUSUZQYPDS2GW4A

", - "GetResourceMetricsResponse$NextToken": "

An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the token, up to the value specified by MaxRecords.

", - "InternalServiceError$Message": null, - "InvalidArgumentException$Message": null, - "MetricQuery$Metric": "

The name of a Performance Insights metric to be measured.

Valid values for Metric are:

If the number of active sessions is less than an internal Performance Insights threshold, db.load.avg and db.sampledload.avg are the same value. If the number of active sessions is greater than the internal threshold, Performance Insights samples the active sessions, with db.load.avg showing the scaled values, db.sampledload.avg showing the raw values, and db.sampledload.avg less than db.load.avg. For most use cases, you can query db.load.avg only.

", - "MetricQueryFilterMap$key": null, - "MetricQueryFilterMap$value": null, - "NotAuthorizedException$Message": null, - "ResponseResourceMetricKey$Metric": "

The name of a Performance Insights metric to be measured.

Valid values for Metric are:

If the number of active sessions is less than an internal Performance Insights threshold, db.load.avg and db.sampledload.avg are the same value. If the number of active sessions is greater than the internal threshold, Performance Insights samples the active sessions, with db.load.avg showing the scaled values, db.sampledload.avg showing the raw values, and db.sampledload.avg less than db.load.avg. For most use cases, you can query db.load.avg only.

", - "StringList$member": null - } - }, - "StringList": { - "base": null, - "refs": { - "DimensionGroup$Dimensions": "

A list of specific dimensions from a dimension group. If this parameter is not present, then it signifies that all of the dimensions in the group were requested, or are present in the response.

Valid values for elements in the Dimensions array are:

" + "ResponseResourceMetricKey$Metric": "

The name of a Performance Insights metric to be measured.

Valid values for Metric are:

If the number of active sessions is less than an internal Performance Insights threshold, db.load.avg and db.sampledload.avg are the same value. If the number of active sessions is greater than the internal threshold, Performance Insights samples the active sessions, with db.load.avg showing the scaled values, db.sampledload.avg showing the raw values, and db.sampledload.avg less than db.load.avg. For most use cases, you can query db.load.avg only.

" } } } diff --git a/models/apis/qldb/2019-01-02/docs-2.json b/models/apis/qldb/2019-01-02/docs-2.json index 4acf6f91747..be8aecef786 100644 --- a/models/apis/qldb/2019-01-02/docs-2.json +++ b/models/apis/qldb/2019-01-02/docs-2.json @@ -3,25 +3,25 @@ "service": "

The control plane for Amazon QLDB

", "operations": { "CancelJournalKinesisStream": "

Ends a given Amazon QLDB journal stream. Before a stream can be canceled, its current status must be ACTIVE.

You can't restart a stream after you cancel it. Canceled QLDB stream resources are subject to a 7-day retention period, so they are automatically deleted after this limit expires.

", - "CreateLedger": "

Creates a new ledger in your AWS account.

", - "DeleteLedger": "

Deletes a ledger and all of its contents. This action is irreversible.

If deletion protection is enabled, you must first disable it before you can delete the ledger using the QLDB API or the AWS Command Line Interface (AWS CLI). You can disable it by calling the UpdateLedger operation to set the flag to false. The QLDB console disables deletion protection for you when you use it to delete a ledger.

", - "DescribeJournalKinesisStream": "

Returns detailed information about a given Amazon QLDB journal stream. The output includes the Amazon Resource Name (ARN), stream name, current status, creation time, and the parameters of your original stream creation request.

", - "DescribeJournalS3Export": "

Returns information about a journal export job, including the ledger name, export ID, when it was created, current status, and its start and end time export parameters.

This action does not return any expired export jobs. For more information, see Export Job Expiration in the Amazon QLDB Developer Guide.

If the export job with the given ExportId doesn't exist, then throws ResourceNotFoundException.

If the ledger with the given Name doesn't exist, then throws ResourceNotFoundException.

", + "CreateLedger": "

Creates a new ledger in your AWS account in the current Region.

", + "DeleteLedger": "

Deletes a ledger and all of its contents. This action is irreversible.

If deletion protection is enabled, you must first disable it before you can delete the ledger. You can disable it by calling the UpdateLedger operation to set the flag to false.

", + "DescribeJournalKinesisStream": "

Returns detailed information about a given Amazon QLDB journal stream. The output includes the Amazon Resource Name (ARN), stream name, current status, creation time, and the parameters of the original stream creation request.

This action does not return any expired journal streams. For more information, see Expiration for terminal streams in the Amazon QLDB Developer Guide.

", + "DescribeJournalS3Export": "

Returns information about a journal export job, including the ledger name, export ID, creation time, current status, and the parameters of the original export creation request.

This action does not return any expired export jobs. For more information, see Export job expiration in the Amazon QLDB Developer Guide.

If the export job with the given ExportId doesn't exist, then the request throws ResourceNotFoundException.

If the ledger with the given Name doesn't exist, then the request throws ResourceNotFoundException.

", "DescribeLedger": "

Returns information about a ledger, including its state and when it was created.

", "ExportJournalToS3": "

Exports journal contents within a date and time range from a ledger into a specified Amazon Simple Storage Service (Amazon S3) bucket. The data is written as files in Amazon Ion format.

If the ledger with the given Name doesn't exist, then the request throws ResourceNotFoundException.

If the ledger with the given Name is in CREATING status, then the request throws ResourcePreconditionNotMetException.

You can initiate up to two concurrent journal export requests for each ledger. Beyond this limit, journal export requests throw LimitExceededException.

", "GetBlock": "

Returns a block object at a specified address in a journal. Also returns a proof of the specified block for verification if DigestTipAddress is provided.

For information about the data contents in a block, see Journal contents in the Amazon QLDB Developer Guide.

If the specified ledger doesn't exist or is in DELETING status, then the request throws ResourceNotFoundException.

If the specified ledger is in CREATING status, then the request throws ResourcePreconditionNotMetException.

If no block exists with the specified address, then the request throws InvalidParameterException.

", "GetDigest": "

Returns the digest of a ledger at the latest committed block in the journal. The response includes a 256-bit hash value and a block address.

", "GetRevision": "

Returns a revision data object for a specified document ID and block address. Also returns a proof of the specified revision for verification if DigestTipAddress is provided.

", - "ListJournalKinesisStreamsForLedger": "

Returns an array of all Amazon QLDB journal stream descriptors for a given ledger. The output of each stream descriptor includes the same details that are returned by DescribeJournalKinesisStream.

This action returns a maximum of MaxResults items. It is paginated so that you can retrieve all the items by calling ListJournalKinesisStreamsForLedger multiple times.

", - "ListJournalS3Exports": "

Returns an array of journal export job descriptions for all ledgers that are associated with the current AWS account and Region.

This action returns a maximum of MaxResults items, and is paginated so that you can retrieve all the items by calling ListJournalS3Exports multiple times.

This action does not return any expired export jobs. For more information, see Export Job Expiration in the Amazon QLDB Developer Guide.

", - "ListJournalS3ExportsForLedger": "

Returns an array of journal export job descriptions for a specified ledger.

This action returns a maximum of MaxResults items, and is paginated so that you can retrieve all the items by calling ListJournalS3ExportsForLedger multiple times.

This action does not return any expired export jobs. For more information, see Export Job Expiration in the Amazon QLDB Developer Guide.

", + "ListJournalKinesisStreamsForLedger": "

Returns an array of all Amazon QLDB journal stream descriptors for a given ledger. The output of each stream descriptor includes the same details that are returned by DescribeJournalKinesisStream.

This action does not return any expired journal streams. For more information, see Expiration for terminal streams in the Amazon QLDB Developer Guide.

This action returns a maximum of MaxResults items. It is paginated so that you can retrieve all the items by calling ListJournalKinesisStreamsForLedger multiple times.

", + "ListJournalS3Exports": "

Returns an array of journal export job descriptions for all ledgers that are associated with the current AWS account and Region.

This action returns a maximum of MaxResults items, and is paginated so that you can retrieve all the items by calling ListJournalS3Exports multiple times.

This action does not return any expired export jobs. For more information, see Export job expiration in the Amazon QLDB Developer Guide.

", + "ListJournalS3ExportsForLedger": "

Returns an array of journal export job descriptions for a specified ledger.

This action returns a maximum of MaxResults items, and is paginated so that you can retrieve all the items by calling ListJournalS3ExportsForLedger multiple times.

This action does not return any expired export jobs. For more information, see Export job expiration in the Amazon QLDB Developer Guide.

", "ListLedgers": "

Returns an array of ledger summaries that are associated with the current AWS account and Region.

This action returns a maximum of 100 items and is paginated so that you can retrieve all the items by calling ListLedgers multiple times.

", "ListTagsForResource": "

Returns all tags for a specified Amazon QLDB resource.

", "StreamJournalToKinesis": "

Creates a journal stream for a given Amazon QLDB ledger. The stream captures every document revision that is committed to the ledger's journal and delivers the data to a specified Amazon Kinesis Data Streams resource.

", "TagResource": "

Adds one or more tags to a specified Amazon QLDB resource.

A resource can have up to 50 tags. If you try to create more than 50 tags for a resource, your request fails and returns an error.

", "UntagResource": "

Removes one or more tags from a specified Amazon QLDB resource. You can specify up to 50 tag keys to remove.

", "UpdateLedger": "

Updates properties on a ledger.

", - "UpdateLedgerPermissionsMode": "

Updates the permissions mode of a ledger.

" + "UpdateLedgerPermissionsMode": "

Updates the permissions mode of a ledger.

Before you switch to the STANDARD permissions mode, you must first create all required IAM policies and table tags to avoid disruption to your users. To learn more, see Migrating to the standard permissions mode in the Amazon QLDB Developer Guide.

" }, "shapes": { "Arn": { @@ -33,12 +33,12 @@ "JournalKinesisStreamDescription$RoleArn": "

The Amazon Resource Name (ARN) of the IAM role that grants QLDB permissions for a journal stream to write data records to a Kinesis Data Streams resource.

", "JournalKinesisStreamDescription$Arn": "

The Amazon Resource Name (ARN) of the QLDB journal stream.

", "JournalS3ExportDescription$RoleArn": "

The Amazon Resource Name (ARN) of the IAM role that grants QLDB permissions for a journal export job to do the following:

", - "KinesisConfiguration$StreamArn": "

The Amazon Resource Name (ARN) of the Kinesis data stream resource.

", - "ListTagsForResourceRequest$ResourceArn": "

The Amazon Resource Name (ARN) for which you want to list the tags. For example:

arn:aws:qldb:us-east-1:123456789012:ledger/exampleLedger

", - "S3EncryptionConfiguration$KmsKeyArn": "

The Amazon Resource Name (ARN) for a symmetric customer master key (CMK) in AWS Key Management Service (AWS KMS). Amazon QLDB does not support asymmetric CMKs.

You must provide a KmsKeyArn if you specify SSE_KMS as the ObjectEncryptionType.

KmsKeyArn is not required if you specify SSE_S3 as the ObjectEncryptionType.

", + "KinesisConfiguration$StreamArn": "

The Amazon Resource Name (ARN) of the Kinesis Data Streams resource.

", + "ListTagsForResourceRequest$ResourceArn": "

The Amazon Resource Name (ARN) for which to list the tags. For example:

arn:aws:qldb:us-east-1:123456789012:ledger/exampleLedger

", + "S3EncryptionConfiguration$KmsKeyArn": "

The Amazon Resource Name (ARN) for a symmetric customer master key (CMK) in AWS Key Management Service (AWS KMS). Amazon S3 does not support asymmetric CMKs.

You must provide a KmsKeyArn if you specify SSE_KMS as the ObjectEncryptionType.

KmsKeyArn is not required if you specify SSE_S3 as the ObjectEncryptionType.

", "StreamJournalToKinesisRequest$RoleArn": "

The Amazon Resource Name (ARN) of the IAM role that grants QLDB permissions for a journal stream to write data records to a Kinesis Data Streams resource.

", "TagResourceRequest$ResourceArn": "

The Amazon Resource Name (ARN) to which you want to add the tags. For example:

arn:aws:qldb:us-east-1:123456789012:ledger/exampleLedger

", - "UntagResourceRequest$ResourceArn": "

The Amazon Resource Name (ARN) from which you want to remove the tags. For example:

arn:aws:qldb:us-east-1:123456789012:ledger/exampleLedger

", + "UntagResourceRequest$ResourceArn": "

The Amazon Resource Name (ARN) from which to remove the tags. For example:

arn:aws:qldb:us-east-1:123456789012:ledger/exampleLedger

", "UpdateLedgerPermissionsModeResponse$Arn": "

The Amazon Resource Name (ARN) for the ledger.

", "UpdateLedgerResponse$Arn": "

The Amazon Resource Name (ARN) for the ledger.

" } @@ -46,7 +46,7 @@ "Boolean": { "base": null, "refs": { - "KinesisConfiguration$AggregationEnabled": "

Enables QLDB to publish multiple data records in a single Kinesis Data Streams record. To learn more, see KPL Key Concepts in the Amazon Kinesis Data Streams Developer Guide.

" + "KinesisConfiguration$AggregationEnabled": "

Enables QLDB to publish multiple data records in a single Kinesis Data Streams record, increasing the number of records sent per API call.

This option is enabled by default. Record aggregation has important implications for processing records and requires de-aggregation in your stream consumer. To learn more, see KPL Key Concepts and Consumer De-aggregation in the Amazon Kinesis Data Streams Developer Guide.

" } }, "CancelJournalKinesisStreamRequest": { @@ -77,11 +77,11 @@ "DeletionProtection": { "base": null, "refs": { - "CreateLedgerRequest$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger using the QLDB API or the AWS Command Line Interface (AWS CLI). You can disable it by calling the UpdateLedger operation to set the flag to false. The QLDB console disables deletion protection for you when you use it to delete a ledger.

", - "CreateLedgerResponse$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger using the QLDB API or the AWS Command Line Interface (AWS CLI). You can disable it by calling the UpdateLedger operation to set the flag to false. The QLDB console disables deletion protection for you when you use it to delete a ledger.

", - "DescribeLedgerResponse$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger using the QLDB API or the AWS Command Line Interface (AWS CLI). You can disable it by calling the UpdateLedger operation to set the flag to false. The QLDB console disables deletion protection for you when you use it to delete a ledger.

", - "UpdateLedgerRequest$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger using the QLDB API or the AWS Command Line Interface (AWS CLI). You can disable it by calling the UpdateLedger operation to set the flag to false. The QLDB console disables deletion protection for you when you use it to delete a ledger.

", - "UpdateLedgerResponse$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger using the QLDB API or the AWS Command Line Interface (AWS CLI). You can disable it by calling the UpdateLedger operation to set the flag to false. The QLDB console disables deletion protection for you when you use it to delete a ledger.

" + "CreateLedgerRequest$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger. You can disable it by calling the UpdateLedger operation to set the flag to false.

", + "CreateLedgerResponse$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger. You can disable it by calling the UpdateLedger operation to set the flag to false.

", + "DescribeLedgerResponse$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger. You can disable it by calling the UpdateLedger operation to set the flag to false.

", + "UpdateLedgerRequest$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger. You can disable it by calling the UpdateLedger operation to set the flag to false.

", + "UpdateLedgerResponse$DeletionProtection": "

The flag that prevents a ledger from being deleted by any user. If not provided on ledger creation, this feature is enabled (true) by default.

If deletion protection is enabled, you must first disable it before you can delete the ledger. You can disable it by calling the UpdateLedger operation to set the flag to false.

" } }, "DescribeJournalKinesisStreamRequest": { @@ -195,7 +195,7 @@ } }, "JournalKinesisStreamDescription": { - "base": "

The information about an Amazon QLDB journal stream, including the Amazon Resource Name (ARN), stream name, creation time, current status, and the parameters of your original stream creation request.

", + "base": "

Information about an Amazon QLDB journal stream, including the Amazon Resource Name (ARN), stream name, creation time, current status, and the parameters of the original stream creation request.

", "refs": { "DescribeJournalKinesisStreamResponse$Stream": "

Information about the QLDB journal stream returned by a DescribeJournalS3Export request.

", "JournalKinesisStreamDescriptionList$member": null @@ -208,7 +208,7 @@ } }, "JournalS3ExportDescription": { - "base": "

The information about a journal export job, including the ledger name, export ID, when it was created, current status, and its start and end time export parameters.

", + "base": "

Information about a journal export job, including the ledger name, export ID, creation time, current status, and the parameters of the original export creation request.

", "refs": { "DescribeJournalS3ExportResponse$ExportDescription": "

Information about the journal export job returned by a DescribeJournalS3Export request.

", "JournalS3ExportList$member": null @@ -222,9 +222,9 @@ } }, "KinesisConfiguration": { - "base": "

The configuration settings of the Amazon Kinesis Data Streams destination for your Amazon QLDB journal stream.

", + "base": "

The configuration settings of the Amazon Kinesis Data Streams destination for an Amazon QLDB journal stream.

", "refs": { - "JournalKinesisStreamDescription$KinesisConfiguration": "

The configuration settings of the Amazon Kinesis Data Streams destination for your QLDB journal stream.

", + "JournalKinesisStreamDescription$KinesisConfiguration": "

The configuration settings of the Amazon Kinesis Data Streams destination for a QLDB journal stream.

", "StreamJournalToKinesisRequest$KinesisConfiguration": "

The configuration settings of the Kinesis Data Streams destination for your stream request.

" } }, @@ -362,10 +362,10 @@ "PermissionsMode": { "base": null, "refs": { - "CreateLedgerRequest$PermissionsMode": "

The permissions mode to assign to the ledger that you want to create. This parameter can have one of the following values:

We strongly recommend using the STANDARD permissions mode to maximize the security of your ledger data.

", + "CreateLedgerRequest$PermissionsMode": "

The permissions mode to assign to the ledger that you want to create. This parameter can have one of the following values:

We strongly recommend using the STANDARD permissions mode to maximize the security of your ledger data.

", "CreateLedgerResponse$PermissionsMode": "

The permissions mode of the ledger that you created.

", "DescribeLedgerResponse$PermissionsMode": "

The permissions mode of the ledger.

", - "UpdateLedgerPermissionsModeRequest$PermissionsMode": "

The permissions mode to assign to the ledger. This parameter can have one of the following values:

We strongly recommend using the STANDARD permissions mode to maximize the security of your ledger data.

", + "UpdateLedgerPermissionsModeRequest$PermissionsMode": "

The permissions mode to assign to the ledger. This parameter can have one of the following values:

We strongly recommend using the STANDARD permissions mode to maximize the security of your ledger data.

", "UpdateLedgerPermissionsModeResponse$PermissionsMode": "

The current permissions mode of the ledger.

" } }, @@ -472,7 +472,7 @@ "TagKeyList": { "base": null, "refs": { - "UntagResourceRequest$TagKeys": "

The list of tag keys that you want to remove.

" + "UntagResourceRequest$TagKeys": "

The list of tag keys to remove.

" } }, "TagResourceRequest": { @@ -505,8 +505,8 @@ "refs": { "CreateLedgerResponse$CreationDateTime": "

The date and time, in epoch time format, when the ledger was created. (Epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.)

", "DescribeLedgerResponse$CreationDateTime": "

The date and time, in epoch time format, when the ledger was created. (Epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.)

", - "ExportJournalToS3Request$InclusiveStartTime": "

The inclusive start date and time for the range of journal contents that you want to export.

The InclusiveStartTime must be in ISO 8601 date and time format and in Universal Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z

The InclusiveStartTime must be before ExclusiveEndTime.

If you provide an InclusiveStartTime that is before the ledger's CreationDateTime, Amazon QLDB defaults it to the ledger's CreationDateTime.

", - "ExportJournalToS3Request$ExclusiveEndTime": "

The exclusive end date and time for the range of journal contents that you want to export.

The ExclusiveEndTime must be in ISO 8601 date and time format and in Universal Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z

The ExclusiveEndTime must be less than or equal to the current UTC date and time.

", + "ExportJournalToS3Request$InclusiveStartTime": "

The inclusive start date and time for the range of journal contents to export.

The InclusiveStartTime must be in ISO 8601 date and time format and in Coordinated Universal Time (UTC). For example: 2019-06-13T21:36:34Z.

The InclusiveStartTime must be before ExclusiveEndTime.

If you provide an InclusiveStartTime that is before the ledger's CreationDateTime, Amazon QLDB defaults it to the ledger's CreationDateTime.

", + "ExportJournalToS3Request$ExclusiveEndTime": "

The exclusive end date and time for the range of journal contents to export.

The ExclusiveEndTime must be in ISO 8601 date and time format and in Coordinated Universal Time (UTC). For example: 2019-06-13T21:36:34Z.

The ExclusiveEndTime must be less than or equal to the current UTC date and time.

", "JournalKinesisStreamDescription$CreationTime": "

The date and time, in epoch time format, when the QLDB journal stream was created. (Epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.)

", "JournalKinesisStreamDescription$InclusiveStartTime": "

The inclusive start date and time from which to start streaming journal data.

", "JournalKinesisStreamDescription$ExclusiveEndTime": "

The exclusive date and time that specifies when the stream ends. If this parameter is blank, the stream runs indefinitely until you cancel it.

", @@ -514,23 +514,23 @@ "JournalS3ExportDescription$InclusiveStartTime": "

The inclusive start date and time for the range of journal contents that are specified in the original export request.

", "JournalS3ExportDescription$ExclusiveEndTime": "

The exclusive end date and time for the range of journal contents that are specified in the original export request.

", "LedgerSummary$CreationDateTime": "

The date and time, in epoch time format, when the ledger was created. (Epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.)

", - "StreamJournalToKinesisRequest$InclusiveStartTime": "

The inclusive start date and time from which to start streaming journal data. This parameter must be in ISO 8601 date and time format and in Universal Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z

The InclusiveStartTime cannot be in the future and must be before ExclusiveEndTime.

If you provide an InclusiveStartTime that is before the ledger's CreationDateTime, QLDB effectively defaults it to the ledger's CreationDateTime.

", - "StreamJournalToKinesisRequest$ExclusiveEndTime": "

The exclusive date and time that specifies when the stream ends. If you don't define this parameter, the stream runs indefinitely until you cancel it.

The ExclusiveEndTime must be in ISO 8601 date and time format and in Universal Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z

", + "StreamJournalToKinesisRequest$InclusiveStartTime": "

The inclusive start date and time from which to start streaming journal data. This parameter must be in ISO 8601 date and time format and in Universal Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z.

The InclusiveStartTime cannot be in the future and must be before ExclusiveEndTime.

If you provide an InclusiveStartTime that is before the ledger's CreationDateTime, QLDB effectively defaults it to the ledger's CreationDateTime.

", + "StreamJournalToKinesisRequest$ExclusiveEndTime": "

The exclusive date and time that specifies when the stream ends. If you don't define this parameter, the stream runs indefinitely until you cancel it.

The ExclusiveEndTime must be in ISO 8601 date and time format and in Universal Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z.

", "UpdateLedgerResponse$CreationDateTime": "

The date and time, in epoch time format, when the ledger was created. (Epoch time format is the number of seconds elapsed since 12:00:00 AM January 1, 1970 UTC.)

" } }, "UniqueId": { "base": null, "refs": { - "CancelJournalKinesisStreamRequest$StreamId": "

The unique ID that QLDB assigns to each QLDB journal stream.

", - "CancelJournalKinesisStreamResponse$StreamId": "

The unique ID that QLDB assigns to each QLDB journal stream.

", - "DescribeJournalKinesisStreamRequest$StreamId": "

The unique ID that QLDB assigns to each QLDB journal stream.

", - "DescribeJournalS3ExportRequest$ExportId": "

The unique ID of the journal export job that you want to describe.

", - "ExportJournalToS3Response$ExportId": "

The unique ID that QLDB assigns to each journal export job.

To describe your export request and check the status of the job, you can use ExportId to call DescribeJournalS3Export.

", - "GetRevisionRequest$DocumentId": "

The unique ID of the document to be verified.

", - "JournalKinesisStreamDescription$StreamId": "

The unique ID that QLDB assigns to each QLDB journal stream.

", - "JournalS3ExportDescription$ExportId": "

The unique ID of the journal export job.

", - "StreamJournalToKinesisResponse$StreamId": "

The unique ID that QLDB assigns to each QLDB journal stream.

" + "CancelJournalKinesisStreamRequest$StreamId": "

The UUID (represented in Base62-encoded text) of the QLDB journal stream to be canceled.

", + "CancelJournalKinesisStreamResponse$StreamId": "

The UUID (Base62-encoded text) of the canceled QLDB journal stream.

", + "DescribeJournalKinesisStreamRequest$StreamId": "

The UUID (represented in Base62-encoded text) of the QLDB journal stream to describe.

", + "DescribeJournalS3ExportRequest$ExportId": "

The UUID (represented in Base62-encoded text) of the journal export job to describe.

", + "ExportJournalToS3Response$ExportId": "

The UUID (represented in Base62-encoded text) that QLDB assigns to each journal export job.

To describe your export request and check the status of the job, you can use ExportId to call DescribeJournalS3Export.

", + "GetRevisionRequest$DocumentId": "

The UUID (represented in Base62-encoded text) of the document to be verified.

", + "JournalKinesisStreamDescription$StreamId": "

The UUID (represented in Base62-encoded text) of the QLDB journal stream.

", + "JournalS3ExportDescription$ExportId": "

The UUID (represented in Base62-encoded text) of the journal export job.

", + "StreamJournalToKinesisResponse$StreamId": "

The UUID (represented in Base62-encoded text) that QLDB assigns to each QLDB journal stream.

" } }, "UntagResourceRequest": { @@ -566,13 +566,13 @@ "ValueHolder": { "base": "

A structure that can contain a value in multiple encoding formats.

", "refs": { - "GetBlockRequest$BlockAddress": "

The location of the block that you want to request. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:14}

", - "GetBlockRequest$DigestTipAddress": "

The latest block location covered by the digest for which to request a proof. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:49}

", + "GetBlockRequest$BlockAddress": "

The location of the block that you want to request. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:14}.

", + "GetBlockRequest$DigestTipAddress": "

The latest block location covered by the digest for which to request a proof. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:49}.

", "GetBlockResponse$Block": "

The block data object in Amazon Ion format.

", "GetBlockResponse$Proof": "

The proof object in Amazon Ion format returned by a GetBlock request. A proof contains the list of hash values required to recalculate the specified digest using a Merkle tree, starting with the specified block.

", "GetDigestResponse$DigestTipAddress": "

The latest block location covered by the digest that you requested. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

", - "GetRevisionRequest$BlockAddress": "

The block location of the document revision to be verified. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:14}

", - "GetRevisionRequest$DigestTipAddress": "

The latest block location covered by the digest for which to request a proof. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:49}

", + "GetRevisionRequest$BlockAddress": "

The block location of the document revision to be verified. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:14}.

", + "GetRevisionRequest$DigestTipAddress": "

The latest block location covered by the digest for which to request a proof. An address is an Amazon Ion structure that has two fields: strandId and sequenceNo.

For example: {strandId:\"BlFTjlSXze9BIh1KOszcE3\",sequenceNo:49}.

", "GetRevisionResponse$Proof": "

The proof object in Amazon Ion format returned by a GetRevision request. A proof contains the list of hash values that are required to recalculate the specified digest using a Merkle tree, starting with the specified document revision.

", "GetRevisionResponse$Revision": "

The document revision data object in Amazon Ion format.

" } diff --git a/models/apis/rds/2014-10-31/docs-2.json b/models/apis/rds/2014-10-31/docs-2.json index 3f52930ec8e..ee823c4b62b 100644 --- a/models/apis/rds/2014-10-31/docs-2.json +++ b/models/apis/rds/2014-10-31/docs-2.json @@ -98,10 +98,10 @@ "ModifyCurrentDBClusterCapacity": "

Set the capacity of an Aurora Serverless DB cluster to a specific value.

Aurora Serverless scales seamlessly based on the workload on the DB cluster. In some cases, the capacity might not scale fast enough to meet a sudden change in workload, such as a large number of new transactions. Call ModifyCurrentDBClusterCapacity to set the capacity explicitly.

After this call sets the DB cluster capacity, Aurora Serverless can automatically scale the DB cluster based on the cooldown period for scaling up and the cooldown period for scaling down.

For more information about Aurora Serverless, see Using Amazon Aurora Serverless in the Amazon Aurora User Guide.

If you call ModifyCurrentDBClusterCapacity with the default TimeoutAction, connections that prevent Aurora Serverless from finding a scaling point might be dropped. For more information about scaling points, see Autoscaling for Aurora Serverless in the Amazon Aurora User Guide.

This action only applies to Aurora DB clusters.

", "ModifyDBCluster": "

Modify a setting for an Amazon Aurora DB cluster. You can change one or more database configuration parameters by specifying these parameters and the new values in the request. For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.

This action only applies to Aurora DB clusters.

", "ModifyDBClusterEndpoint": "

Modifies the properties of an endpoint in an Amazon Aurora DB cluster.

This action only applies to Aurora DB clusters.

", - "ModifyDBClusterParameterGroup": "

Modifies the parameters of a DB cluster parameter group. To modify more than one parameter, submit a list of the following: ParameterName, ParameterValue, and ApplyMethod. A maximum of 20 parameters can be modified in a single request.

For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.

Changes to dynamic parameters are applied immediately. Changes to static parameters require a reboot without failover to the DB cluster associated with the parameter group before the change can take effect.

After you create a DB cluster parameter group, you should wait at least 5 minutes before creating your first DB cluster that uses that DB cluster parameter group as the default parameter group. This allows Amazon RDS to fully complete the create action before the parameter group is used as the default for a new DB cluster. This is especially important for parameters that are critical when creating the default database for a DB cluster, such as the character set for the default database defined by the character_set_database parameter. You can use the Parameter Groups option of the Amazon RDS console or the DescribeDBClusterParameters action to verify that your DB cluster parameter group has been created or modified.

If the modified DB cluster parameter group is used by an Aurora Serverless cluster, Aurora applies the update immediately. The cluster restart might interrupt your workload. In that case, your application must reopen any connections and retry any transactions that were active when the parameter changes took effect.

This action only applies to Aurora DB clusters.

", + "ModifyDBClusterParameterGroup": "

Modifies the parameters of a DB cluster parameter group. To modify more than one parameter, submit a list of the following: ParameterName, ParameterValue, and ApplyMethod. A maximum of 20 parameters can be modified in a single request.

For more information on Amazon Aurora, see What Is Amazon Aurora? in the Amazon Aurora User Guide.

After you create a DB cluster parameter group, you should wait at least 5 minutes before creating your first DB cluster that uses that DB cluster parameter group as the default parameter group. This allows Amazon RDS to fully complete the create action before the parameter group is used as the default for a new DB cluster. This is especially important for parameters that are critical when creating the default database for a DB cluster, such as the character set for the default database defined by the character_set_database parameter. You can use the Parameter Groups option of the Amazon RDS console or the DescribeDBClusterParameters action to verify that your DB cluster parameter group has been created or modified.

If the modified DB cluster parameter group is used by an Aurora Serverless cluster, Aurora applies the update immediately. The cluster restart might interrupt your workload. In that case, your application must reopen any connections and retry any transactions that were active when the parameter changes took effect.

This action only applies to Aurora DB clusters.

", "ModifyDBClusterSnapshotAttribute": "

Adds an attribute and values to, or removes an attribute and values from, a manual DB cluster snapshot.

To share a manual DB cluster snapshot with other AWS accounts, specify restore as the AttributeName and use the ValuesToAdd parameter to add a list of IDs of the AWS accounts that are authorized to restore the manual DB cluster snapshot. Use the value all to make the manual DB cluster snapshot public, which means that it can be copied or restored by all AWS accounts.

Don't add the all value for any manual DB cluster snapshots that contain private information that you don't want available to all AWS accounts.

If a manual DB cluster snapshot is encrypted, it can be shared, but only by specifying a list of authorized AWS account IDs for the ValuesToAdd parameter. You can't use all as a value for that parameter in this case.

To view which AWS accounts have access to copy or restore a manual DB cluster snapshot, or whether a manual DB cluster snapshot is public or private, use the DescribeDBClusterSnapshotAttributes API action. The accounts are returned as values for the restore attribute.

This action only applies to Aurora DB clusters.

", "ModifyDBInstance": "

Modifies settings for a DB instance. You can change one or more database configuration parameters by specifying these parameters and the new values in the request. To learn what modifications you can make to your DB instance, call DescribeValidDBInstanceModifications before you call ModifyDBInstance.

", - "ModifyDBParameterGroup": "

Modifies the parameters of a DB parameter group. To modify more than one parameter, submit a list of the following: ParameterName, ParameterValue, and ApplyMethod. A maximum of 20 parameters can be modified in a single request.

Changes to dynamic parameters are applied immediately. Changes to static parameters require a reboot without failover to the DB instance associated with the parameter group before the change can take effect.

After you modify a DB parameter group, you should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group as the default parameter group. This allows Amazon RDS to fully complete the modify action before the parameter group is used as the default for a new DB instance. This is especially important for parameters that are critical when creating the default database for a DB instance, such as the character set for the default database defined by the character_set_database parameter. You can use the Parameter Groups option of the Amazon RDS console or the DescribeDBParameters command to verify that your DB parameter group has been created or modified.

", + "ModifyDBParameterGroup": "

Modifies the parameters of a DB parameter group. To modify more than one parameter, submit a list of the following: ParameterName, ParameterValue, and ApplyMethod. A maximum of 20 parameters can be modified in a single request.

After you modify a DB parameter group, you should wait at least 5 minutes before creating your first DB instance that uses that DB parameter group as the default parameter group. This allows Amazon RDS to fully complete the modify action before the parameter group is used as the default for a new DB instance. This is especially important for parameters that are critical when creating the default database for a DB instance, such as the character set for the default database defined by the character_set_database parameter. You can use the Parameter Groups option of the Amazon RDS console or the DescribeDBParameters command to verify that your DB parameter group has been created or modified.

", "ModifyDBProxy": "

Changes the settings for an existing DB proxy.

", "ModifyDBProxyEndpoint": "

Changes the settings for an existing DB proxy endpoint.

", "ModifyDBProxyTargetGroup": "

Modifies the properties of a DBProxyTargetGroup.

", @@ -3076,8 +3076,8 @@ "DBClusterParameterGroupDetails$Parameters": "

Provides a list of parameters for the DB cluster parameter group.

", "DBParameterGroupDetails$Parameters": "

A list of Parameter values.

", "EngineDefaults$Parameters": "

Contains a list of engine default parameters.

", - "ModifyDBClusterParameterGroupMessage$Parameters": "

A list of parameters in the DB cluster parameter group to modify.

", - "ModifyDBParameterGroupMessage$Parameters": "

An array of parameter names, values, and the apply method for the parameter update. At least one parameter name, value, and apply method must be supplied; later arguments are optional. A maximum of 20 parameters can be modified in a single request.

Valid Values (for the application method): immediate | pending-reboot

You can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters, and changes are applied when you reboot the DB instance without failover.

", + "ModifyDBClusterParameterGroupMessage$Parameters": "

A list of parameters in the DB cluster parameter group to modify.

Valid Values (for the application method): immediate | pending-reboot

You can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters.

When the application method is immediate, changes to dynamic parameters are applied immediately to the DB clusters associated with the parameter group. When the application method is pending-reboot, changes to dynamic and static parameters are applied after a reboot without failover to the DB clusters associated with the parameter group.

", + "ModifyDBParameterGroupMessage$Parameters": "

An array of parameter names, values, and the application methods for the parameter update. At least one parameter name, value, and application method must be supplied; later arguments are optional. A maximum of 20 parameters can be modified in a single request.

Valid Values (for the application method): immediate | pending-reboot

You can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters.

When the application method is immediate, changes to dynamic parameters are applied immediately to the DB instances associated with the parameter group. When the application method is pending-reboot, changes to dynamic and static parameters are applied after a reboot without failover to the DB instances associated with the parameter group.

", "ResetDBClusterParameterGroupMessage$Parameters": "

A list of parameter names in the DB cluster parameter group to reset to the default values. You can't use this parameter if the ResetAllParameters parameter is enabled.

", "ResetDBParameterGroupMessage$Parameters": "

To reset the entire DB parameter group, specify the DBParameterGroup name and ResetAllParameters parameters. To reset specific parameters, provide a list of the following: ParameterName and ApplyMethod. A maximum of 20 parameters can be modified in a single request.

MySQL

Valid Values (for Apply method): immediate | pending-reboot

You can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters, and changes are applied when DB instance reboots.

MariaDB

Valid Values (for Apply method): immediate | pending-reboot

You can use the immediate value with dynamic parameters only. You can use the pending-reboot value for both dynamic and static parameters, and changes are applied when DB instance reboots.

Oracle

Valid Values (for Apply method): pending-reboot

" } @@ -3701,7 +3701,7 @@ "CreateDBClusterMessage$Domain": "

The Active Directory directory ID to create the DB cluster in.

For Amazon Aurora DB clusters, Amazon RDS can use Kerberos Authentication to authenticate users that connect to the DB cluster. For more information, see Kerberos Authentication in the Amazon Aurora User Guide.

", "CreateDBClusterMessage$DomainIAMRoleName": "

Specify the name of the IAM role to be used when making API calls to the Directory Service.

", "CreateDBClusterParameterGroupMessage$DBClusterParameterGroupName": "

The name of the DB cluster parameter group.

Constraints:

This value is stored as a lowercase string.

", - "CreateDBClusterParameterGroupMessage$DBParameterGroupFamily": "

The DB cluster parameter group family name. A DB cluster parameter group can be associated with one and only one DB cluster parameter group family, and can be applied only to a DB cluster running a database engine and engine version compatible with that DB cluster parameter group family.

Aurora MySQL

Example: aurora5.6, aurora-mysql5.7

Aurora PostgreSQL

Example: aurora-postgresql9.6

", + "CreateDBClusterParameterGroupMessage$DBParameterGroupFamily": "

The DB cluster parameter group family name. A DB cluster parameter group can be associated with one and only one DB cluster parameter group family, and can be applied only to a DB cluster running a database engine and engine version compatible with that DB cluster parameter group family.

Aurora MySQL

Example: aurora5.6, aurora-mysql5.7

Aurora PostgreSQL

Example: aurora-postgresql9.6

To list all of the available parameter group families for a DB engine, use the following command:

aws rds describe-db-engine-versions --query \"DBEngineVersions[].DBParameterGroupFamily\" --engine <engine>

For example, to list all of the available parameter group families for the Aurora PostgreSQL DB engine, use the following command:

aws rds describe-db-engine-versions --query \"DBEngineVersions[].DBParameterGroupFamily\" --engine aurora-postgresql

The output contains duplicates.

The following are the valid DB engine values:

", "CreateDBClusterParameterGroupMessage$Description": "

The description for the DB cluster parameter group.

", "CreateDBClusterSnapshotMessage$DBClusterSnapshotIdentifier": "

The identifier of the DB cluster snapshot. This parameter is stored as a lowercase string.

Constraints:

Example: my-cluster1-snapshot1

", "CreateDBClusterSnapshotMessage$DBClusterIdentifier": "

The identifier of the DB cluster to create a snapshot for. This parameter isn't case-sensitive.

Constraints:

Example: my-cluster1

", @@ -3746,7 +3746,7 @@ "CreateDBInstanceReadReplicaMessage$Domain": "

The Active Directory directory ID to create the DB instance in. Currently, only MySQL, Microsoft SQL Server, Oracle, and PostgreSQL DB instances can be created in an Active Directory Domain.

For more information, see Kerberos Authentication in the Amazon RDS User Guide.

", "CreateDBInstanceReadReplicaMessage$DomainIAMRoleName": "

Specify the name of the IAM role to be used when making API calls to the Directory Service.

", "CreateDBParameterGroupMessage$DBParameterGroupName": "

The name of the DB parameter group.

Constraints:

This value is stored as a lowercase string.

", - "CreateDBParameterGroupMessage$DBParameterGroupFamily": "

The DB parameter group family name. A DB parameter group can be associated with one and only one DB parameter group family, and can be applied only to a DB instance running a database engine and engine version compatible with that DB parameter group family.

To list all of the available parameter group families, use the following command:

aws rds describe-db-engine-versions --query \"DBEngineVersions[].DBParameterGroupFamily\"

The output contains duplicates.

", + "CreateDBParameterGroupMessage$DBParameterGroupFamily": "

The DB parameter group family name. A DB parameter group can be associated with one and only one DB parameter group family, and can be applied only to a DB instance running a database engine and engine version compatible with that DB parameter group family.

To list all of the available parameter group families for a DB engine, use the following command:

aws rds describe-db-engine-versions --query \"DBEngineVersions[].DBParameterGroupFamily\" --engine <engine>

For example, to list all of the available parameter group families for the MySQL DB engine, use the following command:

aws rds describe-db-engine-versions --query \"DBEngineVersions[].DBParameterGroupFamily\" --engine mysql

The output contains duplicates.

The following are the valid DB engine values:

", "CreateDBParameterGroupMessage$Description": "

The description for the DB parameter group.

", "CreateDBProxyRequest$DBProxyName": "

The identifier for the proxy. This name must be unique for all proxies owned by your AWS account in the specified AWS Region. An identifier must begin with a letter and must contain only ASCII letters, digits, and hyphens; it can't end with a hyphen or contain two consecutive hyphens.

", "CreateDBProxyRequest$RoleArn": "

The Amazon Resource Name (ARN) of the IAM role that the proxy uses to access secrets in AWS Secrets Manager.

", @@ -3851,7 +3851,7 @@ "DBInstance$DBInstanceIdentifier": "

Contains a user-supplied database identifier. This identifier is the unique key that identifies a DB instance.

", "DBInstance$DBInstanceClass": "

Contains the name of the compute and memory capacity class of the DB instance.

", "DBInstance$Engine": "

The name of the database engine to be used for this DB instance.

", - "DBInstance$DBInstanceStatus": "

Specifies the current state of this database.

For information about DB instance statuses, see DB Instance Status in the Amazon RDS User Guide.

", + "DBInstance$DBInstanceStatus": "

Specifies the current state of this database.

For information about DB instance statuses, see Viewing DB instance status in the Amazon RDS User Guide.

", "DBInstance$MasterUsername": "

Contains the master username for the DB instance.

", "DBInstance$DBName": "

The meaning of this parameter differs according to the database engine you use.

MySQL, MariaDB, SQL Server, PostgreSQL

Contains the name of the initial database of this instance that was provided at create time, if one was specified when the DB instance was created. This same name is returned for the life of the DB instance.

Type: String

Oracle

Contains the Oracle System ID (SID) of the created DB instance. Not shown when the returned parameters do not apply to an Oracle DB instance.

", "DBInstance$PreferredBackupWindow": "

Specifies the daily time range during which automated backups are created if automated backups are enabled, as determined by the BackupRetentionPeriod.

", diff --git a/service/autoscaling/api.go b/service/autoscaling/api.go index da79cdfcca1..fc332608b77 100644 --- a/service/autoscaling/api.go +++ b/service/autoscaling/api.go @@ -10468,9 +10468,7 @@ type Ebs struct { // You must specify either a VolumeSize or a SnapshotId. SnapshotId *string `min:"1" type:"string"` - // The throughput to provision for a gp3 volume. - // - // Valid Range: Minimum value of 125. Maximum value of 1000. + // The throughput (MiBps) to provision for a gp3 volume. Throughput *int64 `min:"125" type:"integer"` // The volume size, in GiBs. The following are the supported volumes sizes for diff --git a/service/cloudtrail/api.go b/service/cloudtrail/api.go index b6d64a4eeda..8211c92999a 100644 --- a/service/cloudtrail/api.go +++ b/service/cloudtrail/api.go @@ -255,9 +255,10 @@ func (c *CloudTrail) CreateTrailRequest(input *CreateTrailInput) (req *request.R // valid. // // * KmsKeyNotFoundException -// This exception is thrown when the KMS key does not exist, when the S3 bucket -// and the KMS key are not in the same region, or when the KMS key associated -// with the SNS topic either does not exist or is not in the same region. +// This exception is thrown when the AWS KMS key does not exist, when the S3 +// bucket and the AWS KMS key are not in the same region, or when the AWS KMS +// key associated with the SNS topic either does not exist or is not in the +// same region. // // * KmsKeyDisabledException // This exception is no longer in use. @@ -618,8 +619,8 @@ func (c *CloudTrail) GetEventSelectorsRequest(input *GetEventSelectorsInput) (re // // * If your event selector includes management events. // -// * If your event selector includes data events, the Amazon S3 objects or -// AWS Lambda functions that you are logging for data events. +// * If your event selector includes data events, the resources on which +// you are logging data events. 
// // For more information, see Logging Data and Management Events for Trails (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html) // in the AWS CloudTrail User Guide. @@ -2482,9 +2483,10 @@ func (c *CloudTrail) UpdateTrailRequest(input *UpdateTrailInput) (req *request.R // other than the region in which the trail was created. // // * KmsKeyNotFoundException -// This exception is thrown when the KMS key does not exist, when the S3 bucket -// and the KMS key are not in the same region, or when the KMS key associated -// with the SNS topic either does not exist or is not in the same region. +// This exception is thrown when the AWS KMS key does not exist, when the S3 +// bucket and the AWS KMS key are not in the same region, or when the AWS KMS +// key associated with the SNS topic either does not exist or is not in the +// same region. // // * KmsKeyDisabledException // This exception is no longer in use. @@ -2871,20 +2873,30 @@ type AdvancedFieldSelector struct { // // * resources.type - This field is required. resources.type can only use // the Equals operator, and the value can be one of the following: AWS::S3::Object, - // AWS::Lambda::Function, or AWS::S3Outposts::Object. You can have only one - // resources.type field per selector. To log data events on more than one - // resource type, add another selector. + // AWS::Lambda::Function, AWS::DynamoDB::Table, AWS::S3Outposts::Object, + // AWS::ManagedBlockchain::Node, or AWS::S3ObjectLambda::AccessPoint. You + // can have only one resources.type field per selector. To log data events + // on more than one resource type, add another selector. // // * resources.ARN - You can use any operator with resources.ARN, but if // you use Equals or NotEquals, the value must exactly match the ARN of a // valid resource of the type you've specified in the template as the value // of resources.type. 
For example, if resources.type equals AWS::S3::Object, - // the ARN must be in one of the following formats. The trailing slash is - // intentional; do not exclude it. arn:partition:s3:::bucket_name/ arn:partition:s3:::bucket_name/object_or_file_name/ + // the ARN must be in one of the following formats. To log all data events + // for all objects in a specific S3 bucket, use the StartsWith operator, + // and include only the bucket ARN as the matching value. The trailing slash + // is intentional; do not exclude it. arn:partition:s3:::bucket_name/ arn:partition:s3:::bucket_name/object_or_file_name/ // When resources.type equals AWS::Lambda::Function, and the operator is // set to Equals or NotEquals, the ARN must be in the following format: arn:partition:lambda:region:account_ID:function:function_name + // When resources.type equals AWS::DynamoDB::Table, and the operator is set + // to Equals or NotEquals, the ARN must be in the following format: arn:partition:dynamodb:region:account_ID:table:table_name // When resources.type equals AWS::S3Outposts::Object, and the operator is // set to Equals or NotEquals, the ARN must be in the following format: arn:partition:s3-outposts:region:>account_ID:object_path + // When resources.type equals AWS::ManagedBlockchain::Node, and the operator + // is set to Equals or NotEquals, the ARN must be in the following format: + // arn:partition:managedblockchain:region:account_ID:nodes/node_ID When resources.type + // equals AWS::S3ObjectLambda::AccessPoint, and the operator is set to Equals + // or NotEquals, the ARN must be in the following format: arn:partition:s3-object-lambda:region:account_ID:accesspoint/access_point_name // // Field is a required field Field *string `min:"1" type:"string" required:"true"` @@ -3517,11 +3529,11 @@ func (s *CreateTrailOutput) SetTrailARN(v string) *CreateTrailOutput { return s } -// The Amazon S3 buckets or AWS Lambda functions that you specify in your event -// selectors for your trail to log 
data events. Data events provide information -// about the resource operations performed on or within a resource itself. These -// are also known as data plane operations. You can specify up to 250 data resources -// for a trail. +// The Amazon S3 buckets, AWS Lambda functions, or Amazon DynamoDB tables that +// you specify in your event selectors for your trail to log data events. Data +// events provide information about the resource operations performed on or +// within a resource itself. These are also known as data plane operations. +// You can specify up to 250 data resources for a trail. // // The total number of allowed data resources is 250. This number can be distributed // between 1 and 5 event selectors, but the total cannot exceed 250 across all @@ -3568,11 +3580,12 @@ func (s *CreateTrailOutput) SetTrailARN(v string) *CreateTrailOutput { type DataResource struct { _ struct{} `type:"structure"` - // The resource type in which you want to log data events. You can specify AWS::S3::Object - // or AWS::Lambda::Function resources. + // The resource type in which you want to log data events. You can specify AWS::S3::Object, + // AWS::Lambda::Function, or AWS::DynamoDB::Table resources. // - // The AWS::S3Outposts::Object resource type is not valid in basic event selectors. - // To log data events on this resource type, use advanced event selectors. + // The AWS::S3Outposts::Object, AWS::ManagedBlockchain::Node, and AWS::S3ObjectLambda::AccessPoint + // resource types are not valid in basic event selectors. To log data events + // on these resource types, use advanced event selectors. Type *string `type:"string"` // An array of Amazon Resource Name (ARN) strings or partial ARN strings for @@ -3592,16 +3605,19 @@ type DataResource struct { // prefix such as arn:aws:s3:::bucket-1/example-images. The trail logs data // events for objects in this S3 bucket that match the prefix. 
// - // * To log data events for all functions in your AWS account, specify the - // prefix as arn:aws:lambda. This will also enable logging of Invoke activity - // performed by any user or role in your AWS account, even if that activity - // is performed on a function that belongs to another AWS account. + // * To log data events for all Lambda functions in your AWS account, specify + // the prefix as arn:aws:lambda. This will also enable logging of Invoke + // activity performed by any user or role in your AWS account, even if that + // activity is performed on a function that belongs to another AWS account. // // * To log data events for a specific Lambda function, specify the function // ARN. Lambda function ARNs are exact. For example, if you specify a function // ARN arn:aws:lambda:us-west-2:111111111111:function:helloworld, data events // will only be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld. // They will not be logged for arn:aws:lambda:us-west-2:111111111111:function:helloworld2. + // + // * To log data events for all DynamoDB tables in your AWS account, specify + // the prefix as arn:aws:dynamodb. Values []*string `type:"list"` } @@ -3883,10 +3899,10 @@ type EventSelector struct { _ struct{} `type:"structure"` // CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda - // functions. You can specify up to 250 resources for an individual event selector, - // but the total number of data resources cannot exceed 250 across all event - // selectors in a trail. This limit does not apply if you configure resource - // logging for all data events. + // functions with basic event selectors. You can specify up to 250 resources + // for an individual event selector, but the total number of data resources + // cannot exceed 250 across all event selectors in a trail. This limit does + // not apply if you configure resource logging for all data events. 
// // For more information, see Data Events (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-management-and-data-events-with-cloudtrail.html#logging-data-events) // and Limits in AWS CloudTrail (https://docs.aws.amazon.com/awscloudtrail/latest/userguide/WhatIsCloudTrail-Limits.html) @@ -5919,9 +5935,10 @@ func (s *KmsKeyDisabledException) RequestID() string { return s.RespMetadata.RequestID } -// This exception is thrown when the KMS key does not exist, when the S3 bucket -// and the KMS key are not in the same region, or when the KMS key associated -// with the SNS topic either does not exist or is not in the same region. +// This exception is thrown when the AWS KMS key does not exist, when the S3 +// bucket and the AWS KMS key are not in the same region, or when the AWS KMS +// key associated with the SNS topic either does not exist or is not in the +// same region. type KmsKeyNotFoundException struct { _ struct{} `type:"structure"` RespMetadata protocol.ResponseMetadata `json:"-" xml:"-"` diff --git a/service/cloudtrail/errors.go b/service/cloudtrail/errors.go index e66ef96b023..49f5585ff74 100644 --- a/service/cloudtrail/errors.go +++ b/service/cloudtrail/errors.go @@ -252,9 +252,10 @@ const ( // ErrCodeKmsKeyNotFoundException for service response error code // "KmsKeyNotFoundException". // - // This exception is thrown when the KMS key does not exist, when the S3 bucket - // and the KMS key are not in the same region, or when the KMS key associated - // with the SNS topic either does not exist or is not in the same region. + // This exception is thrown when the AWS KMS key does not exist, when the S3 + // bucket and the AWS KMS key are not in the same region, or when the AWS KMS + // key associated with the SNS topic either does not exist or is not in the + // same region. 
ErrCodeKmsKeyNotFoundException = "KmsKeyNotFoundException" // ErrCodeMaximumNumberOfTrailsExceededException for service response error code diff --git a/service/medialive/api.go b/service/medialive/api.go index 6ab4f26fc94..b9bde73b5cd 100644 --- a/service/medialive/api.go +++ b/service/medialive/api.go @@ -14206,8 +14206,14 @@ func (s *H264FilterSettings) SetTemporalFilterSettings(v *TemporalFilterSettings type H264Settings struct { _ struct{} `type:"structure"` - // Adaptive quantization. Allows intra-frame quantizers to vary to improve visual - // quality. + // Enables or disables adaptive quantization, which is a technique MediaLive + // can apply to video on a frame-by-frame basis to produce more compression + // without losing quality. There are three types of adaptive quantization: flicker, + // spatial, and temporal. Set the field in one of these ways: Set to Auto. Recommended. + // For each type of AQ, MediaLive will determine if AQ is needed, and if so, + // the appropriate strength. Set a strength (a value other than Auto or Disable). + // This strength will apply to any of the AQ fields that you choose to enable. + // Set to Disabled to disable all types of adaptive quantization. AdaptiveQuantization *string `locationName:"adaptiveQuantization" type:"string" enum:"H264AdaptiveQuantization"` // Indicates that AFD values will be written into the output stream. If afdSignaling @@ -14244,8 +14250,16 @@ type H264Settings struct { // Only valid when afdSignaling is set to 'Fixed'. FixedAfd *string `locationName:"fixedAfd" type:"string" enum:"FixedAfd"` - // If set to enabled, adjust quantization within each frame to reduce flicker - // or 'pop' on I-frames. + // Flicker AQ makes adjustments within each frame to reduce flicker or 'pop' + // on I-frames. The value to enter in this field depends on the value in the + // Adaptive quantization field: If you have set the Adaptive quantization field + // to Auto, MediaLive ignores any value in this field. 
MediaLive will determine + // if flicker AQ is appropriate and will apply the appropriate strength. If + // you have set the Adaptive quantization field to a strength, you can set this + // field to Enabled or Disabled. Enabled: MediaLive will apply flicker AQ using + // the specified strength. Disabled: MediaLive won't apply flicker AQ. If you + // have set the Adaptive quantization to Disabled, MediaLive ignores any value + // in this field and doesn't apply flicker AQ. FlickerAq *string `locationName:"flickerAq" type:"string" enum:"H264FlickerAq"` // This setting applies only when scan type is "interlaced." It controls whether @@ -14342,11 +14356,16 @@ type H264Settings struct { QualityLevel *string `locationName:"qualityLevel" type:"string" enum:"H264QualityLevel"` // Controls the target quality for the video encode. Applies only when the rate - // control mode is QVBR. Set values for the QVBR quality level field and Max - // bitrate field that suit your most important viewing devices. Recommended - // values are:- Primary screen: Quality level: 8 to 10. Max bitrate: 4M- PC - // or tablet: Quality level: 7. Max bitrate: 1.5M to 3M- Smartphone: Quality - // level: 6. Max bitrate: 1M to 1.5M + // control mode is QVBR. You can set a target quality or you can let MediaLive + // determine the best quality. To set a target quality, enter values in the + // QVBR quality level field and the Max bitrate field. Enter values that suit + // your most important viewing devices. Recommended values are:- Primary screen: + // Quality level: 8 to 10. Max bitrate: 4M- PC or tablet: Quality level: 7. + // Max bitrate: 1.5M to 3M- Smartphone: Quality level: 6. Max bitrate: 1M to + // 1.5MTo let MediaLive decide, leave the QVBR quality level field empty, and + // in Max bitrate enter the maximum rate you want in the video. 
For more information, + // see the section called "Video - rate control mode" in the MediaLive user + // guide QvbrQualityLevel *int64 `locationName:"qvbrQualityLevel" min:"1" type:"integer"` // Rate control mode.QVBR: Quality will match the specified quality level except @@ -14379,8 +14398,16 @@ type H264Settings struct { // in the encoded image. If not set to zero, must be greater than 15. Softness *int64 `locationName:"softness" type:"integer"` - // If set to enabled, adjust quantization within each frame based on spatial - // variation of content complexity. + // Spatial AQ makes adjustments within each frame based on spatial variation + // of content complexity. The value to enter in this field depends on the value + // in the Adaptive quantization field: If you have set the Adaptive quantization + // field to Auto, MediaLive ignores any value in this field. MediaLive will + // determine if spatial AQ is appropriate and will apply the appropriate strength. + // If you have set the Adaptive quantization field to a strength, you can set + // this field to Enabled or Disabled. Enabled: MediaLive will apply spatial + // AQ using the specified strength. Disabled: MediaLive won't apply spatial + // AQ. If you have set the Adaptive quantization to Disabled, MediaLive ignores + // any value in this field and doesn't apply spatial AQ. SpatialAq *string `locationName:"spatialAq" type:"string" enum:"H264SpatialAq"` // If set to fixed, use gopNumBFrames B-frames per sub-GOP. If set to dynamic, @@ -14390,8 +14417,16 @@ type H264Settings struct { // Produces a bitstream compliant with SMPTE RP-2027. Syntax *string `locationName:"syntax" type:"string" enum:"H264Syntax"` - // If set to enabled, adjust quantization within each frame based on temporal - // variation of content complexity. + // Temporal makes adjustments within each frame based on temporal variation + // of content complexity. 
The value to enter in this field depends on the value + // in the Adaptive quantization field: If you have set the Adaptive quantization + // field to Auto, MediaLive ignores any value in this field. MediaLive will + // determine if temporal AQ is appropriate and will apply the appropriate strength. + // If you have set the Adaptive quantization field to a strength, you can set + // this field to Enabled or Disabled. Enabled: MediaLive will apply temporal + // AQ using the specified strength. Disabled: MediaLive won't apply temporal + // AQ. If you have set the Adaptive quantization to Disabled, MediaLive ignores + // any value in this field and doesn't apply temporal AQ. TemporalAq *string `locationName:"temporalAq" type:"string" enum:"H264TemporalAq"` // Determines how timecodes should be inserted into the video elementary stream.- @@ -29001,6 +29036,9 @@ func GlobalConfigurationOutputTimingSource_Values() []string { // H264 Adaptive Quantization const ( + // H264AdaptiveQuantizationAuto is a H264AdaptiveQuantization enum value + H264AdaptiveQuantizationAuto = "AUTO" + // H264AdaptiveQuantizationHigh is a H264AdaptiveQuantization enum value H264AdaptiveQuantizationHigh = "HIGH" @@ -29023,6 +29061,7 @@ const ( // H264AdaptiveQuantization_Values returns all elements of the H264AdaptiveQuantization enum func H264AdaptiveQuantization_Values() []string { return []string{ + H264AdaptiveQuantizationAuto, H264AdaptiveQuantizationHigh, H264AdaptiveQuantizationHigher, H264AdaptiveQuantizationLow, @@ -29462,6 +29501,9 @@ func H264TimecodeInsertionBehavior_Values() []string { // H265 Adaptive Quantization const ( + // H265AdaptiveQuantizationAuto is a H265AdaptiveQuantization enum value + H265AdaptiveQuantizationAuto = "AUTO" + // H265AdaptiveQuantizationHigh is a H265AdaptiveQuantization enum value H265AdaptiveQuantizationHigh = "HIGH" @@ -29484,6 +29526,7 @@ const ( // H265AdaptiveQuantization_Values returns all elements of the H265AdaptiveQuantization enum func 
H265AdaptiveQuantization_Values() []string { return []string{ + H265AdaptiveQuantizationAuto, H265AdaptiveQuantizationHigh, H265AdaptiveQuantizationHigher, H265AdaptiveQuantizationLow, diff --git a/service/pi/api.go b/service/pi/api.go index 93326093946..6c66a334e55 100644 --- a/service/pi/api.go +++ b/service/pi/api.go @@ -100,6 +100,95 @@ func (c *PI) DescribeDimensionKeysWithContext(ctx aws.Context, input *DescribeDi return out, req.Send() } +const opGetDimensionKeyDetails = "GetDimensionKeyDetails" + +// GetDimensionKeyDetailsRequest generates a "aws/request.Request" representing the +// client's request for the GetDimensionKeyDetails operation. The "output" return +// value will be populated with the request's response once the request completes +// successfully. +// +// Use "Send" method on the returned Request to send the API call to the service. +// the "output" return value is not valid until after Send returns without error. +// +// See GetDimensionKeyDetails for more information on using the GetDimensionKeyDetails +// API call, and error handling. +// +// This method is useful when you want to inject custom logic or configuration +// into the SDK's request lifecycle. Such as custom headers, or retry logic. +// +// +// // Example sending a request using the GetDimensionKeyDetailsRequest method. 
+// req, resp := client.GetDimensionKeyDetailsRequest(params) +// +// err := req.Send() +// if err == nil { // resp is now filled +// fmt.Println(resp) +// } +// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pi-2018-02-27/GetDimensionKeyDetails +func (c *PI) GetDimensionKeyDetailsRequest(input *GetDimensionKeyDetailsInput) (req *request.Request, output *GetDimensionKeyDetailsOutput) { + op := &request.Operation{ + Name: opGetDimensionKeyDetails, + HTTPMethod: "POST", + HTTPPath: "/", + } + + if input == nil { + input = &GetDimensionKeyDetailsInput{} + } + + output = &GetDimensionKeyDetailsOutput{} + req = c.newRequest(op, input, output) + return +} + +// GetDimensionKeyDetails API operation for AWS Performance Insights. +// +// Get the attributes of the specified dimension group for a DB instance or +// data source. For example, if you specify a SQL ID, GetDimensionKeyDetails +// retrieves the full text of the dimension db.sql.statement associated with +// this ID. This operation is useful because GetResourceMetrics and DescribeDimensionKeys +// don't support retrieval of large SQL statement text. +// +// Returns awserr.Error for service API and SDK errors. Use runtime type assertions +// with awserr.Error's Code and Message methods to get detailed information about +// the error. +// +// See the AWS API reference guide for AWS Performance Insights's +// API operation GetDimensionKeyDetails for usage and error information. +// +// Returned Error Types: +// * InvalidArgumentException +// One of the arguments provided is invalid for this request. +// +// * InternalServiceError +// The request failed due to an unknown error. +// +// * NotAuthorizedException +// The user is not authorized to perform this request. 
+// +// See also, https://docs.aws.amazon.com/goto/WebAPI/pi-2018-02-27/GetDimensionKeyDetails +func (c *PI) GetDimensionKeyDetails(input *GetDimensionKeyDetailsInput) (*GetDimensionKeyDetailsOutput, error) { + req, out := c.GetDimensionKeyDetailsRequest(input) + return out, req.Send() +} + +// GetDimensionKeyDetailsWithContext is the same as GetDimensionKeyDetails with the addition of +// the ability to pass a context and additional request options. +// +// See GetDimensionKeyDetails for details on how to use this API operation. +// +// The context must be non-nil and will be used for request cancellation. If +// the context is nil a panic will occur. In the future the SDK may create +// sub-contexts for http.Requests. See https://golang.org/pkg/context/ +// for more information on using Contexts. +func (c *PI) GetDimensionKeyDetailsWithContext(ctx aws.Context, input *GetDimensionKeyDetailsInput, opts ...request.Option) (*GetDimensionKeyDetailsOutput, error) { + req, out := c.GetDimensionKeyDetailsRequest(input) + req.SetContext(ctx) + req.ApplyOptions(opts...) + return out, req.Send() +} + const opGetResourceMetrics = "GetResourceMetrics" // GetResourceMetricsRequest generates a "aws/request.Request" representing the @@ -294,7 +383,7 @@ type DescribeDimensionKeysInput struct { // An optional pagination token provided by a previous request. If this parameter // is specified, the response includes only records beyond the token, up to // the value specified by MaxRecords. - NextToken *string `type:"string"` + NextToken *string `min:"1" type:"string"` // For each dimension specified in GroupBy, specify a secondary dimension to // further subdivide the partition keys in the response. 
@@ -361,6 +450,9 @@ func (s *DescribeDimensionKeysInput) Validate() error { if s.Metric == nil { invalidParams.Add(request.NewErrParamRequired("Metric")) } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } if s.ServiceType == nil { invalidParams.Add(request.NewErrParamRequired("ServiceType")) } @@ -469,7 +561,7 @@ type DescribeDimensionKeysOutput struct { // An optional pagination token provided by a previous request. If this parameter // is specified, the response includes only records beyond the token, up to // the value specified by MaxRecords. - NextToken *string `type:"string"` + NextToken *string `min:"1" type:"string"` // If PartitionBy was present in the request, PartitionKeys contains the breakdown // of dimension keys by the specified partitions. @@ -699,6 +791,184 @@ func (s *DimensionKeyDescription) SetTotal(v float64) *DimensionKeyDescription { return s } +// An object that describes the details for a specified dimension. +type DimensionKeyDetail struct { + _ struct{} `type:"structure"` + + // The full name of the dimension. The full name includes the group name and + // key name. The only valid value is db.sql.statement. + Dimension *string `type:"string"` + + // The status of the dimension detail data. Possible values include the following: + // + // * AVAILABLE - The dimension detail data is ready to be retrieved. + // + // * PROCESSING - The dimension detail data isn't ready to be retrieved because + // more processing time is required. If the requested detail data for db.sql.statement + // has the status PROCESSING, Performance Insights returns the truncated + // query. + // + // * UNAVAILABLE - The dimension detail data could not be collected successfully. + Status *string `type:"string" enum:"DetailStatus"` + + // The value of the dimension detail data. 
For the db.sql.statement dimension, + // this value is either the full or truncated SQL query, depending on the return + // status. + Value *string `type:"string"` +} + +// String returns the string representation +func (s DimensionKeyDetail) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s DimensionKeyDetail) GoString() string { + return s.String() +} + +// SetDimension sets the Dimension field's value. +func (s *DimensionKeyDetail) SetDimension(v string) *DimensionKeyDetail { + s.Dimension = &v + return s +} + +// SetStatus sets the Status field's value. +func (s *DimensionKeyDetail) SetStatus(v string) *DimensionKeyDetail { + s.Status = &v + return s +} + +// SetValue sets the Value field's value. +func (s *DimensionKeyDetail) SetValue(v string) *DimensionKeyDetail { + s.Value = &v + return s +} + +type GetDimensionKeyDetailsInput struct { + _ struct{} `type:"structure"` + + // The name of the dimension group. The only valid value is db.sql. Performance + // Insights searches the specified group for the dimension group ID. + // + // Group is a required field + Group *string `type:"string" required:"true"` + + // The ID of the dimension group from which to retrieve dimension details. For + // dimension group db.sql, the group ID is db.sql.id. + // + // GroupIdentifier is a required field + GroupIdentifier *string `type:"string" required:"true"` + + // The ID for a data source from which to gather dimension data. This ID must + // be immutable and unique within an AWS Region. When a DB instance is the data + // source, specify its DbiResourceId value. For example, specify db-ABCDEFGHIJKLMNOPQRSTU1VW2X. + // + // Identifier is a required field + Identifier *string `type:"string" required:"true"` + + // A list of dimensions to retrieve the detail data for within the given dimension + // group. 
For the dimension group db.sql, specify either the full dimension + // name db.sql.statement or the short dimension name statement. If you don't + // specify this parameter, Performance Insights returns all dimension data within + // the specified dimension group. + RequestedDimensions []*string `min:"1" type:"list"` + + // The AWS service for which Performance Insights returns data. The only valid + // value is RDS. + // + // ServiceType is a required field + ServiceType *string `type:"string" required:"true" enum:"ServiceType"` +} + +// String returns the string representation +func (s GetDimensionKeyDetailsInput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDimensionKeyDetailsInput) GoString() string { + return s.String() +} + +// Validate inspects the fields of the type to determine if they are valid. +func (s *GetDimensionKeyDetailsInput) Validate() error { + invalidParams := request.ErrInvalidParams{Context: "GetDimensionKeyDetailsInput"} + if s.Group == nil { + invalidParams.Add(request.NewErrParamRequired("Group")) + } + if s.GroupIdentifier == nil { + invalidParams.Add(request.NewErrParamRequired("GroupIdentifier")) + } + if s.Identifier == nil { + invalidParams.Add(request.NewErrParamRequired("Identifier")) + } + if s.RequestedDimensions != nil && len(s.RequestedDimensions) < 1 { + invalidParams.Add(request.NewErrParamMinLen("RequestedDimensions", 1)) + } + if s.ServiceType == nil { + invalidParams.Add(request.NewErrParamRequired("ServiceType")) + } + + if invalidParams.Len() > 0 { + return invalidParams + } + return nil +} + +// SetGroup sets the Group field's value. +func (s *GetDimensionKeyDetailsInput) SetGroup(v string) *GetDimensionKeyDetailsInput { + s.Group = &v + return s +} + +// SetGroupIdentifier sets the GroupIdentifier field's value. 
+func (s *GetDimensionKeyDetailsInput) SetGroupIdentifier(v string) *GetDimensionKeyDetailsInput { + s.GroupIdentifier = &v + return s +} + +// SetIdentifier sets the Identifier field's value. +func (s *GetDimensionKeyDetailsInput) SetIdentifier(v string) *GetDimensionKeyDetailsInput { + s.Identifier = &v + return s +} + +// SetRequestedDimensions sets the RequestedDimensions field's value. +func (s *GetDimensionKeyDetailsInput) SetRequestedDimensions(v []*string) *GetDimensionKeyDetailsInput { + s.RequestedDimensions = v + return s +} + +// SetServiceType sets the ServiceType field's value. +func (s *GetDimensionKeyDetailsInput) SetServiceType(v string) *GetDimensionKeyDetailsInput { + s.ServiceType = &v + return s +} + +type GetDimensionKeyDetailsOutput struct { + _ struct{} `type:"structure"` + + // The details for the requested dimensions. + Dimensions []*DimensionKeyDetail `type:"list"` +} + +// String returns the string representation +func (s GetDimensionKeyDetailsOutput) String() string { + return awsutil.Prettify(s) +} + +// GoString returns the string representation +func (s GetDimensionKeyDetailsOutput) GoString() string { + return s.String() +} + +// SetDimensions sets the Dimensions field's value. +func (s *GetDimensionKeyDetailsOutput) SetDimensions(v []*DimensionKeyDetail) *GetDimensionKeyDetailsOutput { + s.Dimensions = v + return s +} + type GetResourceMetricsInput struct { _ struct{} `type:"structure"` @@ -734,7 +1004,7 @@ type GetResourceMetricsInput struct { // An optional pagination token provided by a previous request. If this parameter // is specified, the response includes only records beyond the token, up to // the value specified by MaxRecords. - NextToken *string `type:"string"` + NextToken *string `min:"1" type:"string"` // The granularity, in seconds, of the data points returned from Performance // Insights. 
A period can be as short as one second, or as long as one day (86400 @@ -796,6 +1066,9 @@ func (s *GetResourceMetricsInput) Validate() error { if s.MetricQueries != nil && len(s.MetricQueries) < 1 { invalidParams.Add(request.NewErrParamMinLen("MetricQueries", 1)) } + if s.NextToken != nil && len(*s.NextToken) < 1 { + invalidParams.Add(request.NewErrParamMinLen("NextToken", 1)) + } if s.ServiceType == nil { invalidParams.Add(request.NewErrParamRequired("ServiceType")) } @@ -894,7 +1167,7 @@ type GetResourceMetricsOutput struct { // An optional pagination token provided by a previous request. If this parameter // is specified, the response includes only records beyond the token, up to // the value specified by MaxRecords. - NextToken *string `type:"string"` + NextToken *string `min:"1" type:"string"` } // String returns the string representation @@ -1308,6 +1581,26 @@ func (s *ResponseResourceMetricKey) SetMetric(v string) *ResponseResourceMetricK return s } +const ( + // DetailStatusAvailable is a DetailStatus enum value + DetailStatusAvailable = "AVAILABLE" + + // DetailStatusProcessing is a DetailStatus enum value + DetailStatusProcessing = "PROCESSING" + + // DetailStatusUnavailable is a DetailStatus enum value + DetailStatusUnavailable = "UNAVAILABLE" +) + +// DetailStatus_Values returns all elements of the DetailStatus enum +func DetailStatus_Values() []string { + return []string{ + DetailStatusAvailable, + DetailStatusProcessing, + DetailStatusUnavailable, + } +} + const ( // ServiceTypeRds is a ServiceType enum value ServiceTypeRds = "RDS" diff --git a/service/pi/piiface/interface.go b/service/pi/piiface/interface.go index 4694bef942a..a55df5518be 100644 --- a/service/pi/piiface/interface.go +++ b/service/pi/piiface/interface.go @@ -64,6 +64,10 @@ type PIAPI interface { DescribeDimensionKeysWithContext(aws.Context, *pi.DescribeDimensionKeysInput, ...request.Option) (*pi.DescribeDimensionKeysOutput, error) 
DescribeDimensionKeysRequest(*pi.DescribeDimensionKeysInput) (*request.Request, *pi.DescribeDimensionKeysOutput) + GetDimensionKeyDetails(*pi.GetDimensionKeyDetailsInput) (*pi.GetDimensionKeyDetailsOutput, error) + GetDimensionKeyDetailsWithContext(aws.Context, *pi.GetDimensionKeyDetailsInput, ...request.Option) (*pi.GetDimensionKeyDetailsOutput, error) + GetDimensionKeyDetailsRequest(*pi.GetDimensionKeyDetailsInput) (*request.Request, *pi.GetDimensionKeyDetailsOutput) + GetResourceMetrics(*pi.GetResourceMetricsInput) (*pi.GetResourceMetricsOutput, error) GetResourceMetricsWithContext(aws.Context, *pi.GetResourceMetricsInput, ...request.Option) (*pi.GetResourceMetricsOutput, error) GetResourceMetricsRequest(*pi.GetResourceMetricsInput) (*request.Request, *pi.GetResourceMetricsOutput) diff --git a/service/qldb/api.go b/service/qldb/api.go index b75a8e83275..a001fc0378a 100644 --- a/service/qldb/api.go +++ b/service/qldb/api.go @@ -147,7 +147,7 @@ func (c *QLDB) CreateLedgerRequest(input *CreateLedgerInput) (req *request.Reque // CreateLedger API operation for Amazon QLDB. // -// Creates a new ledger in your AWS account. +// Creates a new ledger in your AWS account in the current Region. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -239,10 +239,8 @@ func (c *QLDB) DeleteLedgerRequest(input *DeleteLedgerInput) (req *request.Reque // Deletes a ledger and all of its contents. This action is irreversible. // // If deletion protection is enabled, you must first disable it before you can -// delete the ledger using the QLDB API or the AWS Command Line Interface (AWS -// CLI). You can disable it by calling the UpdateLedger operation to set the -// flag to false. The QLDB console disables deletion protection for you when -// you use it to delete a ledger. +// delete the ledger. 
You can disable it by calling the UpdateLedger operation +// to set the flag to false. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -332,7 +330,11 @@ func (c *QLDB) DescribeJournalKinesisStreamRequest(input *DescribeJournalKinesis // // Returns detailed information about a given Amazon QLDB journal stream. The // output includes the Amazon Resource Name (ARN), stream name, current status, -// creation time, and the parameters of your original stream creation request. +// creation time, and the parameters of the original stream creation request. +// +// This action does not return any expired journal streams. For more information, +// see Expiration for terminal streams (https://docs.aws.amazon.com/qldb/latest/developerguide/streams.create.html#streams.create.states.expiration) +// in the Amazon QLDB Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about @@ -418,11 +420,11 @@ func (c *QLDB) DescribeJournalS3ExportRequest(input *DescribeJournalS3ExportInpu // DescribeJournalS3Export API operation for Amazon QLDB. // // Returns information about a journal export job, including the ledger name, -// export ID, when it was created, current status, and its start and end time -// export parameters. +// export ID, creation time, current status, and the parameters of the original +// export creation request. // // This action does not return any expired export jobs. For more information, -// see Export Job Expiration (https://docs.aws.amazon.com/qldb/latest/developerguide/export-journal.request.html#export-journal.request.expiration) +// see Export job expiration (https://docs.aws.amazon.com/qldb/latest/developerguide/export-journal.request.html#export-journal.request.expiration) // in the Amazon QLDB Developer Guide. 
// // If the export job with the given ExportId doesn't exist, then throws ResourceNotFoundException. @@ -959,6 +961,10 @@ func (c *QLDB) ListJournalKinesisStreamsForLedgerRequest(input *ListJournalKines // ledger. The output of each stream descriptor includes the same details that // are returned by DescribeJournalKinesisStream. // +// This action does not return any expired journal streams. For more information, +// see Expiration for terminal streams (https://docs.aws.amazon.com/qldb/latest/developerguide/streams.create.html#streams.create.states.expiration) +// in the Amazon QLDB Developer Guide. +// // This action returns a maximum of MaxResults items. It is paginated so that // you can retrieve all the items by calling ListJournalKinesisStreamsForLedger // multiple times. @@ -1111,7 +1117,7 @@ func (c *QLDB) ListJournalS3ExportsRequest(input *ListJournalS3ExportsInput) (re // you can retrieve all the items by calling ListJournalS3Exports multiple times. // // This action does not return any expired export jobs. For more information, -// see Export Job Expiration (https://docs.aws.amazon.com/qldb/latest/developerguide/export-journal.request.html#export-journal.request.expiration) +// see Export job expiration (https://docs.aws.amazon.com/qldb/latest/developerguide/export-journal.request.html#export-journal.request.expiration) // in the Amazon QLDB Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1251,7 +1257,7 @@ func (c *QLDB) ListJournalS3ExportsForLedgerRequest(input *ListJournalS3ExportsF // times. // // This action does not return any expired export jobs. 
For more information, -// see Export Job Expiration (https://docs.aws.amazon.com/qldb/latest/developerguide/export-journal.request.html#export-journal.request.expiration) +// see Export job expiration (https://docs.aws.amazon.com/qldb/latest/developerguide/export-journal.request.html#export-journal.request.expiration) // in the Amazon QLDB Developer Guide. // // Returns awserr.Error for service API and SDK errors. Use runtime type assertions @@ -1937,6 +1943,11 @@ func (c *QLDB) UpdateLedgerPermissionsModeRequest(input *UpdateLedgerPermissions // // Updates the permissions mode of a ledger. // +// Before you switch to the STANDARD permissions mode, you must first create +// all required IAM policies and table tags to avoid disruption to your users. +// To learn more, see Migrating to the standard permissions mode (https://docs.aws.amazon.com/qldb/latest/developerguide/ledger-management.basics.html#ledger-mgmt.basics.update-permissions.migrating) +// in the Amazon QLDB Developer Guide. +// // Returns awserr.Error for service API and SDK errors. Use runtime type assertions // with awserr.Error's Code and Message methods to get detailed information about // the error. @@ -1981,7 +1992,8 @@ type CancelJournalKinesisStreamInput struct { // LedgerName is a required field LedgerName *string `location:"uri" locationName:"name" min:"1" type:"string" required:"true"` - // The unique ID that QLDB assigns to each QLDB journal stream. + // The UUID (represented in Base62-encoded text) of the QLDB journal stream + // to be canceled. // // StreamId is a required field StreamId *string `location:"uri" locationName:"streamId" min:"22" type:"string" required:"true"` @@ -2034,7 +2046,7 @@ func (s *CancelJournalKinesisStreamInput) SetStreamId(v string) *CancelJournalKi type CancelJournalKinesisStreamOutput struct { _ struct{} `type:"structure"` - // The unique ID that QLDB assigns to each QLDB journal stream. + // The UUID (Base62-encoded text) of the canceled QLDB journal stream. 
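The hunks above replace "unique ID" with "UUID (represented in Base62-encoded text)" for stream and export IDs, which carry a `min:"22"` length constraint. A minimal sketch of why 22 Base62 characters always suffice for a 128-bit UUID; the alphabet ordering and zero-padding here are illustrative assumptions, not QLDB's documented encoding:

```go
package main

import (
	"fmt"
	"math/big"
)

// base62Alphabet is an assumed digit ordering for illustration only;
// QLDB does not document the exact alphabet it uses for these IDs.
const base62Alphabet = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

// encodeBase62 renders a 128-bit value as Base62 text, left-padded with
// "0" to 22 characters. Since 62^22 > 2^128, 22 digits always suffice,
// which lines up with the min:"22" constraint on StreamId and ExportId.
func encodeBase62(uuid [16]byte) string {
	n := new(big.Int).SetBytes(uuid[:])
	base := big.NewInt(62)
	mod := new(big.Int)
	out := make([]byte, 0, 22)
	for n.Sign() > 0 {
		n.DivMod(n, base, mod) // peel off the least-significant Base62 digit
		out = append(out, base62Alphabet[mod.Int64()])
	}
	for len(out) < 22 {
		out = append(out, '0') // pad short values up to the fixed width
	}
	// digits were produced least-significant first; reverse into place
	for i, j := 0, len(out)-1; i < j; i, j = i+1, j-1 {
		out[i], out[j] = out[j], out[i]
	}
	return string(out)
}

func main() {
	var uuid [16]byte
	for i := range uuid {
		uuid[i] = byte(i + 1)
	}
	fmt.Println(encodeBase62(uuid)) // always exactly 22 characters
}
```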
StreamId *string `min:"22" type:"string"` } @@ -2061,10 +2073,8 @@ type CreateLedgerInput struct { // on ledger creation, this feature is enabled (true) by default. // // If deletion protection is enabled, you must first disable it before you can - // delete the ledger using the QLDB API or the AWS Command Line Interface (AWS - // CLI). You can disable it by calling the UpdateLedger operation to set the - // flag to false. The QLDB console disables deletion protection for you when - // you use it to delete a ledger. + // delete the ledger. You can disable it by calling the UpdateLedger operation + // to set the flag to false. DeletionProtection *bool `type:"boolean"` // The name of the ledger that you want to create. The name must be unique among @@ -2081,17 +2091,20 @@ type CreateLedgerInput struct { // parameter can have one of the following values: // // * ALLOW_ALL: A legacy permissions mode that enables access control with - // API-level granularity for ledgers. This mode allows users who have SendCommand - // permissions for this ledger to run all PartiQL commands (hence, ALLOW_ALL) - // on any tables in the specified ledger. This mode disregards any table-level - // or command-level IAM permissions policies that you create for the ledger. + // API-level granularity for ledgers. This mode allows users who have the + // SendCommand API permission for this ledger to run all PartiQL commands + // (hence, ALLOW_ALL) on any tables in the specified ledger. This mode disregards + // any table-level or command-level IAM permissions policies that you create + // for the ledger. // // * STANDARD: (Recommended) A permissions mode that enables access control // with finer granularity for ledgers, tables, and PartiQL commands. By default, // this mode denies all user requests to run any PartiQL commands on any // tables in this ledger. 
To allow PartiQL commands to run, you must create // IAM permissions policies for specific table resources and PartiQL actions, - // in addition to SendCommand API permissions for the ledger. + // in addition to the SendCommand API permission for the ledger. For information, + // see Getting started with the standard permissions mode (https://docs.aws.amazon.com/qldb/latest/developerguide/getting-started-standard-mode.html) + // in the Amazon QLDB Developer Guide. // // We strongly recommend using the STANDARD permissions mode to maximize the // security of your ledger data. @@ -2172,10 +2185,8 @@ type CreateLedgerOutput struct { // on ledger creation, this feature is enabled (true) by default. // // If deletion protection is enabled, you must first disable it before you can - // delete the ledger using the QLDB API or the AWS Command Line Interface (AWS - // CLI). You can disable it by calling the UpdateLedger operation to set the - // flag to false. The QLDB console disables deletion protection for you when - // you use it to delete a ledger. + // delete the ledger. You can disable it by calling the UpdateLedger operation + // to set the flag to false. DeletionProtection *bool `type:"boolean"` // The name of the ledger. @@ -2297,7 +2308,8 @@ type DescribeJournalKinesisStreamInput struct { // LedgerName is a required field LedgerName *string `location:"uri" locationName:"name" min:"1" type:"string" required:"true"` - // The unique ID that QLDB assigns to each QLDB journal stream. + // The UUID (represented in Base62-encoded text) of the QLDB journal stream + // to describe. // // StreamId is a required field StreamId *string `location:"uri" locationName:"streamId" min:"22" type:"string" required:"true"` @@ -2374,7 +2386,8 @@ func (s *DescribeJournalKinesisStreamOutput) SetStream(v *JournalKinesisStreamDe type DescribeJournalS3ExportInput struct { _ struct{} `type:"structure"` - // The unique ID of the journal export job that you want to describe. 
+ // The UUID (represented in Base62-encoded text) of the journal export job to + // describe. // // ExportId is a required field ExportId *string `location:"uri" locationName:"exportId" min:"22" type:"string" required:"true"` @@ -2511,10 +2524,8 @@ type DescribeLedgerOutput struct { // on ledger creation, this feature is enabled (true) by default. // // If deletion protection is enabled, you must first disable it before you can - // delete the ledger using the QLDB API or the AWS Command Line Interface (AWS - // CLI). You can disable it by calling the UpdateLedger operation to set the - // flag to false. The QLDB console disables deletion protection for you when - // you use it to delete a ledger. + // delete the ledger. You can disable it by calling the UpdateLedger operation + // to set the flag to false. DeletionProtection *bool `type:"boolean"` // The name of the ledger. @@ -2576,11 +2587,10 @@ func (s *DescribeLedgerOutput) SetState(v string) *DescribeLedgerOutput { type ExportJournalToS3Input struct { _ struct{} `type:"structure"` - // The exclusive end date and time for the range of journal contents that you - // want to export. + // The exclusive end date and time for the range of journal contents to export. // // The ExclusiveEndTime must be in ISO 8601 date and time format and in Universal - // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z + // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z. // // The ExclusiveEndTime must be less than or equal to the current UTC date and // time. @@ -2588,11 +2598,10 @@ type ExportJournalToS3Input struct { // ExclusiveEndTime is a required field ExclusiveEndTime *time.Time `type:"timestamp" required:"true"` - // The inclusive start date and time for the range of journal contents that - // you want to export. + // The inclusive start date and time for the range of journal contents to export. 
// // The InclusiveStartTime must be in ISO 8601 date and time format and in Universal - // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z + // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z. // // The InclusiveStartTime must be before ExclusiveEndTime. // @@ -2704,7 +2713,8 @@ func (s *ExportJournalToS3Input) SetS3ExportConfiguration(v *S3ExportConfigurati type ExportJournalToS3Output struct { _ struct{} `type:"structure"` - // The unique ID that QLDB assigns to each journal export job. + // The UUID (represented in Base62-encoded text) that QLDB assigns to each journal + // export job. // // To describe your export request and check the status of the job, you can // use ExportId to call DescribeJournalS3Export. @@ -2735,7 +2745,7 @@ type GetBlockInput struct { // The location of the block that you want to request. An address is an Amazon // Ion structure that has two fields: strandId and sequenceNo. // - // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:14} + // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:14}. // // BlockAddress is a required field BlockAddress *ValueHolder `type:"structure" required:"true" sensitive:"true"` @@ -2743,7 +2753,7 @@ type GetBlockInput struct { // The latest block location covered by the digest for which to request a proof. // An address is an Amazon Ion structure that has two fields: strandId and sequenceNo. // - // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:49} + // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:49}. DigestTipAddress *ValueHolder `type:"structure" sensitive:"true"` // The name of the ledger. @@ -2931,7 +2941,7 @@ type GetRevisionInput struct { // The block location of the document revision to be verified. An address is // an Amazon Ion structure that has two fields: strandId and sequenceNo. // - // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:14} + // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:14}. 
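Several of the timestamp fields above (`ExclusiveEndTime`, `InclusiveStartTime`) must be in ISO 8601 format and in UTC, as in the documented example `2019-06-13T21:36:34Z`. A client-side validation sketch using Go's `time.RFC3339` layout; this helper is illustrative and not part of the SDK:

```go
package main

import (
	"fmt"
	"time"
)

// checkJournalTime validates a timestamp string the way the QLDB field docs
// describe: ISO 8601 date-time, expressed in UTC (trailing "Z").
func checkJournalTime(s string) (time.Time, error) {
	// RFC 3339 is the ISO 8601 profile that Go's time package implements.
	t, err := time.Parse(time.RFC3339, s)
	if err != nil {
		return time.Time{}, err
	}
	// Reject values carrying a non-UTC zone offset such as "+02:00".
	if t.Location() != time.UTC {
		return time.Time{}, fmt.Errorf("timestamp %q is not in UTC", s)
	}
	return t, nil
}

func main() {
	t, err := checkJournalTime("2019-06-13T21:36:34Z")
	fmt.Println(t, err)
}
```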
// // BlockAddress is a required field BlockAddress *ValueHolder `type:"structure" required:"true" sensitive:"true"` @@ -2939,10 +2949,10 @@ type GetRevisionInput struct { // The latest block location covered by the digest for which to request a proof. // An address is an Amazon Ion structure that has two fields: strandId and sequenceNo. // - // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:49} + // For example: {strandId:"BlFTjlSXze9BIh1KOszcE3",sequenceNo:49}. DigestTipAddress *ValueHolder `type:"structure" sensitive:"true"` - // The unique ID of the document to be verified. + // The UUID (represented in Base62-encoded text) of the document to be verified. // // DocumentId is a required field DocumentId *string `min:"22" type:"string" required:"true"` @@ -3118,9 +3128,9 @@ func (s *InvalidParameterException) RequestID() string { return s.RespMetadata.RequestID } -// The information about an Amazon QLDB journal stream, including the Amazon -// Resource Name (ARN), stream name, creation time, current status, and the -// parameters of your original stream creation request. +// Information about an Amazon QLDB journal stream, including the Amazon Resource +// Name (ARN), stream name, creation time, current status, and the parameters +// of the original stream creation request. type JournalKinesisStreamDescription struct { _ struct{} `type:"structure"` @@ -3145,7 +3155,7 @@ type JournalKinesisStreamDescription struct { InclusiveStartTime *time.Time `type:"timestamp"` // The configuration settings of the Amazon Kinesis Data Streams destination - // for your QLDB journal stream. + // for a QLDB journal stream. 
// // KinesisConfiguration is a required field KinesisConfiguration *KinesisConfiguration `type:"structure" required:"true"` @@ -3166,7 +3176,7 @@ type JournalKinesisStreamDescription struct { // Status is a required field Status *string `type:"string" required:"true" enum:"StreamStatus"` - // The unique ID that QLDB assigns to each QLDB journal stream. + // The UUID (represented in Base62-encoded text) of the QLDB journal stream. // // StreamId is a required field StreamId *string `min:"22" type:"string" required:"true"` @@ -3253,9 +3263,9 @@ func (s *JournalKinesisStreamDescription) SetStreamName(v string) *JournalKinesi return s } -// The information about a journal export job, including the ledger name, export -// ID, when it was created, current status, and its start and end time export -// parameters. +// Information about a journal export job, including the ledger name, export +// ID, creation time, current status, and the parameters of the original export +// creation request. type JournalS3ExportDescription struct { _ struct{} `type:"structure"` @@ -3272,7 +3282,7 @@ type JournalS3ExportDescription struct { // ExportCreationTime is a required field ExportCreationTime *time.Time `type:"timestamp" required:"true"` - // The unique ID of the journal export job. + // The UUID (represented in Base62-encoded text) of the journal export job. // // ExportId is a required field ExportId *string `min:"22" type:"string" required:"true"` @@ -3370,16 +3380,21 @@ func (s *JournalS3ExportDescription) SetStatus(v string) *JournalS3ExportDescrip } // The configuration settings of the Amazon Kinesis Data Streams destination -// for your Amazon QLDB journal stream. +// for an Amazon QLDB journal stream. type KinesisConfiguration struct { _ struct{} `type:"structure"` // Enables QLDB to publish multiple data records in a single Kinesis Data Streams - // record. 
To learn more, see KPL Key Concepts (https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-concepts.html) + // record, increasing the number of records sent per API call. + // + // This option is enabled by default. Record aggregation has important implications + // for processing records and requires de-aggregation in your stream consumer. + // To learn more, see KPL Key Concepts (https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-concepts.html) + // and Consumer De-aggregation (https://docs.aws.amazon.com/streams/latest/dev/kinesis-kpl-consumer-deaggregation.html) // in the Amazon Kinesis Data Streams Developer Guide. AggregationEnabled *bool `type:"boolean"` - // The Amazon Resource Name (ARN) of the Kinesis data stream resource. + // The Amazon Resource Name (ARN) of the Kinesis Data Streams resource. // // StreamArn is a required field StreamArn *string `min:"20" type:"string" required:"true"` @@ -3923,7 +3938,7 @@ func (s *ListLedgersOutput) SetNextToken(v string) *ListLedgersOutput { type ListTagsForResourceInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) for which you want to list the tags. For example: + // The Amazon Resource Name (ARN) for which to list the tags. For example: // // arn:aws:qldb:us-east-1:123456789012:ledger/exampleLedger // @@ -4240,7 +4255,7 @@ type S3EncryptionConfiguration struct { _ struct{} `type:"structure"` // The Amazon Resource Name (ARN) for a symmetric customer master key (CMK) - // in AWS Key Management Service (AWS KMS). Amazon QLDB does not support asymmetric + // in AWS Key Management Service (AWS KMS). Amazon S3 does not support asymmetric // CMKs. // // You must provide a KmsKeyArn if you specify SSE_KMS as the ObjectEncryptionType. @@ -4398,12 +4413,12 @@ type StreamJournalToKinesisInput struct { // define this parameter, the stream runs indefinitely until you cancel it. 
// // The ExclusiveEndTime must be in ISO 8601 date and time format and in Universal - // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z + // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z. ExclusiveEndTime *time.Time `type:"timestamp"` // The inclusive start date and time from which to start streaming journal data. // This parameter must be in ISO 8601 date and time format and in Universal - // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z + // Coordinated Time (UTC). For example: 2019-06-13T21:36:34Z. // // The InclusiveStartTime cannot be in the future and must be before ExclusiveEndTime. // @@ -4540,7 +4555,8 @@ func (s *StreamJournalToKinesisInput) SetTags(v map[string]*string) *StreamJourn type StreamJournalToKinesisOutput struct { _ struct{} `type:"structure"` - // The unique ID that QLDB assigns to each QLDB journal stream. + // The UUID (represented in Base62-encoded text) that QLDB assigns to each QLDB + // journal stream. StreamId *string `min:"22" type:"string"` } @@ -4637,15 +4653,14 @@ func (s TagResourceOutput) GoString() string { type UntagResourceInput struct { _ struct{} `type:"structure"` - // The Amazon Resource Name (ARN) from which you want to remove the tags. For - // example: + // The Amazon Resource Name (ARN) from which to remove the tags. For example: // // arn:aws:qldb:us-east-1:123456789012:ledger/exampleLedger // // ResourceArn is a required field ResourceArn *string `location:"uri" locationName:"resourceArn" min:"20" type:"string" required:"true"` - // The list of tag keys that you want to remove. + // The list of tag keys to remove. // // TagKeys is a required field TagKeys []*string `location:"querystring" locationName:"tagKeys" type:"list" required:"true"` @@ -4713,10 +4728,8 @@ type UpdateLedgerInput struct { // on ledger creation, this feature is enabled (true) by default. 
// // If deletion protection is enabled, you must first disable it before you can - // delete the ledger using the QLDB API or the AWS Command Line Interface (AWS - // CLI). You can disable it by calling the UpdateLedger operation to set the - // flag to false. The QLDB console disables deletion protection for you when - // you use it to delete a ledger. + // delete the ledger. You can disable it by calling the UpdateLedger operation + // to set the flag to false. DeletionProtection *bool `type:"boolean"` // The name of the ledger. @@ -4778,10 +4791,8 @@ type UpdateLedgerOutput struct { // on ledger creation, this feature is enabled (true) by default. // // If deletion protection is enabled, you must first disable it before you can - // delete the ledger using the QLDB API or the AWS Command Line Interface (AWS - // CLI). You can disable it by calling the UpdateLedger operation to set the - // flag to false. The QLDB console disables deletion protection for you when - // you use it to delete a ledger. + // delete the ledger. You can disable it by calling the UpdateLedger operation + // to set the flag to false. DeletionProtection *bool `type:"boolean"` // The name of the ledger. @@ -4843,17 +4854,20 @@ type UpdateLedgerPermissionsModeInput struct { // of the following values: // // * ALLOW_ALL: A legacy permissions mode that enables access control with - // API-level granularity for ledgers. This mode allows users who have SendCommand - // permissions for this ledger to run all PartiQL commands (hence, ALLOW_ALL) - // on any tables in the specified ledger. This mode disregards any table-level - // or command-level IAM permissions policies that you create for the ledger. + // API-level granularity for ledgers. This mode allows users who have the + // SendCommand API permission for this ledger to run all PartiQL commands + // (hence, ALLOW_ALL) on any tables in the specified ledger. 
This mode disregards + // any table-level or command-level IAM permissions policies that you create + // for the ledger. // // * STANDARD: (Recommended) A permissions mode that enables access control // with finer granularity for ledgers, tables, and PartiQL commands. By default, // this mode denies all user requests to run any PartiQL commands on any // tables in this ledger. To allow PartiQL commands to run, you must create // IAM permissions policies for specific table resources and PartiQL actions, - // in addition to SendCommand API permissions for the ledger. + // in addition to the SendCommand API permission for the ledger. For information, + // see Getting started with the standard permissions mode (https://docs.aws.amazon.com/qldb/latest/developerguide/getting-started-standard-mode.html) + // in the Amazon QLDB Developer Guide. // // We strongly recommend using the STANDARD permissions mode to maximize the // security of your ledger data. diff --git a/service/rds/api.go b/service/rds/api.go index cb9b9c35f4e..c0b2778c20d 100644 --- a/service/rds/api.go +++ b/service/rds/api.go @@ -10805,10 +10805,6 @@ func (c *RDS) ModifyDBClusterParameterGroupRequest(input *ModifyDBClusterParamet // For more information on Amazon Aurora, see What Is Amazon Aurora? (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html) // in the Amazon Aurora User Guide. // -// Changes to dynamic parameters are applied immediately. Changes to static -// parameters require a reboot without failover to the DB cluster associated -// with the parameter group before the change can take effect. -// // After you create a DB cluster parameter group, you should wait at least 5 // minutes before creating your first DB cluster that uses that DB cluster parameter // group as the default parameter group. 
This allows Amazon RDS to fully complete @@ -11163,10 +11159,6 @@ func (c *RDS) ModifyDBParameterGroupRequest(input *ModifyDBParameterGroupInput) // parameter, submit a list of the following: ParameterName, ParameterValue, // and ApplyMethod. A maximum of 20 parameters can be modified in a single request. // -// Changes to dynamic parameters are applied immediately. Changes to static -// parameters require a reboot without failover to the DB instance associated -// with the parameter group before the change can take effect. -// // After you modify a DB parameter group, you should wait at least 5 minutes // before creating your first DB instance that uses that DB parameter group // as the default parameter group. This allows Amazon RDS to fully complete @@ -18367,6 +18359,28 @@ type CreateDBClusterParameterGroupInput struct { // // Example: aurora-postgresql9.6 // + // To list all of the available parameter group families for a DB engine, use + // the following command: + // + // aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily" + // --engine + // + // For example, to list all of the available parameter group families for the + // Aurora PostgreSQL DB engine, use the following command: + // + // aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily" + // --engine aurora-postgresql + // + // The output contains duplicates. + // + // The following are the valid DB engine values: + // + // * aurora (for MySQL 5.6-compatible Aurora) + // + // * aurora-mysql (for MySQL 5.7-compatible Aurora) + // + // * aurora-postgresql + // // DBParameterGroupFamily is a required field DBParameterGroupFamily *string `type:"string" required:"true"` @@ -20319,13 +20333,50 @@ type CreateDBParameterGroupInput struct { // to a DB instance running a database engine and engine version compatible // with that DB parameter group family. 
// - // To list all of the available parameter group families, use the following - // command: + // To list all of the available parameter group families for a DB engine, use + // the following command: // // aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily" + // --engine + // + // For example, to list all of the available parameter group families for the + // MySQL DB engine, use the following command: + // + // aws rds describe-db-engine-versions --query "DBEngineVersions[].DBParameterGroupFamily" + // --engine mysql // // The output contains duplicates. // + // The following are the valid DB engine values: + // + // * aurora (for MySQL 5.6-compatible Aurora) + // + // * aurora-mysql (for MySQL 5.7-compatible Aurora) + // + // * aurora-postgresql + // + // * mariadb + // + // * mysql + // + // * oracle-ee + // + // * oracle-se2 + // + // * oracle-se1 + // + // * oracle-se + // + // * postgres + // + // * sqlserver-ee + // + // * sqlserver-se + // + // * sqlserver-ex + // + // * sqlserver-web + // // DBParameterGroupFamily is a required field DBParameterGroupFamily *string `type:"string" required:"true"` @@ -23061,7 +23112,8 @@ type DBInstance struct { // Specifies the current state of this database. // - // For information about DB instance statuses, see DB Instance Status (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Overview.DBInstance.Status.html) + // For information about DB instance statuses, see Viewing DB instance status + // (https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/accessing-monitoring.html#Overview.DBInstance.Status) // in the Amazon RDS User Guide. DBInstanceStatus *string `type:"string"` @@ -34646,6 +34698,17 @@ type ModifyDBClusterParameterGroupInput struct { // A list of parameters in the DB cluster parameter group to modify. // + // Valid Values (for the application method): immediate | pending-reboot + // + // You can use the immediate value with dynamic parameters only. 
You can use + // the pending-reboot value for both dynamic and static parameters. + // + // When the application method is immediate, changes to dynamic parameters are + // applied immediately to the DB clusters associated with the parameter group. + // When the application method is pending-reboot, changes to dynamic and static + // parameters are applied after a reboot without failover to the DB clusters + // associated with the parameter group. + // // Parameters is a required field Parameters []*Parameter `locationNameList:"Parameter" type:"list" required:"true"` } @@ -35728,16 +35791,21 @@ type ModifyDBParameterGroupInput struct { // DBParameterGroupName is a required field DBParameterGroupName *string `type:"string" required:"true"` - // An array of parameter names, values, and the apply method for the parameter - // update. At least one parameter name, value, and apply method must be supplied; - // later arguments are optional. A maximum of 20 parameters can be modified - // in a single request. + // An array of parameter names, values, and the application methods for the + // parameter update. At least one parameter name, value, and application method + // must be supplied; later arguments are optional. A maximum of 20 parameters + // can be modified in a single request. + // + // Valid Values (for the application method): immediate | pending-reboot + // + // You can use the immediate value with dynamic parameters only. You can use - // the pending-reboot value for both dynamic and static parameters, and changes - // are applied when you reboot the DB instance without failover. + // the pending-reboot value for both dynamic and static parameters. + // + // When the application method is immediate, changes to dynamic parameters are + // applied immediately to the DB instances associated with the parameter group.
+ // When the application method is pending-reboot, changes to dynamic and static + // parameters are applied after a reboot without failover to the DB instances + // associated with the parameter group. // // Parameters is a required field Parameters []*Parameter `locationNameList:"Parameter" type:"list" required:"true"`
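The new ModifyDBParameterGroup and ModifyDBClusterParameterGroup text above spells out the rule for the two application methods: `immediate` works for dynamic parameters only, while `pending-reboot` works for both dynamic and static parameters. That rule can be sketched as a small validity check; the types and parameter classification here are illustrative, not the SDK's own:

```go
package main

import "fmt"

// ApplyMethod models the two values the documentation above allows.
type ApplyMethod string

const (
	Immediate     ApplyMethod = "immediate"
	PendingReboot ApplyMethod = "pending-reboot"
)

// validApplyMethod encodes the documented rule: immediate may only be used
// with dynamic parameters; pending-reboot is valid for dynamic and static
// parameters alike (changes land after a reboot without failover).
func validApplyMethod(isDynamic bool, m ApplyMethod) bool {
	switch m {
	case Immediate:
		return isDynamic
	case PendingReboot:
		return true
	}
	return false // any other application method is rejected
}

func main() {
	fmt.Println(validApplyMethod(true, Immediate))      // dynamic + immediate: allowed
	fmt.Println(validApplyMethod(false, Immediate))     // static + immediate: rejected
	fmt.Println(validApplyMethod(false, PendingReboot)) // static + pending-reboot: allowed
}
```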