diff --git a/.changes/next-release/api-change-rds-75020.json b/.changes/next-release/api-change-rds-75020.json
new file mode 100644
index 0000000000..b466519124
--- /dev/null
+++ b/.changes/next-release/api-change-rds-75020.json
@@ -0,0 +1,5 @@
+{
+  "type": "api-change",
+  "category": "``rds``",
+  "description": "Updates Amazon RDS documentation to clarify the RestoreDBClusterToPointInTime description."
+}
diff --git a/.changes/next-release/api-change-route53-49506.json b/.changes/next-release/api-change-route53-49506.json
new file mode 100644
index 0000000000..193ff1ba6a
--- /dev/null
+++ b/.changes/next-release/api-change-route53-49506.json
@@ -0,0 +1,5 @@
+{
+  "type": "api-change",
+  "category": "``route53``",
+  "description": "Amazon Route 53 now supports the Asia Pacific (Thailand) Region (ap-southeast-7) for latency records, geoproximity records, and private DNS for Amazon VPCs in that region"
+}
diff --git a/.changes/next-release/api-change-sagemaker-71036.json b/.changes/next-release/api-change-sagemaker-71036.json
new file mode 100644
index 0000000000..3a59983974
--- /dev/null
+++ b/.changes/next-release/api-change-sagemaker-71036.json
@@ -0,0 +1,5 @@
+{
+  "type": "api-change",
+  "category": "``sagemaker``",
+  "description": "Adds support for IPv6 for SageMaker HyperPod cluster nodes."
+}
diff --git a/botocore/data/rds/2014-10-31/service-2.json b/botocore/data/rds/2014-10-31/service-2.json
index e4e051dc96..f9b93f2fae 100644
--- a/botocore/data/rds/2014-10-31/service-2.json
+++ b/botocore/data/rds/2014-10-31/service-2.json
@@ -4258,7 +4258,7 @@
         },
         "EnableHttpEndpoint":{
           "shape":"BooleanOptional",
-          "documentation":"Specifies whether to enable the HTTP endpoint for the DB cluster. By default, the HTTP endpoint isn't enabled. When enabled, the HTTP endpoint provides a connectionless web service API (RDS Data API) for running SQL queries on the DB cluster. You can also query your database from inside the RDS console with the RDS query editor. RDS Data API is supported with the following DB clusters: For more information, see Using RDS Data API in the Amazon Aurora User Guide. Valid for Cluster Type: Aurora DB clusters only"
+          "documentation":"Specifies whether to enable the HTTP endpoint for the DB cluster. By default, the HTTP endpoint isn't enabled. When enabled, the HTTP endpoint provides a connectionless web service API (RDS Data API) for running SQL queries on the DB cluster. You can also query your database from inside the RDS console with the RDS query editor. For more information, see Using RDS Data API in the Amazon Aurora User Guide. Valid for Cluster Type: Aurora DB clusters only"
         },
         "CopyTagsToSnapshot":{
           "shape":"BooleanOptional",

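Note on the EnableHttpEndpoint hunk above: a minimal sketch of turning on the Data API for an existing Aurora cluster and running a query through it with boto3. The cluster identifier, cluster ARN, and Secrets Manager ARN are hypothetical placeholders, and depending on the cluster type the dedicated EnableHttpEndpoint/DisableHttpEndpoint APIs may apply instead of ModifyDBCluster.

```python
import boto3

rds = boto3.client("rds")
rds_data = boto3.client("rds-data")

# Enable the HTTP endpoint (RDS Data API) on an existing Aurora DB cluster.
rds.modify_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",  # hypothetical identifier
    EnableHttpEndpoint=True,
)

# Once the endpoint is active, SQL can be run without managing connections.
response = rds_data.execute_statement(
    resourceArn="arn:aws:rds:us-east-1:123456789012:cluster:my-aurora-cluster",  # hypothetical ARN
    secretArn="arn:aws:secretsmanager:us-east-1:123456789012:secret:my-db-secret",  # hypothetical ARN
    database="mydb",
    sql="SELECT 1",
)
print(response["records"])
```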
@@ -12595,7 +12595,7 @@
         },
         "Iops":{
           "shape":"IntegerOptional",
-          "documentation":"The new Provisioned IOPS (I/O operations per second) value for the RDS instance. Changing this setting doesn't result in an outage and the change is applied during the next maintenance window unless the ApplyImmediately parameter is enabled for this request. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect. If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance. Constraints: Default: Uses existing setting"
+          "documentation":"The new Provisioned IOPS (I/O operations per second) value for the RDS instance. Changing this setting doesn't result in an outage and the change is applied during the next maintenance window unless the ApplyImmediately parameter is enabled for this request. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect. If you choose to migrate your DB instance from using standard storage to Provisioned IOPS (io1), or from Provisioned IOPS to standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance. Constraints: Default: Uses existing setting"
         },
         "OptionGroupName":{
           "shape":"String",

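For context on the Iops wording change above, a small sketch of how this parameter is typically passed from boto3 when scaling provisioned IOPS on an existing instance. The instance identifier and IOPS value are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Raise provisioned IOPS; without ApplyImmediately the change waits for the
# next maintenance window, as the documentation above describes.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",  # hypothetical identifier
    Iops=3000,
    ApplyImmediately=True,
)
```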
@@ -12607,7 +12607,7 @@
         },
         "StorageType":{
           "shape":"String",
-          "documentation":"The storage type to associate with the DB instance. If you specify io1, io2, or gp3 you must also include a value for the Iops parameter. If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance. Valid Values: gp2 | gp3 | io1 | io2 | standard Default: io1, if the Iops parameter is specified. Otherwise, gp2."
+          "documentation":"The storage type to associate with the DB instance. If you specify io1, io2, or gp3 you must also include a value for the Iops parameter. If you choose to migrate your DB instance from using standard storage to gp2 (General Purpose SSD), gp3, or Provisioned IOPS (io1), or from these storage types to standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance. Valid Values: gp2 | gp3 | io1 | io2 | standard Default: io1, if the Iops parameter is specified. Otherwise, gp2."
         },
         "TdeCredentialArn":{
           "shape":"String",

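Similarly, a sketch of the storage-type migration the StorageType documentation above describes, moving an instance to Provisioned IOPS (io1). Identifiers and values are hypothetical; per the documentation, an Iops value accompanies io1, io2, or gp3.

```python
import boto3

rds = boto3.client("rds")

# Migrate from standard/gp2 storage to Provisioned IOPS (io1). Because io1 is
# specified, an Iops value is included as well.
rds.modify_db_instance(
    DBInstanceIdentifier="my-db-instance",  # hypothetical identifier
    StorageType="io1",
    Iops=3000,
    ApplyImmediately=True,
)
```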
@@ -15255,7 +15255,7 @@
         },
         "EngineMode":{
           "shape":"String",
-          "documentation":"The engine mode of the new cluster. Specify provisioned or serverless, depending on the type of the cluster you are creating. You can create an Aurora Serverless v1 clone from a provisioned cluster, or a provisioned clone from an Aurora Serverless v1 cluster. To create a clone that is an Aurora Serverless v1 cluster, the original cluster must be an Aurora Serverless v1 cluster or an encrypted provisioned cluster. Valid for: Aurora DB clusters only"
+          "documentation":"The engine mode of the new cluster. Specify provisioned or serverless, depending on the type of the cluster you are creating. You can create an Aurora Serverless v1 clone from a provisioned cluster, or a provisioned clone from an Aurora Serverless v1 cluster. To create a clone that is an Aurora Serverless v1 cluster, the original cluster must be an Aurora Serverless v1 cluster or an encrypted provisioned cluster. To create a full copy that is an Aurora Serverless v1 cluster, specify the engine mode serverless. Valid for: Aurora DB clusters only"
         },
         "DBClusterInstanceClass":{
           "shape":"String",

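The EngineMode hunk above is the clarification called out in the RDS changelog entry for RestoreDBClusterToPointInTime. A minimal sketch of the call it documents, cloning into an Aurora Serverless v1 cluster; identifiers are hypothetical.

```python
import boto3

rds = boto3.client("rds")

# Point-in-time restore (copy-on-write clone) into an Aurora Serverless v1 cluster.
# Per the documentation above, the source must be a Serverless v1 cluster or an
# encrypted provisioned cluster.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="my-serverless-clone",      # hypothetical target
    SourceDBClusterIdentifier="my-aurora-cluster",  # hypothetical source
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
    EngineMode="serverless",
)
```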
diff --git a/botocore/data/route53/2013-04-01/service-2.json b/botocore/data/route53/2013-04-01/service-2.json
index 917e401213..0b640508e4 100644
--- a/botocore/data/route53/2013-04-01/service-2.json
+++ b/botocore/data/route53/2013-04-01/service-2.json
@@ -1796,7 +1796,8 @@
         "ap-southeast-4",
         "il-central-1",
         "ca-west-1",
-        "ap-southeast-5"
+        "ap-southeast-5",
+        "ap-southeast-7"
       ],
       "max":64,
       "min":1
@@ -5562,7 +5563,8 @@
         "ap-southeast-4",
         "il-central-1",
         "ca-west-1",
-        "ap-southeast-5"
+        "ap-southeast-5",
+        "ap-southeast-7"
       ],
       "max":64,
       "min":1
@@ -6441,7 +6443,8 @@
         "ap-southeast-4",
         "il-central-1",
         "ca-west-1",
-        "ap-southeast-5"
+        "ap-southeast-5",
+        "ap-southeast-7"
       ],
       "max":64,
       "min":1

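The Route 53 hunks above add ap-southeast-7 to the Region enums. A sketch of creating a latency-based record in the new Region; the hosted zone ID, domain name, and IP address are hypothetical.

```python
import boto3

route53 = boto3.client("route53")

# Latency-based routing record for the new Asia Pacific (Thailand) Region.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000EXAMPLE",  # hypothetical hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "ap-southeast-7",
                    "Region": "ap-southeast-7",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            }
        ]
    },
)
```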
diff --git a/botocore/data/sagemaker/2017-07-24/service-2.json b/botocore/data/sagemaker/2017-07-24/service-2.json
index e391d196cc..622ea7e6b3 100644
--- a/botocore/data/sagemaker/2017-07-24/service-2.json
+++ b/botocore/data/sagemaker/2017-07-24/service-2.json
@@ -7947,6 +7947,10 @@
           "shape":"ClusterPrivatePrimaryIp",
           "documentation":"The private primary IP address of the SageMaker HyperPod cluster node."
         },
+        "PrivatePrimaryIpv6":{
+          "shape":"ClusterPrivatePrimaryIpv6",
+          "documentation":"The private primary IPv6 address of the SageMaker HyperPod cluster node."
+        },
         "PrivateDnsHostname":{
           "shape":"ClusterPrivateDnsHostname",
           "documentation":"The private DNS hostname of the SageMaker HyperPod cluster node."
@@ -8048,6 +8052,7 @@
       "type":"string",
      "pattern":"^((25[0-5]|(2[0-4]|1\\d|[1-9]|)\\d)\\.?\\b){4}$"
     },
+    "ClusterPrivatePrimaryIpv6":{"type":"string"},
     "ClusterSchedulerConfigArn":{
       "type":"string",
       "max":256,

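With the PrivatePrimaryIpv6 field added above, node details for a HyperPod cluster can carry an IPv6 address. A sketch of reading it; the cluster name and node ID are hypothetical, and the field only appears once this model change ships and the node actually has an IPv6 address assigned.

```python
import boto3

sagemaker = boto3.client("sagemaker")

node = sagemaker.describe_cluster_node(
    ClusterName="my-hyperpod-cluster",  # hypothetical cluster
    NodeId="i-0123456789abcdef0",       # hypothetical node ID
)["NodeDetails"]

# Fall back gracefully when the node has no IPv6 address.
print(node.get("PrivatePrimaryIp"), node.get("PrivatePrimaryIpv6"))
```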
@@ -29898,7 +29903,7 @@
         },
         "Image":{
           "shape":"ContainerImage",
-          "documentation":"The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker."
+          "documentation":"The Amazon EC2 Container Registry path where inference code is stored. If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both registry/repository[:tag] and registry/repository[@digest] image path formats. For more information, see Using Your Own Algorithms with Amazon SageMaker."
         },
         "ImageDigest":{
           "shape":"ImageDigest",

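Since the Image documentation above describes the accepted registry path formats, a small sketch of passing such a path when creating a model. The account, Region, repository, role, and model data location are all hypothetical.

```python
import boto3

sagemaker = boto3.client("sagemaker")

sagemaker.create_model(
    ModelName="my-model",  # hypothetical name
    PrimaryContainer={
        # registry/repository[:tag] form described above; the @digest form also works.
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference-repo:latest",
        "ModelDataUrl": "s3://my-bucket/model/model.tar.gz",  # hypothetical S3 location
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/MySageMakerRole",  # hypothetical role
)
```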
@@ -35839,7 +35844,7 @@
       "members":{
         "S3DataType":{
           "shape":"S3DataType",
-          "documentation":"If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training. If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe."
+          "documentation":"If you choose S3Prefix, S3Uri identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training. If you choose ManifestFile, S3Uri identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training. If you choose AugmentedManifestFile, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. AugmentedManifestFile can only be used if the Channel's input mode is Pipe."
         },
         "S3Uri":{
           "shape":"S3Uri",

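The S3DataType text above distinguishes S3Prefix, ManifestFile, and AugmentedManifestFile. A sketch of the corresponding input-channel structure as it would be passed in a training job's InputDataConfig; the bucket and prefix are hypothetical.

```python
# One input channel using the S3Prefix data type: every object under the prefix
# is used for training. ManifestFile/AugmentedManifestFile would instead point
# S3Uri at the manifest object.
train_channel = {
    "ChannelName": "train",
    "DataSource": {
        "S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-training-bucket/train/",  # hypothetical location
            "S3DataDistributionType": "FullyReplicated",
        }
    },
    "ContentType": "text/csv",
    "InputMode": "File",
}
```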
@@ -35906,7 +35911,7 @@
         },
         "ManifestEtag":{
           "shape":"String",
-          "documentation":"The ETag associated with Manifest S3URI."
+          "documentation":"The ETag associated with Manifest S3 URI."
         }
       },
       "documentation":"Specifies the S3 location of ML model data to deploy."