Update to latest models
aws-sdk-python-automation committed Jan 8, 2025
1 parent 410ede3 commit 283fee0
Showing 6 changed files with 33 additions and 10 deletions.
5 changes: 5 additions & 0 deletions .changes/next-release/api-change-rds-75020.json
@@ -0,0 +1,5 @@
{
"type": "api-change",
"category": "``rds``",
"description": "Updates Amazon RDS documentation to clarify the RestoreDBClusterToPointInTime description."
}
5 changes: 5 additions & 0 deletions .changes/next-release/api-change-route53-49506.json
@@ -0,0 +1,5 @@
{
"type": "api-change",
"category": "``route53``",
"description": "Amazon Route 53 now supports the Asia Pacific (Thailand) Region (ap-southeast-7) for latency records, geoproximity records, and private DNS for Amazon VPCs in that region"
}
5 changes: 5 additions & 0 deletions .changes/next-release/api-change-sagemaker-71036.json
@@ -0,0 +1,5 @@
{
"type": "api-change",
"category": "``sagemaker``",
"description": "Adds support for IPv6 for SageMaker HyperPod cluster nodes."
}
8 changes: 4 additions & 4 deletions botocore/data/rds/2014-10-31/service-2.json
@@ -4258,7 +4258,7 @@
},
"EnableHttpEndpoint":{
"shape":"BooleanOptional",
-"documentation":"<p>Specifies whether to enable the HTTP endpoint for the DB cluster. By default, the HTTP endpoint isn't enabled.</p> <p>When enabled, the HTTP endpoint provides a connectionless web service API (RDS Data API) for running SQL queries on the DB cluster. You can also query your database from inside the RDS console with the RDS query editor.</p> <p>RDS Data API is supported with the following DB clusters:</p> <ul> <li> <p>Aurora PostgreSQL Serverless v2 and provisioned</p> </li> <li> <p>Aurora PostgreSQL and Aurora MySQL Serverless v1</p> </li> </ul> <p>For more information, see <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html\">Using RDS Data API</a> in the <i>Amazon Aurora User Guide</i>.</p> <p>Valid for Cluster Type: Aurora DB clusters only</p>"
+"documentation":"<p>Specifies whether to enable the HTTP endpoint for the DB cluster. By default, the HTTP endpoint isn't enabled.</p> <p>When enabled, the HTTP endpoint provides a connectionless web service API (RDS Data API) for running SQL queries on the DB cluster. You can also query your database from inside the RDS console with the RDS query editor.</p> <p>For more information, see <a href=\"https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html\">Using RDS Data API</a> in the <i>Amazon Aurora User Guide</i>.</p> <p>Valid for Cluster Type: Aurora DB clusters only</p>"
},
"CopyTagsToSnapshot":{
"shape":"BooleanOptional",
@@ -12595,7 +12595,7 @@
},
"Iops":{
"shape":"IntegerOptional",
-"documentation":"<p>The new Provisioned IOPS (I/O operations per second) value for the RDS instance.</p> <p>Changing this setting doesn't result in an outage and the change is applied during the next maintenance window unless the <code>ApplyImmediately</code> parameter is enabled for this request. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect.</p> <p>If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance.</p> <p>Constraints:</p> <ul> <li> <p>For RDS for MariaDB, RDS for MySQL, RDS for Oracle, and RDS for PostgreSQL - The value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.</p> </li> <li> <p>When you increase the Provisioned IOPS, you must also specify the <code>AllocatedStorage</code> parameter. You can use the current value for <code>AllocatedStorage</code>.</p> </li> </ul> <p>Default: Uses existing setting</p>"
+"documentation":"<p>The new Provisioned IOPS (I/O operations per second) value for the RDS instance.</p> <p>Changing this setting doesn't result in an outage and the change is applied during the next maintenance window unless the <code>ApplyImmediately</code> parameter is enabled for this request. If you are migrating from Provisioned IOPS to standard storage, set this value to 0. The DB instance will require a reboot for the change in storage type to take effect.</p> <p>If you choose to migrate your DB instance from using standard storage to Provisioned IOPS (io1), or from Provisioned IOPS to standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance.</p> <p/> <p>Constraints:</p> <ul> <li> <p>For RDS for MariaDB, RDS for MySQL, RDS for Oracle, and RDS for PostgreSQL - The value supplied must be at least 10% greater than the current value. Values that are not at least 10% greater than the existing value are rounded up so that they are 10% greater than the current value.</p> </li> <li> <p>When you increase the Provisioned IOPS, you must also specify the <code>AllocatedStorage</code> parameter. You can use the current value for <code>AllocatedStorage</code>.</p> </li> </ul> <p>Default: Uses existing setting</p>"
},
"OptionGroupName":{
"shape":"String",
@@ -12607,7 +12607,7 @@
},
"StorageType":{
"shape":"String",
-"documentation":"<p>The storage type to associate with the DB instance.</p> <p>If you specify <code>io1</code>, <code>io2</code>, or <code>gp3</code> you must also include a value for the <code>Iops</code> parameter.</p> <p>If you choose to migrate your DB instance from using standard storage to using Provisioned IOPS, or from using Provisioned IOPS to using standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance.</p> <p>Valid Values: <code>gp2 | gp3 | io1 | io2 | standard</code> </p> <p>Default: <code>io1</code>, if the <code>Iops</code> parameter is specified. Otherwise, <code>gp2</code>.</p>"
+"documentation":"<p>The storage type to associate with the DB instance.</p> <p>If you specify <code>io1</code>, <code>io2</code>, or <code>gp3</code> you must also include a value for the <code>Iops</code> parameter.</p> <p>If you choose to migrate your DB instance from using standard storage to gp2 (General Purpose SSD), gp3, or Provisioned IOPS (io1), or from these storage types to standard storage, the process can take time. The duration of the migration depends on several factors such as database load, storage size, storage type (standard or Provisioned IOPS), amount of IOPS provisioned (if any), and the number of prior scale storage operations. Typical migration times are under 24 hours, but the process can take up to several days in some cases. During the migration, the DB instance is available for use, but might experience performance degradation. While the migration takes place, nightly backups for the instance are suspended. No other Amazon RDS operations can take place for the instance, including modifying the instance, rebooting the instance, deleting the instance, creating a read replica for the instance, and creating a DB snapshot of the instance.</p> <p>Valid Values: <code>gp2 | gp3 | io1 | io2 | standard</code> </p> <p>Default: <code>io1</code>, if the <code>Iops</code> parameter is specified. Otherwise, <code>gp2</code>.</p>"
},
"TdeCredentialArn":{
"shape":"String",
@@ -15255,7 +15255,7 @@
},
"EngineMode":{
"shape":"String",
-"documentation":"<p>The engine mode of the new cluster. Specify <code>provisioned</code> or <code>serverless</code>, depending on the type of the cluster you are creating. You can create an Aurora Serverless v1 clone from a provisioned cluster, or a provisioned clone from an Aurora Serverless v1 cluster. To create a clone that is an Aurora Serverless v1 cluster, the original cluster must be an Aurora Serverless v1 cluster or an encrypted provisioned cluster.</p> <p>Valid for: Aurora DB clusters only</p>"
+"documentation":"<p>The engine mode of the new cluster. Specify <code>provisioned</code> or <code>serverless</code>, depending on the type of the cluster you are creating. You can create an Aurora Serverless v1 clone from a provisioned cluster, or a provisioned clone from an Aurora Serverless v1 cluster. To create a clone that is an Aurora Serverless v1 cluster, the original cluster must be an Aurora Serverless v1 cluster or an encrypted provisioned cluster. To create a full copy that is an Aurora Serverless v1 cluster, specify the engine mode <code>serverless</code>.</p> <p>Valid for: Aurora DB clusters only</p>"
},
"DBClusterInstanceClass":{
"shape":"String",
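The `Iops` constraint documented above (requested values less than 10% above the current value are rounded up so they are 10% greater) can be sketched as a small helper. This is an illustrative function, not part of botocore, and the exact integer rounding RDS applies is an assumption:

```python
def effective_iops(current_iops: int, requested_iops: int) -> int:
    """Model the documented ModifyDBInstance Iops constraint.

    Hypothetical helper: values not at least 10% greater than the
    current value are rounded up to 10% greater. Integer ceiling
    rounding is assumed here; RDS may round differently.
    """
    # Ceiling of 110% of the current value, using integer math
    # to avoid floating-point rounding surprises.
    minimum_new = -(-current_iops * 11 // 10)
    return requested_iops if requested_iops >= minimum_new else minimum_new
```

For example, raising a 1000-IOPS instance to 1050 would (per the documented rule) be rounded up to 1100, while a request of 2000 passes through unchanged.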
9 changes: 6 additions & 3 deletions botocore/data/route53/2013-04-01/service-2.json
@@ -1796,7 +1796,8 @@
"ap-southeast-4",
"il-central-1",
"ca-west-1",
-"ap-southeast-5"
+"ap-southeast-5",
+"ap-southeast-7"
],
"max":64,
"min":1
@@ -5562,7 +5563,8 @@
"ap-southeast-4",
"il-central-1",
"ca-west-1",
-"ap-southeast-5"
+"ap-southeast-5",
+"ap-southeast-7"
],
"max":64,
"min":1
@@ -6441,7 +6443,8 @@
"ap-southeast-4",
"il-central-1",
"ca-west-1",
-"ap-southeast-5"
+"ap-southeast-5",
+"ap-southeast-7"
],
"max":64,
"min":1
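The enum additions above make `ap-southeast-7` a valid `Region` for latency records. A minimal sketch of building the `ChangeBatch` payload that a `ChangeResourceRecordSets` call would take — the helper function and the truncated region set are hypothetical, and no request is actually sent:

```python
# Small subset of the region enum from the model above (illustrative only).
LATENCY_REGIONS = {"us-east-1", "ap-southeast-5", "ap-southeast-7"}

def latency_record_change(name: str, region: str, ip: str) -> dict:
    """Build a ChangeBatch entry for a latency-based A record.

    Hypothetical helper: validates the region locally, then returns
    the dict shape the Route 53 API expects for this change.
    """
    if region not in LATENCY_REGIONS:
        raise ValueError(f"unsupported latency region: {region}")
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "SetIdentifier": f"latency-{region}",
                "Region": region,
                "TTL": 60,
                "ResourceRecords": [{"Value": ip}],
            },
        }]
    }
```

In real use the returned dict would be passed as the `ChangeBatch` argument of a boto3 `route53` client's `change_resource_record_sets` call along with a `HostedZoneId`.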
11 changes: 8 additions & 3 deletions botocore/data/sagemaker/2017-07-24/service-2.json
@@ -7947,6 +7947,10 @@
"shape":"ClusterPrivatePrimaryIp",
"documentation":"<p>The private primary IP address of the SageMaker HyperPod cluster node.</p>"
},
+"PrivatePrimaryIpv6":{
+"shape":"ClusterPrivatePrimaryIpv6",
+"documentation":"<p>The private primary IPv6 address of the SageMaker HyperPod cluster node.</p>"
+},
"PrivateDnsHostname":{
"shape":"ClusterPrivateDnsHostname",
"documentation":"<p>The private DNS hostname of the SageMaker HyperPod cluster node.</p>"
@@ -8048,6 +8052,7 @@
"type":"string",
"pattern":"^((25[0-5]|(2[0-4]|1\\d|[1-9]|)\\d)\\.?\\b){4}$"
},
+"ClusterPrivatePrimaryIpv6":{"type":"string"},
"ClusterSchedulerConfigArn":{
"type":"string",
"max":256,
@@ -29898,7 +29903,7 @@
},
"Image":{
"shape":"ContainerImage",
-"documentation":"<p>The Amazon EC2 Container Registry (Amazon ECR) path where inference code is stored.</p> <p>If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both <code>registry/repository[:tag]</code> and <code>registry/repository[@digest]</code> image path formats. For more information, see <a href=\"https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html\">Using Your Own Algorithms with Amazon SageMaker</a>.</p>"
+"documentation":"<p>The Amazon EC2 Container Registry path where inference code is stored.</p> <p>If you are using your own custom algorithm instead of an algorithm provided by SageMaker, the inference code must meet SageMaker requirements. SageMaker supports both <code>registry/repository[:tag]</code> and <code>registry/repository[@digest]</code> image path formats. For more information, see <a href=\"https://docs.aws.amazon.com/sagemaker/latest/dg/your-algorithms.html\">Using Your Own Algorithms with Amazon SageMaker</a>.</p>"
},
"ImageDigest":{
"shape":"ImageDigest",
@@ -35839,7 +35844,7 @@
"members":{
"S3DataType":{
"shape":"S3DataType",
-"documentation":"<p>If you choose <code>S3Prefix</code>, <code>S3Uri</code> identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training. </p> <p>If you choose <code>ManifestFile</code>, <code>S3Uri</code> identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training. </p> <p>If you choose <code>AugmentedManifestFile</code>, S3Uri identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. <code>AugmentedManifestFile</code> can only be used if the Channel's input mode is <code>Pipe</code>.</p>"
+"documentation":"<p>If you choose <code>S3Prefix</code>, <code>S3Uri</code> identifies a key name prefix. SageMaker uses all objects that match the specified key name prefix for model training. </p> <p>If you choose <code>ManifestFile</code>, <code>S3Uri</code> identifies an object that is a manifest file containing a list of object keys that you want SageMaker to use for model training. </p> <p>If you choose <code>AugmentedManifestFile</code>, <code>S3Uri</code> identifies an object that is an augmented manifest file in JSON lines format. This file contains the data you want to use for model training. <code>AugmentedManifestFile</code> can only be used if the Channel's input mode is <code>Pipe</code>.</p>"
},
"S3Uri":{
"shape":"S3Uri",
@@ -35906,7 +35911,7 @@
},
"ManifestEtag":{
"shape":"String",
-"documentation":"<p>The ETag associated with Manifest S3URI.</p>"
+"documentation":"<p>The ETag associated with Manifest S3 URI.</p>"
}
},
"documentation":"<p>Specifies the S3 location of ML model data to deploy.</p>"
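The new `ClusterPrivatePrimaryIpv6` shape above is an unconstrained string, while the existing IPv4 shape carries a regex pattern. A hedged sketch of client-side validation for both — the helper names are hypothetical, and Python's `ipaddress` module stands in for IPv6 checking since the model defines no pattern:

```python
import ipaddress
import re

# Pattern from the ClusterPrivatePrimaryIp shape in the model above.
IPV4_PATTERN = re.compile(r"^((25[0-5]|(2[0-4]|1\d|[1-9]|)\d)\.?\b){4}$")

def is_valid_node_ipv4(value: str) -> bool:
    """Check a string against the model's IPv4 regex."""
    return IPV4_PATTERN.match(value) is not None

def is_valid_node_ipv6(value: str) -> bool:
    """Check a string is a parseable IPv6 address (assumed check;
    the model itself imposes no constraint on this shape)."""
    try:
        ipaddress.IPv6Address(value)
        return True
    except ValueError:
        return False
```

For example, `is_valid_node_ipv4("10.0.0.1")` holds while `"256.1.1.1"` is rejected by the pattern, and `is_valid_node_ipv6("2001:db8::1")` holds while an IPv4 literal does not parse as IPv6.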
