diff --git a/codegen/sdk-codegen/aws-models/ec2.json b/codegen/sdk-codegen/aws-models/ec2.json index fcfa79e9cf3..9bb20e4a893 100644 --- a/codegen/sdk-codegen/aws-models/ec2.json +++ b/codegen/sdk-codegen/aws-models/ec2.json @@ -9558,7 +9558,7 @@ "target": "com.amazonaws.ec2#ByoipCidrState", "traits": { "aws.protocols#ec2QueryName": "State", - "smithy.api#documentation": "
The state of the address pool.
", + "smithy.api#documentation": "The state of the address range.
    advertised - The address range is being advertised to the internet by Amazon Web Services.
    deprovisioned - The address range is deprovisioned.
    failed-deprovision - The request to deprovision the address range was unsuccessful. Ensure that all EIPs from the range have been deallocated and try again.
    failed-provision - The request to provision the address range was unsuccessful.
    pending-deprovision - You've submitted a request to deprovision an address range and it's pending.
    pending-provision - You've submitted a request to provision an address range and it's pending.
    provisioned - The address range is provisioned and can be advertised. The range is not currently advertised.
    provisioned-not-publicly-advertisable - The address range is provisioned and cannot be advertised.
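A minimal boto3 sketch of how the State field documented above surfaces; the MaxResults bound is assumed from the public DescribeByoipCidrs API, and no real CIDRs are implied:

    import boto3

    ec2 = boto3.client("ec2")

    # List BYOIP CIDRs and report the state documented above
    # (advertised, provisioned, pending-provision, and so on).
    resp = ec2.describe_byoip_cidrs(MaxResults=100)
    for cidr in resp["ByoipCidrs"]:
        print(cidr["Cidr"], cidr["State"], cidr.get("StatusMessage", ""))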
Indicates whether your client's IP address is preserved as the source. The value is true or false. If true, your client's IP address is used when you connect to a resource. If false, the elastic network interface IP address is used when you connect to a resource. Default: true
", + "smithy.api#documentation": "Indicates whether the client IP address is preserved as the source. The following are the possible values:
    true - Use the client IP address as the source.
    false - Use the network interface IP address as the source.
    Default: false
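A hedged sketch, assuming this trait is the PreserveClientIp option of EC2 Instance Connect Endpoints: since this change flips the documented default from true to false, callers who relied on the old default may want to pin the value explicitly. The subnet ID is a placeholder:

    import boto3

    ec2 = boto3.client("ec2")

    # Pin client IP preservation explicitly rather than relying on the
    # default, which this model change documents as false.
    # subnet-0abc1234def567890 is a placeholder.
    resp = ec2.create_instance_connect_endpoint(
        SubnetId="subnet-0abc1234def567890",
        PreserveClientIp=True,
    )
    print(resp["InstanceConnectEndpoint"]["State"])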
The Amazon Resource Name (ARN) of the Outpost.
" + "smithy.api#documentation": "The Amazon Resource Name (ARN) of the Outpost on which to create the volume.
\nIf you intend to use a volume with an instance running on an outpost, then you must \n create the volume on the same outpost as the instance. You can't use a volume created \n in an Amazon Web Services Region with an instance on an Amazon Web Services outpost, or the other way around.
" } }, "Size": { @@ -40956,7 +40956,7 @@ "VolumeIds": { "target": "com.amazonaws.ec2#VolumeIdStringList", "traits": { - "smithy.api#documentation": "The volume IDs.
", + "smithy.api#documentation": "The volume IDs. If not specified, then all volumes are included in the response.
", "smithy.api#xmlName": "VolumeId" } }, @@ -55619,6 +55619,12 @@ "com.amazonaws.ec2#HostTenancy": { "type": "enum", "members": { + "default": { + "target": "smithy.api#Unit", + "traits": { + "smithy.api#enumValue": "default" + } + }, "dedicated": { "target": "smithy.api#Unit", "traits": { @@ -60658,7 +60664,7 @@ "target": "com.amazonaws.ec2#Integer", "traits": { "aws.protocols#ec2QueryName": "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice", - "smithy.api#documentation": "[Price protection] The price protection threshold for Spot Instances, as a percentage of an\n identified On-Demand price. The identified On-Demand price is the price of the lowest\n priced current generation C, M, or R instance type with your specified attributes. If no\n current generation C, M, or R instance type matches your attributes, then the identified\n price is from the lowest priced current generation instance types, and failing that, from\n the lowest priced previous generation instance types that match your attributes. When Amazon EC2\n selects instance types with your attributes, it will exclude instance types whose price\n exceeds your specified threshold.
\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.
\nIf you set DesiredCapacityType
to vcpu
or\n memory-mib
, the price protection threshold is based on the per vCPU or per\n memory price instead of the per instance price.
Only one of SpotMaxPricePercentageOverLowestPrice
or\n MaxSpotPriceAsPercentageOfOptimalOnDemandPrice
can be specified. If you\n don't specify either, Amazon EC2 will automatically apply optimal price protection to\n consistently select from a wide range of instance types. To indicate no price protection\n threshold for Spot Instances, meaning you want to consider all instance types that match your\n attributes, include one of these parameters and specify a high value, such as\n 999999
.
[Price protection] The price protection threshold for Spot Instances, as a percentage of an\n identified On-Demand price. The identified On-Demand price is the price of the lowest\n priced current generation C, M, or R instance type with your specified attributes. If no\n current generation C, M, or R instance type matches your attributes, then the identified\n price is from the lowest priced current generation instance types, and failing that, from\n the lowest priced previous generation instance types that match your attributes. When Amazon EC2\n selects instance types with your attributes, it will exclude instance types whose price\n exceeds your specified threshold.
\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.
\nIf you set TargetCapacityUnitType
to vcpu
or\n memory-mib
, the price protection threshold is based on the per vCPU or per\n memory price instead of the per instance price.
Only one of SpotMaxPricePercentageOverLowestPrice
or\n MaxSpotPriceAsPercentageOfOptimalOnDemandPrice
can be specified. If you\n don't specify either, Amazon EC2 will automatically apply optimal price protection to\n consistently select from a wide range of instance types. To indicate no price protection\n threshold for Spot Instances, meaning you want to consider all instance types that match your\n attributes, include one of these parameters and specify a high value, such as\n 999999
.
[Price protection] The price protection threshold for Spot Instances, as a percentage of an\n identified On-Demand price. The identified On-Demand price is the price of the lowest\n priced current generation C, M, or R instance type with your specified attributes. If no\n current generation C, M, or R instance type matches your attributes, then the identified\n price is from the lowest priced current generation instance types, and failing that, from\n the lowest priced previous generation instance types that match your attributes. When Amazon EC2\n selects instance types with your attributes, it will exclude instance types whose price\n exceeds your specified threshold.
\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.
\nIf you set DesiredCapacityType
to vcpu
or\n memory-mib
, the price protection threshold is based on the per vCPU or per\n memory price instead of the per instance price.
Only one of SpotMaxPricePercentageOverLowestPrice
or\n MaxSpotPriceAsPercentageOfOptimalOnDemandPrice
can be specified. If you\n don't specify either, Amazon EC2 will automatically apply optimal price protection to\n consistently select from a wide range of instance types. To indicate no price protection\n threshold for Spot Instances, meaning you want to consider all instance types that match your\n attributes, include one of these parameters and specify a high value, such as\n 999999
.
[Price protection] The price protection threshold for Spot Instances, as a percentage of an\n identified On-Demand price. The identified On-Demand price is the price of the lowest\n priced current generation C, M, or R instance type with your specified attributes. If no\n current generation C, M, or R instance type matches your attributes, then the identified\n price is from the lowest priced current generation instance types, and failing that, from\n the lowest priced previous generation instance types that match your attributes. When Amazon EC2\n selects instance types with your attributes, it will exclude instance types whose price\n exceeds your specified threshold.
\nThe parameter accepts an integer, which Amazon EC2 interprets as a percentage.
\nIf you set TargetCapacityUnitType
to vcpu
or\n memory-mib
, the price protection threshold is based on the per vCPU or per\n memory price instead of the per instance price.
Only one of SpotMaxPricePercentageOverLowestPrice
or\n MaxSpotPriceAsPercentageOfOptimalOnDemandPrice
can be specified. If you\n don't specify either, Amazon EC2 will automatically apply optimal price protection to\n consistently select from a wide range of instance types. To indicate no price protection\n threshold for Spot Instances, meaning you want to consider all instance types that match your\n attributes, include one of these parameters and specify a high value, such as\n 999999
.
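A sketch of where MaxSpotPriceAsPercentageOfOptimalOnDemandPrice sits in an attribute-based request (boto3 CreateFleet here; the launch template ID is a placeholder, and the per-instance interpretation applies because no TargetCapacityUnitType is set):

    import boto3

    ec2 = boto3.client("ec2")

    # Exclude Spot instance types priced above 50% of the identified
    # On-Demand price. lt-0123456789abcdef0 is a placeholder.
    resp = ec2.create_fleet(
        Type="instant",
        TargetCapacitySpecification={
            "TotalTargetCapacity": 2,
            "DefaultTargetCapacityType": "spot",
        },
        LaunchTemplateConfigs=[{
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",
                "Version": "$Default",
            },
            "Overrides": [{
                "InstanceRequirements": {
                    "VCpuCount": {"Min": 2, "Max": 8},
                    "MemoryMiB": {"Min": 4096},
                    "MaxSpotPriceAsPercentageOfOptimalOnDemandPrice": 50,
                },
            }],
        }],
    )
    print(resp["FleetId"])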
Changes the instance's user data to the specified value. If you are using an Amazon Web Services SDK or command line tool, base64-encoding is performed for you, and you\n can load the text from a file. Otherwise, you must provide base64-encoded text.
", + "smithy.api#documentation": "Changes the instance's user data to the specified value. User data must be base64-encoded.\n Depending on the tool or SDK that you're using, the base64-encoding might be performed for you.\n For more information, see Work with instance user data.
", "smithy.api#xmlName": "userData" } }, @@ -92438,7 +92444,7 @@ "UserData": { "target": "com.amazonaws.ec2#RunInstancesUserData", "traits": { - "smithy.api#documentation": "The user data script to make available to the instance. For more information, see\n Run\n commands on your Amazon EC2 instance at launch in the Amazon EC2 User\n Guide. If you are using a command line tool, base64-encoding is performed\n for you, and you can load the text from a file. Otherwise, you must provide\n base64-encoded text. User data is limited to 16 KB.
" + "smithy.api#documentation": "The user data to make available to the instance. User data must be base64-encoded.\n Depending on the tool or SDK that you're using, the base64-encoding might be performed for you.\n For more information, see Work with instance user data.
" } }, "AdditionalInfo": { diff --git a/codegen/sdk-codegen/aws-models/fms.json b/codegen/sdk-codegen/aws-models/fms.json index b6d87a8a357..2ed2d7eeab8 100644 --- a/codegen/sdk-codegen/aws-models/fms.json +++ b/codegen/sdk-codegen/aws-models/fms.json @@ -5145,7 +5145,7 @@ "traits": { "smithy.api#length": { "min": 1, - "max": 10000 + "max": 30000 }, "smithy.api#pattern": "^((?!\\\\[nr]).)+$" } diff --git a/codegen/sdk-codegen/aws-models/s3.json b/codegen/sdk-codegen/aws-models/s3.json index 3b48bc38581..9bfb1d7b40d 100644 --- a/codegen/sdk-codegen/aws-models/s3.json +++ b/codegen/sdk-codegen/aws-models/s3.json @@ -18694,7 +18694,7 @@ } ], "traits": { - "smithy.api#documentation": "Creates a copy of an object that is already stored in Amazon S3.
\nYou can store individual objects of up to 5 TB in Amazon S3. You create a copy of your\n object up to 5 GB in size in a single atomic action using this API. However, to copy an\n object greater than 5 GB, you must use the multipart upload Upload Part - Copy\n (UploadPartCopy) API. For more information, see Copy Object Using the\n REST Multipart Upload API.
\nYou can copy individual objects between general purpose buckets, between directory buckets, and \n between general purpose buckets and directory buckets.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n
. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
Both the\n Region that you want to copy the object from and the Region that you want to copy the\n object to must be enabled for your account. For more information about how to enable a Region for your account, see Enable \n or disable a Region for standalone accounts in the\n Amazon Web Services Account Management Guide.
\nAmazon S3 transfer acceleration does not support cross-Region copies. If you request a\n cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad\n Request
error. For more information, see Transfer\n Acceleration.
All CopyObject
requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz-
prefix, including\n x-amz-copy-source
, must be signed. For more information, see REST Authentication.
\n Directory buckets - You must use the IAM credentials to authenticate and authorize your access to the CopyObject
API operation, instead of using the \n temporary security credentials through the CreateSession
API operation.
Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.
\nYou must have\n read access to the source object and write\n access to the destination bucket.
\n\n General purpose bucket permissions -\n You must have permissions in an IAM policy based on the source and destination\n bucket types in a CopyObject
operation.
If the source object is in a general purpose bucket, you must have\n \n s3:GetObject
\n \n permission to read the source object that is being copied.
If the destination bucket is a general purpose bucket, you must have\n \n s3:PutObject
\n \n permission to write the object copy to the destination bucket.
\n Directory bucket permissions -\n You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination\n bucket types in a CopyObject
operation.
If the source object that you want to copy is in a\n directory bucket, you must have the \n s3express:CreateSession
\n permission in\n the Action
element of a policy to read the object. By default, the session is in the ReadWrite
mode. If you want to restrict the access, you can explicitly set the s3express:SessionMode
condition key to ReadOnly
on the copy source bucket.
If the copy destination is a directory bucket, you must have the \n s3express:CreateSession
\n permission in the\n Action
element of a policy to write the object\n to the destination. The s3express:SessionMode
condition\n key can't be set to ReadOnly
on the copy destination bucket.
For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the\n Amazon S3 User Guide.
\nWhen the request is an HTTP 1.1 request, the response is chunk encoded. \n When the request is not an HTTP 1.1 request, the response would not contain the Content-Length
. \n You always need to read the entire response body to check if the copy succeeds. \n to keep the connection alive while we copy the data.
If the copy is successful, you receive a response with information about the copied\n object.
\nA copy request might return an error when Amazon S3 receives the copy request or while Amazon S3\n is copying the files. A 200 OK
response can contain either a success or an error.
If the error occurs before the copy action starts, you receive a\n standard Amazon S3 error.
\nIf the error occurs during the copy operation, the error response is\n embedded in the 200 OK
response. For example, in a cross-region copy, you \n may encounter throttling and receive a 200 OK
response. \n For more information, see Resolve \n the Error 200 response when copying objects to Amazon S3. \n The 200 OK
status code means the copy was accepted, but \n it doesn't mean the copy is complete. Another example is \n when you disconnect from Amazon S3 before the copy is complete, Amazon S3 might cancel the copy and you may receive a 200 OK
response. \n You must stay connected to Amazon S3 until the entire response is successfully received and processed.
If you call this API operation directly, make\n sure to design your application to parse the content of the response and handle it\n appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the\n embedded error and apply error handling per your configuration settings (including\n automatically retrying the request as appropriate). If the condition persists, the SDKs\n throw an exception (or, for the SDKs that don't use exceptions, they return an \n error).
\nThe copy request charge is based on the storage class and Region that you specify for\n the destination object. The request can also result in a data retrieval charge for the\n source if the source storage class bills for data retrieval. If the copy source is in a different region, the data transfer is billed to the copy source account. For pricing information, see\n Amazon S3 pricing.
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com
.
The following operations are related to CopyObject
:
Creates a copy of an object that is already stored in Amazon S3.
\nYou can store individual objects of up to 5 TB in Amazon S3. You create a copy of your\n object up to 5 GB in size in a single atomic action using this API. However, to copy an\n object greater than 5 GB, you must use the multipart upload Upload Part - Copy\n (UploadPartCopy) API. For more information, see Copy Object Using the\n REST Multipart Upload API.
\nYou can copy individual objects between general purpose buckets, between directory buckets, and \n between general purpose buckets and directory buckets.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n
. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
Both the\n Region that you want to copy the object from and the Region that you want to copy the\n object to must be enabled for your account. For more information about how to enable a Region for your account, see Enable \n or disable a Region for standalone accounts in the\n Amazon Web Services Account Management Guide.
\nAmazon S3 transfer acceleration does not support cross-Region copies. If you request a\n cross-Region copy using a transfer acceleration endpoint, you get a 400 Bad\n Request
error. For more information, see Transfer\n Acceleration.
All CopyObject
requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz-
prefix, including\n x-amz-copy-source
, must be signed. For more information, see REST Authentication.
\n Directory buckets - You must use the IAM credentials to authenticate and authorize your access to the CopyObject
API operation, instead of using the \n temporary security credentials through the CreateSession
API operation.
Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.
\nYou must have\n read access to the source object and write\n access to the destination bucket.
\n\n General purpose bucket permissions -\n You must have permissions in an IAM policy based on the source and destination\n bucket types in a CopyObject
operation.
If the source object is in a general purpose bucket, you must have\n \n s3:GetObject
\n \n permission to read the source object that is being copied.
If the destination bucket is a general purpose bucket, you must have\n \n s3:PutObject
\n \n permission to write the object copy to the destination bucket.
\n Directory bucket permissions -\n You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination\n bucket types in a CopyObject
operation.
If the source object that you want to copy is in a\n directory bucket, you must have the \n s3express:CreateSession
\n permission in\n the Action
element of a policy to read the object. By default, the session is in the ReadWrite
mode. If you want to restrict the access, you can explicitly set the s3express:SessionMode
condition key to ReadOnly
on the copy source bucket.
If the copy destination is a directory bucket, you must have the \n s3express:CreateSession
\n permission in the\n Action
element of a policy to write the object\n to the destination. The s3express:SessionMode
condition\n key can't be set to ReadOnly
on the copy destination bucket.
For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the\n Amazon S3 User Guide.
\nWhen the request is an HTTP 1.1 request, the response is chunk encoded. When\n the request is not an HTTP 1.1 request, the response would not contain the\n Content-Length
. You always need to read the entire response body\n to check if the copy succeeds.
If the copy is successful, you receive a response with information about the copied\n object.
\nA copy request might return an error when Amazon S3 receives the copy request or while Amazon S3\n is copying the files. A 200 OK
response can contain either a success or an error.
If the error occurs before the copy action starts, you receive a\n standard Amazon S3 error.
\nIf the error occurs during the copy operation, the error response is\n embedded in the 200 OK
response. For example, in a cross-region copy, you \n may encounter throttling and receive a 200 OK
response. \n For more information, see Resolve \n the Error 200 response when copying objects to Amazon S3. \n The 200 OK
status code means the copy was accepted, but \n it doesn't mean the copy is complete. Another example is \n when you disconnect from Amazon S3 before the copy is complete, Amazon S3 might cancel the copy and you may receive a 200 OK
response. \n You must stay connected to Amazon S3 until the entire response is successfully received and processed.
If you call this API operation directly, make\n sure to design your application to parse the content of the response and handle it\n appropriately. If you use Amazon Web Services SDKs, SDKs handle this condition. The SDKs detect the\n embedded error and apply error handling per your configuration settings (including\n automatically retrying the request as appropriate). If the condition persists, the SDKs\n throw an exception (or, for the SDKs that don't use exceptions, they return an \n error).
\nThe copy request charge is based on the storage class and Region that you specify for\n the destination object. The request can also result in a data retrieval charge for the\n source if the source storage class bills for data retrieval. If the copy source is in a different region, the data transfer is billed to the copy source account. For pricing information, see\n Amazon S3 pricing.
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com
.
The following operations are related to CopyObject
:
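A minimal boto3 sketch of the single-request copy described above; bucket and key names are placeholders. Note that boto3 surfaces the error-embedded-in-200 case as a ClientError:

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")

    # Single-request copy; objects over 5 GB need UploadPartCopy instead.
    try:
        resp = s3.copy_object(
            Bucket="amzn-s3-demo-destination-bucket",  # placeholder
            Key="backup/report.csv",
            CopySource={"Bucket": "amzn-s3-demo-source-bucket", "Key": "report.csv"},
        )
        print(resp["CopyObjectResult"]["ETag"])
    except ClientError as err:
        # Covers both standard errors and the error-in-200 case above.
        print("copy failed:", err)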
This action creates an Amazon S3 bucket. To create an Amazon S3 on Outposts bucket, see \n CreateBucket
\n .
Creates a new S3 bucket. To create a bucket, you must set up Amazon S3 and have a\n valid Amazon Web Services Access Key ID to authenticate requests. Anonymous requests are never allowed to\n create buckets. By creating the bucket, you become the bucket owner.
\nThere are two types of buckets: general purpose buckets and directory buckets. For more\n information about these bucket types, see Creating, configuring, and\n working with Amazon S3 buckets in the Amazon S3 User Guide.
\n\n General purpose buckets - If you send your CreateBucket
request to the s3.amazonaws.com
global endpoint,\n the request goes to the us-east-1
Region. So the signature\n calculations in Signature Version 4 must use us-east-1
as the Region, even\n if the location constraint in the request specifies another Region where the bucket is\n to be created. If you create a bucket in a Region other than US East (N. Virginia), your\n application must be able to handle 307 redirect. For more information, see Virtual hosting of\n buckets in the Amazon S3 User Guide.
\n Directory buckets - For directory buckets, you must make requests for this API operation to the Regional endpoint. These endpoints support path-style requests in the format https://s3express-control.region_code.amazonaws.com/bucket-name\n
. Virtual-hosted-style requests aren't supported. \nFor more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
\n General purpose bucket permissions - In addition to the s3:CreateBucket
permission, the following permissions are\n required in a policy when your CreateBucket
request includes specific\n headers:
\n Access control lists (ACLs) - In your CreateBucket
request, if you specify an access control list (ACL) \n and set it to public-read
, public-read-write
,\n authenticated-read
, or if you explicitly specify any other custom ACLs, both s3:CreateBucket
and\n s3:PutBucketAcl
permissions are required. In your CreateBucket
request, if you set the ACL to private
, \n or if you don't specify any ACLs, only the s3:CreateBucket
permission is required.\n
\n Object Lock - In your\n CreateBucket
request, if you set \n x-amz-bucket-object-lock-enabled
to true, the \n s3:PutBucketObjectLockConfiguration
and\n s3:PutBucketVersioning
permissions are required.
\n S3 Object Ownership - If your\n CreateBucket
request includes the\n x-amz-object-ownership
header, then the\n s3:PutBucketOwnershipControls
permission is required.
To set an ACL on a bucket as part of a\n CreateBucket
request, you must explicitly set S3\n Object Ownership for the bucket to a different value than the\n default, BucketOwnerEnforced
. Additionally, if your\n desired bucket ACL grants public access, you must first create the\n bucket (without the bucket ACL) and then explicitly disable Block\n Public Access on the bucket before using PutBucketAcl
\n to set the ACL. If you try to create a bucket with a public ACL,\n the request will fail.
For the majority of modern use cases in S3, we recommend\n that you keep all Block Public Access settings enabled and keep\n ACLs disabled. If you would like to share data with users outside\n of your account, you can use bucket policies as needed. For more\n information, see Controlling ownership of objects and disabling ACLs for your\n bucket and Blocking public access to your Amazon S3 storage in\n the Amazon S3 User Guide.
\n\n S3 Block Public Access - If your\n specific use case requires granting public access to your S3 resources, you\n can disable Block Public Access. Specifically, you can create a new bucket with Block\n Public Access enabled, then separately call the \n DeletePublicAccessBlock
\n API. To use this operation, you must have the\n s3:PutBucketPublicAccessBlock
permission. For more information about S3 Block Public\n Access, see Blocking\n public access to your Amazon S3 storage in the\n Amazon S3 User Guide.
\n Directory bucket permissions - You must have the s3express:CreateBucket
permission in an IAM identity-based policy instead of a bucket policy. Cross-account access to this API operation isn't supported. This operation can only be performed by the Amazon Web Services account that owns the resource. For more information about directory bucket policies and permissions, see Amazon Web Services Identity and Access Management (IAM) for S3 Express One Zone in the Amazon S3 User Guide.
The permissions for ACLs, Object Lock, S3 Object Ownership, and S3 Block Public Access are not supported for directory buckets. \n For directory buckets, all Block Public Access settings are enabled at the bucket level and S3 \n Object Ownership is set to Bucket owner enforced (ACLs disabled). These settings can't be modified.\n
\nFor more information about permissions for creating and working with \n directory buckets, see Directory buckets in the Amazon S3 User Guide. \n For more information about supported S3 features for directory buckets, see Features of S3 Express One Zone in the Amazon S3 User Guide.
\n\n Directory buckets - The HTTP Host header syntax is s3express-control.region.amazonaws.com
.
The following operations are related to CreateBucket
:
\n PutObject\n
\n\n DeleteBucket\n
\nThe established temporary security credentials for the created session..
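A boto3 sketch of the Region-handling note above (the bucket name is a placeholder): outside us-east-1 the Region must be named in the request, while for us-east-1 the configuration block is omitted entirely:

    import boto3

    s3 = boto3.client("s3", region_name="eu-west-1")

    # Outside us-east-1, name the Region explicitly;
    # for us-east-1, omit CreateBucketConfiguration.
    s3.create_bucket(
        Bucket="amzn-s3-demo-bucket",  # placeholder
        CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
    )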
", + "smithy.api#documentation": "The established temporary security credentials for the created session.
", "smithy.api#required": {}, "smithy.api#xmlName": "Credentials" } @@ -20850,26 +20850,26 @@ "smithy.api#documentation": "This operation is not supported by directory buckets.
\nRemoves the entire tag set from the specified object. For more information about\n managing object tags, see Object Tagging.
\nTo use this operation, you must have permission to perform the\n s3:DeleteObjectTagging
action.
To delete tags of a specific object version, add the versionId
query\n parameter in the request. You will need permission for the\n s3:DeleteObjectVersionTagging
action.
The following operations are related to DeleteObjectTagging
:
\n PutObjectTagging\n
\n\n GetObjectTagging\n
\nThis operation enables you to delete multiple objects from a bucket using a single HTTP\n request. If you know the object keys that you want to delete, then this operation provides a\n suitable alternative to sending individual delete requests, reducing per-request\n overhead.
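A short boto3 sketch of the versioned delete described above; bucket, key, and version ID are placeholders:

    import boto3

    s3 = boto3.client("s3")

    # Remove the tag set from a specific version; passing VersionId
    # requires s3:DeleteObjectVersionTagging rather than
    # s3:DeleteObjectTagging.
    s3.delete_object_tagging(
        Bucket="amzn-s3-demo-bucket",  # placeholder
        Key="photos/cat.jpg",          # placeholder
        VersionId="3HL4kqtJlcpXroDTDmJ+rmSpXd3dIbrHY+MTRCxf3vjVBH40Nr8X8gdRQBpUMLUo",
    )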
\nThe request can contain a list of up to 1000 keys that you want to delete. In the XML, you\n provide the object key names, and optionally, version IDs if you want to delete a specific\n version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a\n delete operation and returns the result of that delete, success or failure, in the response.\n Note that if the object specified in the request is not found, Amazon S3 returns the result as\n deleted.
\n\n Directory buckets - S3 Versioning isn't enabled and supported for directory buckets.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n
. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
The operation supports two modes for the response: verbose and quiet. By default, the\n operation uses verbose mode in which the response includes the result of deletion of each key\n in your request. In quiet mode the response includes only keys where the delete operation \n encountered an error. For a successful deletion in a quiet mode, the operation does not return any information\n about the delete in the response body.
\nWhen performing this action on an MFA Delete enabled bucket, that attempts to delete any\n versioned objects, you must include an MFA token. If you do not provide one, the entire\n request will fail, even if there are non-versioned objects you are trying to delete. If you\n provide an invalid token, whether there are versioned keys in the request or not, the\n entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA\n Delete in the Amazon S3\n User Guide.
\n\n Directory buckets - MFA delete is not supported by directory buckets.
\n\n General purpose bucket permissions - The following permissions are required in your policies when your \n DeleteObjects
request includes specific headers.
\n \n s3:DeleteObject
\n - To delete an object from a bucket, you must always specify the s3:DeleteObject
permission.
\n \n s3:DeleteObjectVersion
\n - To delete a specific version of an object from a versiong-enabled bucket, you must specify the s3:DeleteObjectVersion
permission.
\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession
\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession
permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession
API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession
API call to generate a new session token for use. \nAmazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession
\n .
\n General purpose bucket - The Content-MD5 request header is required for all Multi-Object Delete requests. Amazon S3\n uses the header value to ensure that your request body has not been altered in\n transit.
\n\n Directory bucket - The Content-MD5 request header or a additional checksum request header \n (including x-amz-checksum-crc32
, x-amz-checksum-crc32c
, x-amz-checksum-sha1
, or \n x-amz-checksum-sha256
) is required for all Multi-Object Delete requests.
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com
.
The following operations are related to DeleteObjects
:
\n UploadPart\n
\n\n ListParts\n
\n\n AbortMultipartUpload\n
\nThis operation enables you to delete multiple objects from a bucket using a single HTTP\n request. If you know the object keys that you want to delete, then this operation provides a\n suitable alternative to sending individual delete requests, reducing per-request\n overhead.
\nThe request can contain a list of up to 1000 keys that you want to delete. In the XML, you\n provide the object key names, and optionally, version IDs if you want to delete a specific\n version of the object from a versioning-enabled bucket. For each key, Amazon S3 performs a\n delete operation and returns the result of that delete, success or failure, in the response.\n Note that if the object specified in the request is not found, Amazon S3 returns the result as\n deleted.
\n\n Directory buckets - S3 Versioning isn't enabled and supported for directory buckets.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n
. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
The operation supports two modes for the response: verbose and quiet. By default, the\n operation uses verbose mode in which the response includes the result of deletion of each key\n in your request. In quiet mode the response includes only keys where the delete operation \n encountered an error. For a successful deletion in a quiet mode, the operation does not return any information\n about the delete in the response body.
\nWhen performing this action on an MFA Delete enabled bucket, that attempts to delete any\n versioned objects, you must include an MFA token. If you do not provide one, the entire\n request will fail, even if there are non-versioned objects you are trying to delete. If you\n provide an invalid token, whether there are versioned keys in the request or not, the\n entire Multi-Object Delete request will fail. For information about MFA Delete, see MFA\n Delete in the Amazon S3\n User Guide.
\n\n Directory buckets - MFA delete is not supported by directory buckets.
\n\n General purpose bucket permissions - The following permissions are required in your policies when your \n DeleteObjects
request includes specific headers.
\n \n s3:DeleteObject
\n - To delete an object from a bucket, you must always specify the s3:DeleteObject
permission.
\n \n s3:DeleteObjectVersion
\n - To delete a specific version of an object from a versioning-enabled bucket, you must specify the s3:DeleteObjectVersion
permission.
\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession
\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession
permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession
API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession
API call to generate a new session token for use. \nAmazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession
\n .
\n General purpose bucket - The Content-MD5 request header is required for all Multi-Object Delete requests. Amazon S3\n uses the header value to ensure that your request body has not been altered in\n transit.
\n\n Directory bucket - The Content-MD5 request header or an additional checksum request header \n (including x-amz-checksum-crc32
, x-amz-checksum-crc32c
, x-amz-checksum-sha1
, or \n x-amz-checksum-sha256
) is required for all Multi-Object Delete requests.
\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com
.
The following operations are related to DeleteObjects
:
\n UploadPart\n
\n\n ListParts\n
\n\n AbortMultipartUpload\n
\nThis operation is not supported by directory buckets.
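A boto3 sketch of the quiet mode described above (bucket and keys are placeholders); boto3 adds the required Content-MD5/checksum header automatically:

    import boto3

    s3 = boto3.client("s3")

    # Quiet mode: only failed deletes come back in the response.
    resp = s3.delete_objects(
        Bucket="amzn-s3-demo-bucket",  # placeholder
        Delete={
            "Objects": [{"Key": "logs/2024-01-01.gz"}, {"Key": "logs/2024-01-02.gz"}],
            "Quiet": True,
        },
    )
    for err in resp.get("Errors", []):
        print(err["Key"], err["Code"], err["Message"])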
\nReturns the tag-set of an object. You send the GET request against the tagging\n subresource associated with the object.
\nTo use this operation, you must have permission to perform the\n s3:GetObjectTagging
action. By default, the GET action returns information\n about current version of an object. For a versioned bucket, you can have multiple versions\n of an object in your bucket. To retrieve tags of any other version, use the versionId query\n parameter. You also need permission for the s3:GetObjectVersionTagging
\n action.
By default, the bucket owner has this permission and can grant this permission to\n others.
\nFor information about the Amazon S3 object tagging feature, see Object Tagging.
\nThe following actions are related to GetObjectTagging
:
\n DeleteObjectTagging\n
\n\n GetObjectAttributes\n
\n\n PutObjectTagging\n
\nSets the Cache-Control
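A minimal boto3 sketch of reading a tag set (bucket and key are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # Fetch the current version's tags; pass VersionId for older
    # versions, which requires s3:GetObjectVersionTagging instead.
    resp = s3.get_object_tagging(Bucket="amzn-s3-demo-bucket", Key="photos/cat.jpg")
    for tag in resp["TagSet"]:
        print(tag["Key"], "=", tag["Value"])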
header of the response.
Sets the Content-Disposition
header of the response.
Sets the Content-Encoding
header of the response.
Sets the Content-Language
header of the response.
Sets the Content-Type
header of the response.
Sets the Expires
header of the response.
A suffix that is appended to a request that is for a directory on the website endpoint\n (for example,if the suffix is index.html and you make a request to samplebucket/images/ the\n data that is returned will be for the object with the key name images/index.html) The\n suffix must not be empty and must not include a slash character.
\nReplacement must be made for object keys containing special characters (such as carriage returns) when using \n XML requests. For more information, see \n XML related object key constraints.
\nA suffix that is appended to a request that is for a directory on the website endpoint.\n (For example, if the suffix is index.html
and you make a request to\n samplebucket/images/
, the data that is returned will be for the object with\n the key name images/index.html
.) The suffix must not be empty and must not\n include a slash character.
Replacement must be made for object keys containing special characters (such as carriage returns) when using \n XML requests. For more information, see \n XML related object key constraints.
\nEncoding type used by Amazon S3 to encode object keys in the response. If using\n url
, non-ASCII characters used in an object's key name will be URL encoded.\n For example, the object test_file(3).png will appear as test_file%283%29.png.
Encoding type used by Amazon S3 to encode object keys in the response. If using\n url
, non-ASCII characters used in an object's key name will be URL encoded.\n For example, the object test_file(3).png
will appear as\n test_file%283%29.png
.
Encoding type used by Amazon S3 to encode object keys in the response. If using\n url
, non-ASCII characters used in an object's key name will be URL encoded.\n For example, the object test_file(3).png will appear as test_file%283%29.png.
Encoding type used by Amazon S3 to encode object keys in the response. If using\n url
, non-ASCII characters used in an object's key name will be URL encoded.\n For example, the object test_file(3).png
will appear as\n test_file%283%29.png
.
Specifies how many newer noncurrent versions must exist before Amazon S3 can perform the\n associated action on a given version. If there are this many more recent noncurrent\n versions, Amazon S3 will take the associated action. For more information about noncurrent\n versions, see Lifecycle configuration\n elements in the Amazon S3 User Guide.
" + "smithy.api#documentation": "Specifies how many noncurrent versions Amazon S3 will retain. You can specify up to 100\n noncurrent versions to retain. Amazon S3 will permanently delete any additional noncurrent\n versions beyond the specified number to retain. For more information about noncurrent\n versions, see Lifecycle configuration\n elements in the Amazon S3 User Guide.
" } } }, @@ -28017,7 +28059,7 @@ "NewerNoncurrentVersions": { "target": "com.amazonaws.s3#VersionCount", "traits": { - "smithy.api#documentation": "Specifies how many newer noncurrent versions must exist before Amazon S3 can perform the\n associated action on a given version. If there are this many more recent noncurrent\n versions, Amazon S3 will take the associated action. For more information about noncurrent\n versions, see Lifecycle configuration\n elements in the Amazon S3 User Guide.
" + "smithy.api#documentation": "Specifies how many noncurrent versions Amazon S3 will retain in the same storage class before\n transitioning objects. You can specify up to 100 noncurrent versions to retain. Amazon S3 will\n transition any additional noncurrent versions beyond the specified number to retain. For\n more information about noncurrent versions, see Lifecycle configuration\n elements in the Amazon S3 User Guide.
" } } }, @@ -30649,22 +30691,6 @@ }, "smithy.api#documentation": "Adds an object to a bucket.
\nAmazon S3 never adds partial objects; if you receive a success response, Amazon S3 added the\n entire object to the bucket. You cannot use PutObject
to only update a\n single piece of metadata for an existing object. You must put the entire object with\n updated metadata if you want to update some values.
If your bucket uses the bucket owner enforced setting for Object Ownership, ACLs are disabled and no longer affect permissions. All\n objects written to the bucket by any account will be owned by the bucket owner.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n
. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
Amazon S3 is a distributed system. If it receives multiple write requests for the same object\n simultaneously, it overwrites all but the last object written. However, Amazon S3 provides features that can modify this behavior:
\n\n S3 Object Lock - To prevent objects from\n being deleted or overwritten, you can use Amazon S3 Object\n Lock in the Amazon S3 User Guide.
\nThis functionality is not supported for directory buckets.
\n\n S3 Versioning - When you enable\n versioning for a bucket, if Amazon S3 receives multiple write requests for the same object\n simultaneously, it stores all versions of the objects. For each write request that is made to the same object, Amazon S3 automatically generates a unique version ID\n of that object being stored in Amazon S3. \n You can retrieve, replace, or delete any version of the object. For more information about versioning, see\n Adding Objects to\n Versioning-Enabled Buckets in the Amazon S3\n User Guide. For information about returning the versioning state\n of a bucket, see GetBucketVersioning.
\nThis functionality is not supported for directory buckets.
\n\n General purpose bucket permissions - The following permissions are required in your policies when your \n PutObject
request includes specific headers.
\n \n s3:PutObject
\n - To successfully complete the PutObject
request, you must always have the s3:PutObject
permission on a bucket to add an object\n to it.
\n \n s3:PutObjectAcl
\n - To successfully change the objects ACL of your PutObject
request, you must have the s3:PutObjectAcl
.
\n \n s3:PutObjectTagging
\n - To successfully set the tag-set with your PutObject
request, you\n must have the s3:PutObjectTagging
.
\n Directory bucket permissions - To grant access to this API operation on a directory bucket, we recommend that you use the \n CreateSession
\n API operation for session-based authorization. Specifically, you grant the s3express:CreateSession
permission to the directory bucket in a bucket policy or an IAM identity-based policy. Then, you make the CreateSession
API call on the bucket to obtain a session token. With the session token in your request header, you can make API requests to this operation. After the session token expires, you make another CreateSession
API call to generate a new session token for use. \nAmazon Web Services CLI or SDKs create session and refresh the session token automatically to avoid service interruptions when a session expires. For more information about authorization, see \n CreateSession
\n .
\n General purpose bucket - To ensure that data is not corrupted traversing the network, use the\n Content-MD5
header. When you use this header, Amazon S3 checks the object\n against the provided MD5 value and, if they do not match, Amazon S3 returns an error. Alternatively, when the object's ETag is its MD5 digest, \n you can calculate the MD5 while putting the object to Amazon S3 and compare the returned ETag to\n the calculated MD5 value.
\n Directory bucket - This functionality is not supported for directory buckets.
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com
.
For more information about related Amazon S3 APIs, see the following:
\n\n CopyObject\n
\n\n DeleteObject\n
\nUploads a part by copying data from an existing object as data source. To specify the\n data source, you add the request header x-amz-copy-source
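A minimal boto3 sketch of a put (bucket and key are placeholders); per the consistency notes above, the last write wins on concurrent puts to the same key:

    import boto3

    s3 = boto3.client("s3")

    resp = s3.put_object(
        Bucket="amzn-s3-demo-bucket",  # placeholder
        Key="data/records.json",       # placeholder
        Body=b'{"ok": true}',
        ContentType="application/json",
    )
    print(resp["ETag"])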
in your request. To specify \n a byte range, you add the request header x-amz-copy-source-range
in your\n request.
For information about maximum and minimum part sizes and other multipart upload\n specifications, see Multipart upload limits in the Amazon S3 User Guide.
\nInstead of copying data from an existing object as part data, you might use the UploadPart\n action to upload new data as a part of an object in your request.
\nYou must initiate a multipart upload before you can upload any part. In response to your\n initiate request, Amazon S3 returns the upload ID, a unique identifier that you must include in\n your upload part request.
\nFor conceptual information about multipart uploads, see Uploading\n Objects Using Multipart Upload in the\n Amazon S3 User Guide. For information about copying objects using a single atomic action vs. a multipart\n upload, see Operations on Objects in\n the Amazon S3 User Guide.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n
. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
All UploadPartCopy
requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz-
prefix, including\n x-amz-copy-source
, must be signed. For more information, see REST Authentication.
\n Directory buckets - You must use IAM credentials to authenticate and authorize your access to the UploadPartCopy
API operation, instead of using the \n temporary security credentials through the CreateSession
API operation.
Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.
\nYou must have READ
access to the source object and WRITE
\n access to the destination bucket.
\n General purpose bucket permissions - You must have the permissions in a policy based on the bucket types of your source bucket and destination bucket in an UploadPartCopy
operation.
If the source object is in a general purpose bucket, you must have the \n s3:GetObject
\n permission to read the source object that is being copied.
If the destination bucket is a general purpose bucket, you must have the \n s3:PutObject
\n permission to write the object copy to the destination bucket.\n
For information about permissions required to use the multipart upload API, see\n Multipart Upload and Permissions in the\n Amazon S3 User Guide.
\n\n Directory bucket permissions -\n You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination\n bucket types in an UploadPartCopy
operation.
If the source object that you want to copy is in a\n directory bucket, you must have the \n s3express:CreateSession
\n permission in\n the Action
element of a policy to read the object\n . \n By default, the session is in the ReadWrite
mode. If you want to restrict the access, you can explicitly set the s3express:SessionMode
condition key to ReadOnly
on the copy source bucket.
If the copy destination is a directory bucket, you must have the \n \n s3express:CreateSession
\n permission in the\n Action
element of a policy to write the object\n to the destination. The s3express:SessionMode
condition\n key cannot be set to ReadOnly
on the copy destination.
For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the\n Amazon S3 User Guide.
\n\n General purpose buckets - \n \n For information about using server-side encryption with customer-provided\n encryption keys with the UploadPartCopy
operation, see CopyObject and UploadPart.\n
\n Directory buckets - For directory buckets, only server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256
) is supported.
Error Code: NoSuchUpload
\n
Description: The specified multipart upload does not exist. The\n upload ID might be invalid, or the multipart upload might have been\n aborted or completed.
\nHTTP Status Code: 404 Not Found
\nError Code: InvalidRequest
\n
Description: The specified copy source is not supported as a\n byte-range copy source.
\nHTTP Status Code: 400 Bad Request
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com
.
The following operations are related to UploadPartCopy
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\nUploads a part by copying data from an existing object as data source. To specify the\n data source, you add the request header x-amz-copy-source
in your request. To specify \n a byte range, you add the request header x-amz-copy-source-range
in your\n request.
For information about maximum and minimum part sizes and other multipart upload\n specifications, see Multipart upload limits in the Amazon S3 User Guide.
\nInstead of copying data from an existing object as part data, you might use the UploadPart\n action to upload new data as a part of an object in your request.
\nYou must initiate a multipart upload before you can upload any part. In response to your\n initiate request, Amazon S3 returns the upload ID, a unique identifier that you must include in\n your upload part request.
\nFor conceptual information about multipart uploads, see Uploading\n Objects Using Multipart Upload in the\n Amazon S3 User Guide. For information about copying objects using a single atomic action vs. a multipart\n upload, see Operations on Objects in\n the Amazon S3 User Guide.
\n\n Directory buckets - For directory buckets, you must make requests for this API operation to the Zonal endpoint. These endpoints support virtual-hosted-style requests in the format https://bucket_name.s3express-az_id.region.amazonaws.com/key-name\n
. Path-style requests are not supported. For more information, see Regional and Zonal endpoints in the\n Amazon S3 User Guide.
All UploadPartCopy
requests must be authenticated and signed by using IAM credentials (access key ID and secret access key for the IAM identities). All headers with the x-amz-
prefix, including\n x-amz-copy-source
, must be signed. For more information, see REST Authentication.
\n Directory buckets - You must use IAM credentials to authenticate and authorize your access to the UploadPartCopy
API operation, instead of using the \n temporary security credentials through the CreateSession
API operation.
Amazon Web Services CLI or SDKs handles authentication and authorization on your behalf.
\nYou must have READ
access to the source object and WRITE
\n access to the destination bucket.
\n General purpose bucket permissions - You must have the permissions in a policy based on the bucket types of your source bucket and destination bucket in an UploadPartCopy
operation.
If the source object is in a general purpose bucket, you must have the \n s3:GetObject
\n permission to read the source object that is being copied.
If the destination bucket is a general purpose bucket, you must have the \n s3:PutObject
\n permission to write the object copy to the destination bucket.\n
For information about permissions required to use the multipart upload API, see\n Multipart upload API and permissions in the\n Amazon S3 User Guide.
\n\n Directory bucket permissions -\n You must have permissions in a bucket policy or an IAM identity-based policy based on the source and destination\n bucket types in an UploadPartCopy
operation.
If the source object that you want to copy is in a\n directory bucket, you must have the \n s3express:CreateSession
\n permission in\n the Action
element of a policy to read the object. By\n default, the session is in the ReadWrite
mode. If you\n want to restrict the access, you can explicitly set the\n s3express:SessionMode
condition key to\n ReadOnly
on the copy source bucket.
If the copy destination is a directory bucket, you must have the \n \n s3express:CreateSession
\n permission in the\n Action
element of a policy to write the object\n to the destination. The s3express:SessionMode
condition\n key cannot be set to ReadOnly
on the copy destination.
For example policies, see Example bucket policies for S3 Express One Zone and Amazon Web Services Identity and Access Management (IAM) identity-based policies for S3 Express One Zone in the\n Amazon S3 User Guide.
\n\n General purpose buckets - \n \n For information about using server-side encryption with customer-provided\n encryption keys with the UploadPartCopy
operation, see CopyObject and UploadPart.\n
\n Directory buckets - For directory buckets, only server-side encryption with Amazon S3 managed keys (SSE-S3) (AES256
) is supported.
Error Code: NoSuchUpload
\n
Description: The specified multipart upload does not exist. The\n upload ID might be invalid, or the multipart upload might have been\n aborted or completed.
\nHTTP Status Code: 404 Not Found
\nError Code: InvalidRequest
\n
Description: The specified copy source is not supported as a\n byte-range copy source.
\nHTTP Status Code: 400 Bad Request
\n\n Directory buckets - The HTTP Host header syntax is \n Bucket_name.s3express-az_id.region.amazonaws.com
.
The following operations are related to UploadPartCopy
:
\n UploadPart\n
\n\n AbortMultipartUpload\n
\n\n ListParts\n
\n\n ListMultipartUploads\n
\n
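A boto3 sketch of the part-copy flow described above (bucket and key names are placeholders): initiate a multipart upload, copy a byte range from the source with UploadPartCopy, then complete with the returned ETag:

    import boto3

    s3 = boto3.client("s3")
    src = {"Bucket": "amzn-s3-demo-source-bucket", "Key": "big-object"}  # placeholders

    mpu = s3.create_multipart_upload(Bucket="amzn-s3-demo-bucket", Key="big-object-copy")
    part = s3.upload_part_copy(
        Bucket="amzn-s3-demo-bucket",
        Key="big-object-copy",
        UploadId=mpu["UploadId"],
        PartNumber=1,
        CopySource=src,
        CopySourceRange="bytes=0-5368709119",  # first 5 GiB of the source
    )
    s3.complete_multipart_upload(
        Bucket="amzn-s3-demo-bucket",
        Key="big-object-copy",
        UploadId=mpu["UploadId"],
        MultipartUpload={"Parts": [
            {"ETag": part["CopyPartResult"]["ETag"], "PartNumber": 1},
        ]},
    )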