diff --git a/CHANGELOG.md b/CHANGELOG.md index 27d8af5b380..b27125e519b 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,3 +1,14 @@ +Release v1.37.20 (2021-02-26) +=== + +### Service Client Updates +* `service/eks`: Updates service API and documentation +* `service/elasticmapreduce`: Updates service API and documentation + * Added UpdateStudio API that allows updating a few attributes of an EMR Studio. +* `service/s3`: Updates service API, documentation, and examples + * Add RequestPayer to GetObjectTagging and PutObjectTagging. +* `service/sso-admin`: Updates service API + Release v1.37.19 (2021-02-25) === diff --git a/aws/version.go b/aws/version.go index 9849554f46a..0c403a140c3 100644 --- a/aws/version.go +++ b/aws/version.go @@ -5,4 +5,4 @@ package aws const SDKName = "aws-sdk-go" // SDKVersion is the version of this SDK -const SDKVersion = "1.37.19" +const SDKVersion = "1.37.20" diff --git a/models/apis/eks/2017-11-01/api-2.json b/models/apis/eks/2017-11-01/api-2.json index ae58b148678..ce087af01e2 100644 --- a/models/apis/eks/2017-11-01/api-2.json +++ b/models/apis/eks/2017-11-01/api-2.json @@ -13,6 +13,23 @@ "uid":"eks-2017-11-01" }, "operations":{ + "AssociateEncryptionConfig":{ + "name":"AssociateEncryptionConfig", + "http":{ + "method":"POST", + "requestUri":"/clusters/{name}/encryption-config/associate" + }, + "input":{"shape":"AssociateEncryptionConfigRequest"}, + "output":{"shape":"AssociateEncryptionConfigResponse"}, + "errors":[ + {"shape":"InvalidParameterException"}, + {"shape":"ClientException"}, + {"shape":"ServerException"}, + {"shape":"ResourceInUseException"}, + {"shape":"ResourceNotFoundException"}, + {"shape":"InvalidRequestException"} + ] + }, "AssociateIdentityProviderConfig":{ "name":"AssociateIdentityProviderConfig", "http":{ @@ -594,6 +611,31 @@ "type":"list", "member":{"shape":"AddonInfo"} }, + "AssociateEncryptionConfigRequest":{ + "type":"structure", + "required":[ + "clusterName", + "encryptionConfig" + ], + "members":{ + "clusterName":{ + "shape":"String", + "location":"uri", + "locationName":"name" + }, + "encryptionConfig":{"shape":"EncryptionConfigList"}, + "clientRequestToken":{ + "shape":"String", + "idempotencyToken":true + } + } + }, + "AssociateEncryptionConfigResponse":{ + "type":"structure", + "members":{ + "update":{"shape":"Update"} + } + }, "AssociateIdentityProviderConfigRequest":{ "type":"structure", "required":[ @@ -2055,6 +2097,7 @@ "ReleaseVersion", "PublicAccessCidrs", "IdentityProviderConfig", + "EncryptionConfig", "AddonVersion", "ServiceAccountRoleArn", "ResolveConflicts" @@ -2082,6 +2125,7 @@ "ConfigUpdate", "AssociateIdentityProviderConfig", "DisassociateIdentityProviderConfig", + "AssociateEncryptionConfig", "AddonUpdate" ] }, diff --git a/models/apis/eks/2017-11-01/docs-2.json b/models/apis/eks/2017-11-01/docs-2.json index 1b86c8da4f5..1673d584358 100644 --- a/models/apis/eks/2017-11-01/docs-2.json +++ b/models/apis/eks/2017-11-01/docs-2.json @@ -2,9 +2,10 @@ "version": "2.0", "service": "
Amazon Elastic Kubernetes Service (Amazon EKS) is a managed service that makes it easy for you to run Kubernetes on AWS without needing to stand up or maintain your own Kubernetes control plane. Kubernetes is an open-source system for automating the deployment, scaling, and management of containerized applications.
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are fully compatible with applications running on any standard Kubernetes environment, whether running in on-premises data centers or public clouds. This means that you can easily migrate any standard Kubernetes application to Amazon EKS without any code modification required.
", "operations": { + "AssociateEncryptionConfig": "Associate encryption configuration to an existing cluster.
You can use this API to enable encryption on existing clusters which do not have encryption already enabled. This allows you to implement a defense-in-depth security strategy without migrating applications to new EKS clusters.
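To make the new operation concrete, here is a minimal, hypothetical sketch of calling it through this SDK's generated EKS client (assumes aws-sdk-go v1.37.20 or later; the region, cluster name, and KMS key ARN are placeholders, not values taken from this release):

package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/eks"
)

func main() {
	sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-west-2")))
	svc := eks.New(sess)

	// Enable envelope encryption of Kubernetes secrets on an existing cluster.
	// The call is asynchronous: it returns an Update that can be polled with
	// DescribeUpdate until the update finishes.
	out, err := svc.AssociateEncryptionConfig(&eks.AssociateEncryptionConfigInput{
		ClusterName: aws.String("my-cluster"), // placeholder
		EncryptionConfig: []*eks.EncryptionConfig{{
			Provider:  &eks.Provider{KeyArn: aws.String("arn:aws:kms:us-west-2:111122223333:key/EXAMPLE")}, // placeholder
			Resources: aws.StringSlice([]string{"secrets"}),
		}},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("update id:", aws.StringValue(out.Update.Id))
}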
", "AssociateIdentityProviderConfig": "Associate an identity provider configuration to a cluster.
If you want to authenticate identities using an identity provider, you can create an identity provider configuration and associate it to your cluster. After configuring authentication to your cluster, you can create Kubernetes roles
and clusterroles
to assign permissions to the roles, and then bind the roles to the identities using Kubernetes rolebindings
and clusterrolebindings
. For more information, see Using RBAC Authorization in the Kubernetes documentation.
Creates an Amazon EKS add-on.
Amazon EKS add-ons help to automate the provisioning and lifecycle management of common operational software for Amazon EKS clusters. Amazon EKS add-ons can only be used with Amazon EKS clusters running version 1.18 with platform version eks.3
or later because add-ons rely on the Server-side Apply Kubernetes feature, which is only available in Kubernetes 1.18 and later.
Creates an Amazon EKS control plane.
The Amazon EKS control plane consists of control plane instances that run the Kubernetes software, such as etcd
and the API server. The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS API server endpoint. Each Amazon EKS cluster control plane is single-tenant and unique and runs on its own set of Amazon EC2 instances.
The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the nodes (for example, to support kubectl exec
, logs
, and proxy
data flows).
Amazon EKS nodes run in your AWS account and connect to your cluster's control plane via the Kubernetes API server endpoint and a certificate file that is created for your cluster.
You can use the endpointPublicAccess
and endpointPrivateAccess
parameters to enable or disable public and private access to your cluster's Kubernetes API server endpoint. By default, public access is enabled, and private access is disabled. For more information, see Amazon EKS Cluster Endpoint Access Control in the Amazon EKS User Guide .
You can use the logging
parameter to enable or disable exporting the Kubernetes control plane logs for your cluster to CloudWatch Logs. By default, cluster control plane logs aren't exported to CloudWatch Logs. For more information, see Amazon EKS Cluster Control Plane Logs in the Amazon EKS User Guide .
CloudWatch Logs ingestion, archive storage, and data scanning rates apply to exported control plane logs. For more information, see Amazon CloudWatch Pricing.
Cluster creation typically takes between 10 and 15 minutes. After you create an Amazon EKS cluster, you must configure your Kubernetes tooling to communicate with the API server and launch nodes into your cluster. For more information, see Managing Cluster Authentication and Launching Amazon EKS nodes in the Amazon EKS User Guide.
", + "CreateCluster": "Creates an Amazon EKS control plane.
The Amazon EKS control plane consists of control plane instances that run the Kubernetes software, such as etcd
and the API server. The control plane runs in an account managed by AWS, and the Kubernetes API is exposed via the Amazon EKS API server endpoint. Each Amazon EKS cluster control plane is single-tenant and unique and runs on its own set of Amazon EC2 instances.
The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your VPC subnets to provide connectivity from the control plane instances to the nodes (for example, to support kubectl exec
, logs
, and proxy
data flows).
Amazon EKS nodes run in your AWS account and connect to your cluster's control plane via the Kubernetes API server endpoint and a certificate file that is created for your cluster.
Cluster creation typically takes several minutes. After you create an Amazon EKS cluster, you must configure your Kubernetes tooling to communicate with the API server and launch nodes into your cluster. For more information, see Managing Cluster Authentication and Launching Amazon EKS nodes in the Amazon EKS User Guide.
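For comparison with the prose above, a minimal, hypothetical CreateCluster call through the same client might look like this (the role ARN and subnet IDs are placeholders; imports are as in the first sketch):

// createCluster is a sketch of the minimal required fields: a name, a cluster
// service role, and the VPC subnets used for the control plane's elastic
// network interfaces.
func createCluster(svc *eks.EKS) error {
	out, err := svc.CreateCluster(&eks.CreateClusterInput{
		Name:    aws.String("my-cluster"),                                      // placeholder
		RoleArn: aws.String("arn:aws:iam::111122223333:role/eks-cluster-role"), // placeholder
		ResourcesVpcConfig: &eks.VpcConfigRequest{
			SubnetIds: aws.StringSlice([]string{"subnet-0abc", "subnet-0def"}), // placeholders
		},
	})
	if err != nil {
		return err
	}
	fmt.Println("status:", aws.StringValue(out.Cluster.Status)) // typically CREATING at first
	return nil
}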
", "CreateFargateProfile": "Creates an AWS Fargate profile for your Amazon EKS cluster. You must have at least one Fargate profile in a cluster to be able to run pods on Fargate.
The Fargate profile allows an administrator to declare which pods run on Fargate and specify which pods run on which Fargate profile. This declaration is done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace and labels. A namespace is required for every selector. The label field consists of multiple optional key-value pairs. Pods that match the selectors are scheduled on Fargate. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is run on Fargate.
When you create a Fargate profile, you must specify a pod execution role to use with the pods that are scheduled with the profile. This role is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization so that the kubelet
that is running on the Fargate infrastructure can register with your Amazon EKS cluster so that it can appear in your cluster as a node. The pod execution role also provides IAM permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For more information, see Pod Execution Role in the Amazon EKS User Guide.
Fargate profiles are immutable. However, you can create a new updated profile to replace an existing profile and then delete the original after the updated profile has finished creating.
If any Fargate profiles in a cluster are in the DELETING
status, you must wait for those profiles to finish deleting before you can create any other profiles in that cluster.
For more information, see AWS Fargate Profile in the Amazon EKS User Guide.
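To illustrate the selector model described above, a hypothetical profile that schedules pods from one namespace with a matching label onto Fargate might be created like this (all names are invented; imports are as in the first sketch):

// createBatchProfile is a sketch: each profile holds up to five selectors,
// a namespace is required in every selector, and labels are optional.
func createBatchProfile(svc *eks.EKS) error {
	_, err := svc.CreateFargateProfile(&eks.CreateFargateProfileInput{
		ClusterName:         aws.String("my-cluster"),                                        // placeholder
		FargateProfileName:  aws.String("batch-jobs"),                                        // placeholder
		PodExecutionRoleArn: aws.String("arn:aws:iam::111122223333:role/pod-execution-role"), // placeholder
		Subnets:             aws.StringSlice([]string{"subnet-0abc", "subnet-0def"}),         // placeholders
		Selectors: []*eks.FargateProfileSelector{{
			Namespace: aws.String("batch"),
			Labels:    map[string]*string{"compute": aws.String("fargate")},
		}},
	})
	return err
}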
", "CreateNodegroup": "Creates a managed node group for an Amazon EKS cluster. You can only create a node group for your cluster that is equal to the current Kubernetes version for the cluster. All node groups are created with the latest AMI release version for the respective minor Kubernetes version of the cluster, unless you deploy a custom AMI using a launch template. For more information about using launch templates, see Launch template support.
An Amazon EKS managed node group is an Amazon EC2 Auto Scaling group and associated Amazon EC2 instances that are managed by AWS for an Amazon EKS cluster. Each node group uses a version of the Amazon EKS optimized Amazon Linux 2 AMI. For more information, see Managed Node Groups in the Amazon EKS User Guide.
", "DeleteAddon": "Delete an Amazon EKS add-on.
When you remove the add-on, it will also be deleted from the cluster. You can always manually start an add-on on the cluster using the Kubernetes API.
", @@ -104,6 +105,16 @@ "DescribeAddonVersionsResponse$addons": "The list of available versions with Kubernetes version compatibility.
" } }, + "AssociateEncryptionConfigRequest": { + "base": null, + "refs": { + } + }, + "AssociateEncryptionConfigResponse": { + "base": null, + "refs": { + } + }, "AssociateIdentityProviderConfigRequest": { "base": null, "refs": { @@ -394,6 +405,7 @@ "EncryptionConfigList": { "base": null, "refs": { + "AssociateEncryptionConfigRequest$encryptionConfig": "The configuration you are using for encryption.
", "Cluster$encryptionConfig": "The encryption configuration for the cluster.
", "CreateClusterRequest$encryptionConfig": "The encryption configuration for the cluster.
" } @@ -781,6 +793,8 @@ "AddonInfo$type": "The type of the add-on.
", "AddonIssue$message": "A message that provides details about the issue and what might cause it.
", "AddonVersionInfo$addonVersion": "The version of the add-on.
", + "AssociateEncryptionConfigRequest$clusterName": "The name of the cluster that you are associating with encryption configuration.
", + "AssociateEncryptionConfigRequest$clientRequestToken": "The client request token you are using with the encryption configuration.
", "AssociateIdentityProviderConfigRequest$clusterName": "The name of the cluster to associate the configuration to.
", "AssociateIdentityProviderConfigRequest$clientRequestToken": "Unique, case-sensitive identifier that you provide to ensure the idempotency of the request.
", "AutoScalingGroup$name": "The name of the Auto Scaling group associated with an Amazon EKS managed node group.
", @@ -1061,6 +1075,7 @@ "Update": { "base": "An object representing an asynchronous update.
", "refs": { + "AssociateEncryptionConfigResponse$update": null, "AssociateIdentityProviderConfigResponse$update": null, "DescribeUpdateResponse$update": "The full description of the specified update.
", "DisassociateIdentityProviderConfigResponse$update": null, diff --git a/models/apis/elasticmapreduce/2009-03-31/api-2.json b/models/apis/elasticmapreduce/2009-03-31/api-2.json index 38b16f27f5f..e6b23c9f5d0 100644 --- a/models/apis/elasticmapreduce/2009-03-31/api-2.json +++ b/models/apis/elasticmapreduce/2009-03-31/api-2.json @@ -562,6 +562,18 @@ {"shape":"InternalServerError"} ] }, + "UpdateStudio":{ + "name":"UpdateStudio", + "http":{ + "method":"POST", + "requestUri":"/" + }, + "input":{"shape":"UpdateStudioInput"}, + "errors":[ + {"shape":"InternalServerException"}, + {"shape":"InvalidRequestException"} + ] + }, "UpdateStudioSessionMapping":{ "name":"UpdateStudioSessionMapping", "http":{ @@ -1035,7 +1047,8 @@ "ServiceRole", "UserRole", "WorkspaceSecurityGroupId", - "EngineSecurityGroupId" + "EngineSecurityGroupId", + "DefaultS3Location" ], "members":{ "Name":{"shape":"XmlStringMaxLen256"}, @@ -2836,6 +2849,17 @@ "COUNT_PER_SECOND" ] }, + "UpdateStudioInput":{ + "type":"structure", + "required":["StudioId"], + "members":{ + "StudioId":{"shape":"XmlStringMaxLen256"}, + "Name":{"shape":"XmlStringMaxLen256"}, + "Description":{"shape":"XmlStringMaxLen256"}, + "SubnetIds":{"shape":"SubnetIdList"}, + "DefaultS3Location":{"shape":"XmlString"} + } + }, "UpdateStudioSessionMappingInput":{ "type":"structure", "required":[ diff --git a/models/apis/elasticmapreduce/2009-03-31/docs-2.json b/models/apis/elasticmapreduce/2009-03-31/docs-2.json index dd7c6d13f2f..09da8c20149 100644 --- a/models/apis/elasticmapreduce/2009-03-31/docs-2.json +++ b/models/apis/elasticmapreduce/2009-03-31/docs-2.json @@ -8,20 +8,20 @@ "AddTags": "Adds tags to an Amazon EMR resource. Tags make it easier to associate clusters in various ways, such as grouping clusters to track your Amazon EMR resource allocation costs. For more information, see Tag Clusters.
", "CancelSteps": "Cancels a pending step or steps in a running cluster. Available only in Amazon EMR versions 4.8.0 and later, excluding version 5.0.0. A maximum of 256 steps are allowed in each CancelSteps request. CancelSteps is idempotent but asynchronous; it does not guarantee that a step will be canceled, even if the request is successfully submitted. You can only cancel steps that are in a PENDING
state.
Creates a security configuration, which is stored in the service and can be specified when a cluster is created.
", - "CreateStudio": "The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Creates a new Amazon EMR Studio.
", - "CreateStudioSessionMapping": "The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Maps a user or group to the Amazon EMR Studio specified by StudioId
, and applies a session policy to refine Studio permissions for that user or group.
Creates a new Amazon EMR Studio.
", + "CreateStudioSessionMapping": "Maps a user or group to the Amazon EMR Studio specified by StudioId
, and applies a session policy to refine Studio permissions for that user or group.
Deletes a security configuration.
", - "DeleteStudio": "The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Removes an Amazon EMR Studio from the Studio metadata store.
", - "DeleteStudioSessionMapping": "The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Removes a user or group from an Amazon EMR Studio.
", + "DeleteStudio": "Removes an Amazon EMR Studio from the Studio metadata store.
", + "DeleteStudioSessionMapping": "Removes a user or group from an Amazon EMR Studio.
", "DescribeCluster": "Provides cluster-level details including status, hardware and software configuration, VPC settings, and so on.
", "DescribeJobFlows": "This API is no longer supported and will eventually be removed. We recommend you use ListClusters, DescribeCluster, ListSteps, ListInstanceGroups and ListBootstrapActions instead.
DescribeJobFlows returns a list of job flows that match all of the supplied parameters. The parameters can include a list of job flow IDs, job flow states, and restrictions on job flow creation date and time.
Regardless of supplied parameters, only job flows created within the last two months are returned.
If no parameters are supplied, then job flows matching either of the following criteria are returned:
Job flows created and completed in the last two weeks
Job flows created within the last two months that are in one of the following states: RUNNING
, WAITING
, SHUTTING_DOWN
, STARTING
Amazon EMR can return a maximum of 512 job flow descriptions.
", "DescribeNotebookExecution": "Provides details of a notebook execution.
", "DescribeSecurityConfiguration": "Provides the details of a security configuration by returning the configuration JSON.
", "DescribeStep": "Provides more detail about the cluster step.
", - "DescribeStudio": "The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Returns details for the specified Amazon EMR Studio including ID, Name, VPC, Studio access URL, and so on.
", + "DescribeStudio": "Returns details for the specified Amazon EMR Studio including ID, Name, VPC, Studio access URL, and so on.
", "GetBlockPublicAccessConfiguration": "Returns the Amazon EMR block public access configuration for your AWS account in the current Region. For more information see Configure Block Public Access for Amazon EMR in the Amazon EMR Management Guide.
", "GetManagedScalingPolicy": "Fetches the attached managed scaling policy for an Amazon EMR cluster.
", - "GetStudioSessionMapping": "The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Fetches mapping details for the specified Amazon EMR Studio and identity (user or group).
", + "GetStudioSessionMapping": "Fetches mapping details for the specified Amazon EMR Studio and identity (user or group).
", "ListBootstrapActions": "Provides information about the bootstrap actions associated with a cluster.
", "ListClusters": "Provides the status of all clusters visible to this AWS account. Allows you to filter the list of clusters based on certain criteria; for example, filtering by cluster creation date and time or by status. This call returns a maximum of 50 clusters per call, but returns a marker to track the paging of the cluster list across multiple ListClusters calls.
", "ListInstanceFleets": "Lists all available details about the instance fleets in a cluster.
The instance fleet configuration is available only in Amazon EMR versions 4.8.0 and later, excluding 5.0.x versions.
Provides information for all active EC2 instances and EC2 instances terminated in the last 30 days, up to a maximum of 2,000. EC2 instances in any of the following states are considered active: AWAITING_FULFILLMENT, PROVISIONING, BOOTSTRAPPING, RUNNING.
", "ListNotebookExecutions": "Provides summaries of all notebook executions. You can filter the list based on multiple criteria such as status, time range, and editor id. Returns a maximum of 50 notebook executions and a marker to track the paging of a longer notebook execution list across multiple ListNotebookExecution
calls.
Lists all the security configurations visible to this account, providing their creation dates and times, and their names. This call returns a maximum of 50 clusters per call, but returns a marker to track the paging of the cluster list across multiple ListSecurityConfigurations calls.
", - "ListSteps": "Provides a list of steps for the cluster in reverse order unless you specify stepIds
with the request of filter by StepStates
. You can specify a maximum of ten stepIDs
.
The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Returns a list of all user or group session mappings for the EMR Studio specified by StudioId
.
The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Returns a list of all Amazon EMR Studios associated with the AWS account. The list includes details such as ID, Studio Access URL, and creation time for each Studio.
", + "ListSteps": "Provides a list of steps for the cluster in reverse order unless you specify stepIds
with the request or filter by StepStates
. You can specify a maximum of 10 stepIDs
.
Returns a list of all user or group session mappings for the Amazon EMR Studio specified by StudioId
.
Returns a list of all Amazon EMR Studios associated with the AWS account. The list includes details such as ID, Studio Access URL, and creation time for each Studio.
", "ModifyCluster": "Modifies the number of steps that can be executed concurrently for the cluster specified using ClusterID.
", "ModifyInstanceFleet": "Modifies the target On-Demand and target Spot capacities for the instance fleet with the specified InstanceFleetID within the cluster specified using ClusterID. The call either succeeds or fails atomically.
The instance fleet configuration is available only in Amazon EMR versions 4.8.0 and later, excluding 5.0.x versions.
ModifyInstanceGroups modifies the number of nodes and configuration settings of an instance group. The input parameters include the new target instance count for the group and the instance group ID. The call will either succeed or fail atomically.
", @@ -47,7 +47,8 @@ "StartNotebookExecution": "Starts a notebook execution.
", "StopNotebookExecution": "Stops a notebook execution.
", "TerminateJobFlows": "TerminateJobFlows shuts a list of clusters (job flows) down. When a job flow is shut down, any step not yet completed is canceled and the EC2 instances on which the cluster is running are stopped. Any log files not already saved are uploaded to Amazon S3 if a LogUri was specified when the cluster was created.
The maximum number of clusters allowed is 10. The call to TerminateJobFlows
is asynchronous. Depending on the configuration of the cluster, it may take up to 1-5 minutes for the cluster to completely terminate and release allocated resources, such as Amazon EC2 instances.
The Amazon EMR Studio APIs are in preview release for Amazon EMR and are subject to change.
Updates the session policy attached to the user or group for the specified Amazon EMR Studio.
" + "UpdateStudio": "Updates an Amazon EMR Studio configuration, including attributes such as name, description, and subnets.
", + "UpdateStudioSessionMapping": "Updates the session policy attached to the user or group for the specified Amazon EMR Studio.
" }, "shapes": { "ActionOnFailure": { @@ -125,15 +126,15 @@ "BlockPublicAccessConfigurationMetadata$CreatedByArn": "The Amazon Resource Name that created or last modified the configuration.
", "Cluster$ClusterArn": "The Amazon Resource Name of the cluster.
", "ClusterSummary$ClusterArn": "The Amazon Resource Name of the cluster.
", - "PutAutoScalingPolicyOutput$ClusterArn": "The Amazon Resource Name of the cluster.
", - "RunJobFlowOutput$ClusterArn": "The Amazon Resource Name of the cluster.
" + "PutAutoScalingPolicyOutput$ClusterArn": "The Amazon Resource Name (ARN) of the cluster.
", + "RunJobFlowOutput$ClusterArn": "The Amazon Resource Name (ARN) of the cluster.
" } }, "AuthMode": { "base": null, "refs": { "CreateStudioInput$AuthMode": "Specifies whether the Studio authenticates users using single sign-on (SSO) or IAM. Amazon EMR Studio currently only supports SSO authentication.
", - "Studio$AuthMode": "Specifies whether the Studio authenticates users using single sign-on (SSO) or IAM.
" + "Studio$AuthMode": "Specifies whether the Amazon EMR Studio authenticates users using single sign-on (SSO) or IAM.
" } }, "AutoScalingPolicy": { @@ -695,12 +696,12 @@ "IdentityType": { "base": null, "refs": { - "CreateStudioSessionMappingInput$IdentityType": "Specifies whether the identity to map to the Studio is a user or a group.
", - "DeleteStudioSessionMappingInput$IdentityType": "Specifies whether the identity to delete from the Studio is a user or a group.
", + "CreateStudioSessionMappingInput$IdentityType": "Specifies whether the identity to map to the Amazon EMR Studio is a user or a group.
", + "DeleteStudioSessionMappingInput$IdentityType": "Specifies whether the identity to delete from the Amazon EMR Studio is a user or a group.
", "GetStudioSessionMappingInput$IdentityType": "Specifies whether the identity to fetch is a user or a group.
", "ListStudioSessionMappingsInput$IdentityType": "Specifies whether to return session mappings for users or groups. If not specified, the results include session mapping details for both users and groups.
", - "SessionMappingDetail$IdentityType": "Specifies whether the identity mapped to the Studio is a user or a group.
", - "SessionMappingSummary$IdentityType": "Specifies whether the identity mapped to the Studio is a user or a group.
", + "SessionMappingDetail$IdentityType": "Specifies whether the identity mapped to the Amazon EMR Studio is a user or a group.
", + "SessionMappingSummary$IdentityType": "Specifies whether the identity mapped to the Amazon EMR Studio is a user or a group.
", "UpdateStudioSessionMappingInput$IdentityType": "Specifies whether the identity to update is a user or a group.
" } }, @@ -798,7 +799,7 @@ "base": null, "refs": { "InstanceFleet$InstanceFleetType": "The node type that the instance fleet hosts. Valid values are MASTER, CORE, or TASK.
", - "InstanceFleetConfig$InstanceFleetType": "The node type that the instance fleet hosts. Valid values are MASTER,CORE,and TASK.
", + "InstanceFleetConfig$InstanceFleetType": "The node type that the instance fleet hosts. Valid values are MASTER, CORE, and TASK.
", "ListInstancesInput$InstanceFleetType": "The node type of the instance fleet. For example MASTER, CORE, or TASK.
" } }, @@ -1059,7 +1060,7 @@ } }, "InternalServerException": { - "base": "This exception occurs when there is an internal failure in the EMR service.
", + "base": "This exception occurs when there is an internal failure in the Amazon EMR service.
", "refs": { } }, @@ -1127,7 +1128,7 @@ "KeyValueList": { "base": null, "refs": { - "HadoopJarStepConfig$Properties": "A list of Java properties that are set when the step runs. You can use these properties to pass key value pairs to your main function.
" + "HadoopJarStepConfig$Properties": "A list of Java properties that are set when the step runs. You can use these properties to pass key-value pairs to your main function.
" } }, "ListBootstrapActionsInput": { @@ -1315,7 +1316,7 @@ "NewSupportedProductsList": { "base": null, "refs": { - "RunJobFlowInput$NewSupportedProducts": "For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and later, use Applications.
A list of strings that indicates third-party software to use with the job flow that accepts a user argument list. EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action arguments. For more information, see \"Launch a Job Flow on the MapR Distribution for Hadoop\" in the Amazon EMR Developer Guide. Supported values are:
\"mapr-m3\" - launch the cluster using MapR M3 Edition.
\"mapr-m5\" - launch the cluster using MapR M5 Edition.
\"mapr\" with the user arguments specifying \"--edition,m3\" or \"--edition,m5\" - launch the job flow using MapR M3 or M5 Edition respectively.
\"mapr-m7\" - launch the cluster using MapR M7 Edition.
\"hunk\" - launch the cluster with the Hunk Big Data Analtics Platform.
\"hue\"- launch the cluster with Hue installed.
\"spark\" - launch the cluster with Apache Spark installed.
\"ganglia\" - launch the cluster with the Ganglia Monitoring System installed.
For Amazon EMR releases 3.x and 2.x. For Amazon EMR releases 4.x and later, use Applications.
A list of strings that indicates third-party software to use with the job flow that accepts a user argument list. EMR accepts and forwards the argument list to the corresponding installation script as bootstrap action arguments. For more information, see \"Launch a Job Flow on the MapR Distribution for Hadoop\" in the Amazon EMR Developer Guide. Supported values are:
\"mapr-m3\" - launch the cluster using MapR M3 Edition.
\"mapr-m5\" - launch the cluster using MapR M5 Edition.
\"mapr\" with the user arguments specifying \"--edition,m3\" or \"--edition,m5\" - launch the job flow using MapR M3 or M5 Edition respectively.
\"mapr-m7\" - launch the cluster using MapR M7 Edition.
\"hunk\" - launch the cluster with the Hunk Big Data Analytics Platform.
\"hue\"- launch the cluster with Hue installed.
\"spark\" - launch the cluster with Apache Spark installed.
\"ganglia\" - launch the cluster with the Ganglia Monitoring System installed.
A list of session mapping summary objects. Each object includes session mapping details such as creation time, identity type (user or group), and Studio ID.
" + "ListStudioSessionMappingsOutput$SessionMappings": "A list of session mapping summary objects. Each object includes session mapping details such as creation time, identity type (user or group), and Amazon EMR Studio ID.
" } }, "SetTerminationProtectionInput": { @@ -1867,12 +1868,13 @@ "SubnetIdList": { "base": null, "refs": { - "CreateStudioInput$SubnetIds": "A list of subnet IDs to associate with the Studio. The subnets must belong to the VPC specified by VpcId
. Studio users can create a Workspace in any of the specified subnets.
The list of IDs of the subnets associated with the Amazon EMR Studio.
" + "CreateStudioInput$SubnetIds": "A list of subnet IDs to associate with the Amazon EMR Studio. A Studio can have a maximum of 5 subnets. The subnets must belong to the VPC specified by VpcId
. Studio users can create a Workspace in any of the specified subnets.
The list of IDs of the subnets associated with the Amazon EMR Studio.
", + "UpdateStudioInput$SubnetIds": "A list of subnet IDs to associate with the Amazon EMR Studio. The list can include new subnet IDs, but must also include all of the subnet IDs previously associated with the Studio. The list order does not matter. A Studio can have a maximum of 5 subnets. The subnets must belong to the same VPC as the Studio.
" } }, "SupportedProductConfig": { - "base": "The list of supported product configurations which allow user-supplied arguments. EMR accepts these arguments and forwards them to the corresponding installation script as bootstrap action arguments.
", + "base": "The list of supported product configurations that allow user-supplied arguments. EMR accepts these arguments and forwards them to the corresponding installation script as bootstrap action arguments.
", "refs": { "NewSupportedProductsList$member": null } @@ -1895,7 +1897,7 @@ "refs": { "AddTagsInput$Tags": "A list of tags to associate with a cluster and propagate to EC2 instances. Tags are user-defined key-value pairs that consist of a required key string with a maximum of 128 characters, and an optional value string with a maximum of 256 characters.
", "Cluster$Tags": "A list of tags associated with a cluster.
", - "CreateStudioInput$Tags": "A list of tags to associate with the Studio. Tags are user-defined key-value pairs that consist of a required key string with a maximum of 128 characters, and an optional value string with a maximum of 256 characters.
", + "CreateStudioInput$Tags": "A list of tags to associate with the Amazon EMR Studio. Tags are user-defined key-value pairs that consist of a required key string with a maximum of 128 characters, and an optional value string with a maximum of 256 characters.
", "NotebookExecution$Tags": "A list of tags associated with a notebook execution. Tags are user-defined key-value pairs that consist of a required key string with a maximum of 128 characters and an optional value string with a maximum of 256 characters.
", "RunJobFlowInput$Tags": "A list of tags to associate with a cluster and propagate to Amazon EC2 instances.
", "StartNotebookExecutionInput$Tags": "A list of tags associated with a notebook execution. Tags are user-defined key-value pairs that consist of a required key string with a maximum of 128 characters and an optional value string with a maximum of 256 characters.
", @@ -1913,6 +1915,11 @@ "CloudWatchAlarmDefinition$Unit": "The unit of measure associated with the CloudWatch metric being watched. The value specified for Unit
must correspond to the units specified in the CloudWatch metric.
The target capacity of On-Demand units for the instance fleet, which determines how many On-Demand Instances to provision. When the instance fleet launches, Amazon EMR tries to provision On-Demand Instances as specified by InstanceTypeConfig. Each instance configuration has a specified WeightedCapacity
. When an On-Demand Instance is provisioned, the WeightedCapacity
units count toward the target capacity. Amazon EMR provisions instances until the target capacity is totally fulfilled, even if this results in an overage. For example, if there are 2 units remaining to fulfill capacity, and Amazon EMR can only provision an instance with a WeightedCapacity
of 5 units, the instance is provisioned, and the target capacity is exceeded by 3 units. You can use InstanceFleet$ProvisionedOnDemandCapacity to determine the On-Demand capacity units that have been provisioned for the instance fleet.
If not specified or set to 0, only Spot Instances are provisioned for the instance fleet using TargetSpotCapacity
. At least one of TargetSpotCapacity
and TargetOnDemandCapacity
should be greater than 0. For a master instance fleet, only one of TargetSpotCapacity
and TargetOnDemandCapacity
can be specified, and its value must be 1.
The target capacity of Spot units for the instance fleet, which determines how many Spot instances to provision. When the instance fleet launches, Amazon EMR tries to provision Spot instances as specified by InstanceTypeConfig. Each instance configuration has a specified WeightedCapacity
. When a Spot instance is provisioned, the WeightedCapacity
units count toward the target capacity. Amazon EMR provisions instances until the target capacity is totally fulfilled, even if this results in an overage. For example, if there are 2 units remaining to fulfill capacity, and Amazon EMR can only provision an instance with a WeightedCapacity
of 5 units, the instance is provisioned, and the target capacity is exceeded by 3 units. You can use InstanceFleet$ProvisionedSpotCapacity to determine the Spot capacity units that have been provisioned for the instance fleet.
If not specified or set to 0, only On-Demand instances are provisioned for the instance fleet. At least one of TargetSpotCapacity
and TargetOnDemandCapacity
should be greater than 0. For a master instance fleet, only one of TargetSpotCapacity
and TargetOnDemandCapacity
can be specified, and its value must be 1.
The target capacity of Spot units for the instance fleet, which determines how many Spot Instances to provision. When the instance fleet launches, Amazon EMR tries to provision Spot Instances as specified by InstanceTypeConfig. Each instance configuration has a specified WeightedCapacity
. When a Spot instance is provisioned, the WeightedCapacity
units count toward the target capacity. Amazon EMR provisions instances until the target capacity is totally fulfilled, even if this results in an overage. For example, if there are 2 units remaining to fulfill capacity, and Amazon EMR can only provision an instance with a WeightedCapacity
of 5 units, the instance is provisioned, and the target capacity is exceeded by 3 units. You can use InstanceFleet$ProvisionedSpotCapacity to determine the Spot capacity units that have been provisioned for the instance fleet.
If not specified or set to 0, only On-Demand Instances are provisioned for the instance fleet. At least one of TargetSpotCapacity
and TargetOnDemandCapacity
should be greater than 0. For a master instance fleet, only one of TargetSpotCapacity
and TargetOnDemandCapacity
can be specified, and its value must be 1.
The number of On-Demand units that have been provisioned for the instance fleet to fulfill TargetOnDemandCapacity
. This provisioned capacity might be less than or greater than TargetOnDemandCapacity
.
The number of Spot units that have been provisioned for this instance fleet to fulfill TargetSpotCapacity
. This provisioned capacity might be less than or greater than TargetSpotCapacity
.
The target capacity of On-Demand units for the instance fleet, which determines how many On-Demand Instances to provision. When the instance fleet launches, Amazon EMR tries to provision On-Demand Instances as specified by InstanceTypeConfig. Each instance configuration has a specified WeightedCapacity
. When an On-Demand Instance is provisioned, the WeightedCapacity
units count toward the target capacity. Amazon EMR provisions instances until the target capacity is totally fulfilled, even if this results in an overage. For example, if there are 2 units remaining to fulfill capacity, and Amazon EMR can only provision an instance with a WeightedCapacity
of 5 units, the instance is provisioned, and the target capacity is exceeded by 3 units.
If not specified or set to 0, only Spot Instances are provisioned for the instance fleet using TargetSpotCapacity
. At least one of TargetSpotCapacity
and TargetOnDemandCapacity
should be greater than 0. For a master instance fleet, only one of TargetSpotCapacity
and TargetOnDemandCapacity
can be specified, and its value must be 1.
The number of units that a provisioned instance of this type provides toward fulfilling the target capacities defined in InstanceFleetConfig. This value is 1 for a master instance fleet, and must be 1 or greater for core and task instance fleets. Defaults to 1 if not specified.
", "InstanceTypeSpecification$WeightedCapacity": "The number of units that a provisioned instance of this type provides toward fulfilling the target capacities defined in InstanceFleetConfig. Capacity values represent performance characteristics such as vCPUs, memory, or I/O. If not specified, the default value is 1.
", "SpotProvisioningSpecification$TimeoutDurationMinutes": "The spot provisioning timeout period in minutes. If Spot Instances are not provisioned within this time period, the TimeOutAction
is taken. Minimum value is 5 and maximum value is 1440. The timeout applies only during initial provisioning, when the cluster is first created.
The defined duration for Spot Instances (also known as Spot blocks) in minutes. When specified, the Spot Instance does not terminate before the defined duration expires, and defined duration pricing for Spot instances applies. Valid values are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as a Spot Instance receives its instance ID. At the end of the duration, Amazon EC2 marks the Spot Instance for termination and provides a Spot Instance termination notice, which gives the instance a two-minute warning before it terminates.
" + "SpotProvisioningSpecification$BlockDurationMinutes": "The defined duration for Spot Instances (also known as Spot blocks) in minutes. When specified, the Spot Instance does not terminate before the defined duration expires, and defined duration pricing for Spot Instances applies. Valid values are 60, 120, 180, 240, 300, or 360. The duration period starts as soon as a Spot Instance receives its instance ID. At the end of the duration, Amazon EC2 marks the Spot Instance for termination and provides a Spot Instance termination notice, which gives the instance a two-minute warning before it terminates.
" } }, "XmlString": { @@ -1950,8 +1957,8 @@ "CreateSecurityConfigurationInput$Name": "The name of the security configuration.
", "CreateSecurityConfigurationOutput$Name": "The name of the security configuration.
", "CreateStudioInput$ServiceRole": "The IAM role that will be assumed by the Amazon EMR Studio. The service role provides a way for Amazon EMR Studio to interoperate with other AWS services.
", - "CreateStudioInput$UserRole": "The IAM user role that will be assumed by users and groups logged in to a Studio. The permissions attached to this IAM role can be scoped down for each user or group using session policies.
", - "CreateStudioInput$DefaultS3Location": "The default Amazon S3 location to back up EMR Studio Workspaces and notebook files. A Studio user can select an alternative Amazon S3 location when creating a Workspace.
", + "CreateStudioInput$UserRole": "The IAM user role that will be assumed by users and groups logged in to an Amazon EMR Studio. The permissions attached to this IAM role can be scoped down for each user or group using session policies.
", + "CreateStudioInput$DefaultS3Location": "The default Amazon S3 location to back up Amazon EMR Studio Workspaces and notebook files. A Studio user can select an alternative Amazon S3 location when creating a Workspace.
", "CreateStudioOutput$Url": "The unique Studio access URL.
", "DeleteSecurityConfigurationInput$Name": "The name of the security configuration.
", "DescribeSecurityConfigurationInput$Name": "The name of the security configuration.
", @@ -1990,6 +1997,7 @@ "Studio$UserRole": "The name of the IAM role assumed by users logged in to the Amazon EMR Studio.
", "Studio$Url": "The unique access URL of the Amazon EMR Studio.
", "Studio$DefaultS3Location": "The default Amazon S3 location to back up Amazon EMR Studio Workspaces and notebook files.
", + "UpdateStudioInput$DefaultS3Location": "A default Amazon S3 location to back up Workspaces and notebook files for the Amazon EMR Studio. A Studio user can select an alternative Amazon S3 location when creating a Workspace.
", "XmlStringList$member": null } }, @@ -2018,33 +2026,33 @@ "CancelStepsInput$ClusterId": "The ClusterID
for the specified steps that will be canceled. Use RunJobFlow and ListClusters to get ClusterIDs.
Available only in Amazon EMR version 5.7.0 and later. The ID of a custom Amazon EBS-backed Linux AMI if the cluster uses a custom AMI.
", "CreateStudioInput$Name": "A descriptive name for the Amazon EMR Studio.
", - "CreateStudioInput$Description": "A detailed description of the Studio.
", + "CreateStudioInput$Description": "A detailed description of the Amazon EMR Studio.
", "CreateStudioInput$VpcId": "The ID of the Amazon Virtual Private Cloud (Amazon VPC) to associate with the Studio.
", "CreateStudioInput$WorkspaceSecurityGroupId": "The ID of the Amazon EMR Studio Workspace security group. The Workspace security group allows outbound network traffic to resources in the Engine security group, and it must be in the same VPC specified by VpcId
.
The ID of the Amazon EMR Studio Engine security group. The Engine security group allows inbound network traffic from the Workspace security group, and it must be in the same VPC specified by VpcId
.
The ID of the Amazon EMR Studio.
", "CreateStudioSessionMappingInput$StudioId": "The ID of the Amazon EMR Studio to which the user or group will be mapped.
", "CreateStudioSessionMappingInput$IdentityId": "The globally unique identifier (GUID) of the user or group from the AWS SSO Identity Store. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The Amazon Resource Name (ARN) for the session policy that will be applied to the user or group. Session policies refine Studio user permissions without the need to use multiple IAM user roles.
", "DeleteStudioInput$StudioId": "The ID of the Amazon EMR Studio.
", "DeleteStudioSessionMappingInput$StudioId": "The ID of the Amazon EMR Studio.
", "DeleteStudioSessionMappingInput$IdentityId": "The globally unique identifier (GUID) of the user or group to remove from the Amazon EMR Studio. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user name or group to remove from the Studio. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to remove from the Amazon EMR Studio. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The unique identifier of the notebook execution.
", "DescribeStudioInput$StudioId": "The Amazon EMR Studio ID.
", "ExecutionEngineConfig$Id": "The unique identifier of the execution engine. For an EMR cluster, this is the cluster ID.
", "ExecutionEngineConfig$MasterInstanceSecurityGroupId": "An optional unique ID of an EC2 security group to associate with the master instance of the EMR cluster for this notebook execution. For more information see Specifying EC2 Security Groups for EMR Notebooks in the EMR Management Guide.
", "GetStudioSessionMappingInput$StudioId": "The ID of the Amazon EMR Studio.
", "GetStudioSessionMappingInput$IdentityId": "The globally unique identifier (GUID) of the user or group. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to fetch. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to fetch. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
A friendly name for the instance fleet.
", "InstanceFleetConfig$Name": "The friendly name of the instance fleet.
", "InstanceGroupConfig$Name": "Friendly name given to the instance group.
", - "InstanceGroupConfig$BidPrice": "The bid price for each EC2 Spot Instance type as defined by InstanceType
. Expressed in USD. If neither BidPrice
nor BidPriceAsPercentageOfOnDemandPrice
is provided, BidPriceAsPercentageOfOnDemandPrice
defaults to 100%.
The bid price for each EC2 Spot Instance as defined by InstanceType
. Expressed in USD. If neither BidPrice
nor BidPriceAsPercentageOfOnDemandPrice
is provided, BidPriceAsPercentageOfOnDemandPrice
defaults to 100%.
Unique identifier for the instance group.
", "InstanceGroupDetail$Name": "Friendly name for the instance group.
", - "InstanceGroupDetail$BidPrice": "The bid price for each EC2 Spot Instance type as defined by InstanceType
. Expressed in USD. If neither BidPrice
nor BidPriceAsPercentageOfOnDemandPrice
is provided, BidPriceAsPercentageOfOnDemandPrice
defaults to 100%.
The bid price for each EC2 Spot Instance as defined by InstanceType
. Expressed in USD. If neither BidPrice
nor BidPriceAsPercentageOfOnDemandPrice
is provided, BidPriceAsPercentageOfOnDemandPrice
defaults to 100%.
Unique ID of the instance group to modify.
", "InstanceTypeConfig$BidPrice": "The bid price for each EC2 Spot Instance type as defined by InstanceType
. Expressed in USD. If neither BidPrice
nor BidPriceAsPercentageOfOnDemandPrice
is provided, BidPriceAsPercentageOfOnDemandPrice
defaults to 100%.
Applies only to Amazon EMR AMI versions 3.x and 2.x. For Amazon EMR releases 4.0 and later, ReleaseLabel
is used. To specify a custom AMI, use CustomAmiID
.
The Amazon EMR release label, which determines the version of open-source application packages installed on the cluster. Release labels are in the form emr-x.x.x
, where x.x.x is an Amazon EMR release version such as emr-5.14.0
. For more information about Amazon EMR release versions and included application versions and features, see https://docs.aws.amazon.com/emr/latest/ReleaseGuide/. The release label applies only to Amazon EMR releases version 4.0 and later. Earlier versions use AmiVersion
.
Available only in Amazon EMR version 5.7.0 and later. The ID of a custom Amazon EBS-backed Linux AMI. If specified, Amazon EMR uses this AMI when it launches cluster EC2 instances. For more information about custom AMIs in Amazon EMR, see Using a Custom AMI in the Amazon EMR Management Guide. If omitted, the cluster uses the base Linux AMI for the ReleaseLabel
specified. For Amazon EMR versions 2.x and 3.x, use AmiVersion
instead.
For information about creating a custom AMI, see Creating an Amazon EBS-Backed Linux AMI in the Amazon Elastic Compute Cloud User Guide for Linux Instances. For information about finding an AMI ID, see Finding a Linux AMI.
", - "RunJobFlowOutput$JobFlowId": "An unique identifier for the job flow.
", + "RunJobFlowOutput$JobFlowId": "A unique identifier for the job flow.
", "SecurityGroupsList$member": null, "SessionMappingDetail$StudioId": "The ID of the Amazon EMR Studio.
", "SessionMappingDetail$IdentityId": "The globally unique identifier (GUID) of the user or group.
", - "SessionMappingDetail$IdentityName": "The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference.
", + "SessionMappingDetail$IdentityName": "The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference.
", "SessionMappingDetail$SessionPolicyArn": "The Amazon Resource Name (ARN) of the session policy associated with the user or group.
", "SessionMappingSummary$StudioId": "The ID of the Amazon EMR Studio.
", "SessionMappingSummary$IdentityId": "The globally unique identifier (GUID) of the user or group from the AWS SSO Identity Store.
", - "SessionMappingSummary$IdentityName": "The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference.
", + "SessionMappingSummary$IdentityName": "The name of the user or group. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference.
", "SessionMappingSummary$SessionPolicyArn": "The Amazon Resource Name (ARN) of the session policy associated with the user or group.
", "StartNotebookExecutionInput$EditorId": "The unique identifier of the EMR Notebook to use for notebook execution.
", "StartNotebookExecutionInput$NotebookExecutionName": "An optional name for the notebook execution.
", @@ -2097,23 +2105,26 @@ "StepConfig$Name": "The name of the step.
", "StepIdsList$member": null, "StopNotebookExecutionInput$NotebookExecutionId": "The unique identifier of the notebook execution.
", - "Studio$StudioId": "The ID of the EMR Studio.
", - "Studio$StudioArn": "The Amazon Resource Name (ARN) of the EMR Studio.
", - "Studio$Name": "The name of the EMR Studio.
", - "Studio$Description": "The detailed description of the EMR Studio.
", - "Studio$VpcId": "The ID of the VPC associated with the EMR Studio.
", + "Studio$StudioId": "The ID of the Amazon EMR Studio.
", + "Studio$StudioArn": "The Amazon Resource Name (ARN) of the Amazon EMR Studio.
", + "Studio$Name": "The name of the Amazon EMR Studio.
", + "Studio$Description": "The detailed description of the Amazon EMR Studio.
", + "Studio$VpcId": "The ID of the VPC associated with the Amazon EMR Studio.
", "Studio$WorkspaceSecurityGroupId": "The ID of the Workspace security group associated with the Amazon EMR Studio. The Workspace security group allows outbound network traffic to resources in the Engine security group and to the internet.
", "Studio$EngineSecurityGroupId": "The ID of the Engine security group associated with the Amazon EMR Studio. The Engine security group allows inbound network traffic from resources in the Workspace security group.
", "StudioSummary$StudioId": "The ID of the Amazon EMR Studio.
", "StudioSummary$Name": "The name of the Amazon EMR Studio.
", "StudioSummary$VpcId": "The ID of the Virtual Private Cloud (Amazon VPC) associated with the Amazon EMR Studio.
", - "StudioSummary$Description": "The detailed description of the EMR Studio.
", + "StudioSummary$Description": "The detailed description of the Amazon EMR Studio.
", "StudioSummary$Url": "The unique access URL of the Amazon EMR Studio.
", "SupportedProductConfig$Name": "The name of the product configuration.
", "SupportedProductsList$member": null, - "UpdateStudioSessionMappingInput$StudioId": "The ID of the EMR Studio.
", + "UpdateStudioInput$StudioId": "The ID of the Amazon EMR Studio to update.
", + "UpdateStudioInput$Name": "A descriptive name for the Amazon EMR Studio.
", + "UpdateStudioInput$Description": "A detailed description to assign to the Amazon EMR Studio.
", + "UpdateStudioSessionMappingInput$StudioId": "The ID of the Amazon EMR Studio.
", "UpdateStudioSessionMappingInput$IdentityId": "The globally unique identifier (GUID) of the user or group. For more information, see UserId and GroupId in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to update. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The name of the user or group to update. For more information, see UserName and DisplayName in the AWS SSO Identity Store API Reference. Either IdentityName
or IdentityId
must be specified.
The Amazon Resource Name (ARN) of the session policy to associate with the specified user or group.
", "XmlStringMaxLen256List$member": null } diff --git a/models/apis/s3/2006-03-01/api-2.json b/models/apis/s3/2006-03-01/api-2.json index a1cfa8b45aa..4592af96b9c 100644 --- a/models/apis/s3/2006-03-01/api-2.json +++ b/models/apis/s3/2006-03-01/api-2.json @@ -3662,6 +3662,11 @@ "shape":"AccountId", "location":"header", "locationName":"x-amz-expected-bucket-owner" + }, + "RequestPayer":{ + "shape":"RequestPayer", + "location":"header", + "locationName":"x-amz-request-payer" } } }, @@ -6478,6 +6483,11 @@ "shape":"AccountId", "location":"header", "locationName":"x-amz-expected-bucket-owner" + }, + "RequestPayer":{ + "shape":"RequestPayer", + "location":"header", + "locationName":"x-amz-request-payer" } }, "payload":"Tagging" diff --git a/models/apis/s3/2006-03-01/docs-2.json b/models/apis/s3/2006-03-01/docs-2.json index 6c20973aab9..0be3035e58d 100644 --- a/models/apis/s3/2006-03-01/docs-2.json +++ b/models/apis/s3/2006-03-01/docs-2.json @@ -28,7 +28,7 @@ "GetBucketAcl": "This implementation of the GET
operation uses the acl
subresource to return the access control list (ACL) of a bucket. To use GET
to return the ACL of the bucket, you must have READ_ACP
access to the bucket. If READ_ACP
permission is granted to the anonymous user, you can return the ACL of the bucket without using an authorization header.
Related Resources
", "GetBucketAnalyticsConfiguration": "This implementation of the GET operation returns an analytics configuration (identified by the analytics configuration ID) from the bucket.
To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
For information about Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis in the Amazon Simple Storage Service Developer Guide.
Related Resources
Returns the cors configuration information set for the bucket.
To use this operation, you must have permission to perform the s3:GetBucketCORS action. By default, the bucket owner has this permission and can grant it to others.
For more information about cors, see Enabling Cross-Origin Resource Sharing.
The following operations are related to GetBucketCors
:
Returns the default encryption configuration for an Amazon S3 bucket. For information about the Amazon S3 default encryption feature, see Amazon S3 Default Bucket Encryption.
To use this operation, you must have permission to perform the s3:GetEncryptionConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to GetBucketEncryption
:
Returns the default encryption configuration for an Amazon S3 bucket. If the bucket does not have a default encryption configuration, GetBucketEncryption returns ServerSideEncryptionConfigurationNotFoundError
.
For information about the Amazon S3 default encryption feature, see Amazon S3 Default Bucket Encryption.
To use this operation, you must have permission to perform the s3:GetEncryptionConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to GetBucketEncryption
:
Gets the S3 Intelligent-Tiering configuration from the specified bucket.
The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without additional operational overhead. S3 Intelligent-Tiering delivers automatic cost savings by moving data between access tiers when access patterns change.
The S3 Intelligent-Tiering storage class is suitable for objects larger than 128 KB that you plan to store for at least 30 days. If the size of an object is less than 128 KB, it is not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the frequent access tier rates in the S3 Intelligent-Tiering storage class.
If you delete an object before the end of the 30-day minimum storage duration period, you are charged for 30 days. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to GetBucketIntelligentTieringConfiguration
include:
Returns an inventory configuration (identified by the inventory configuration ID) from the bucket.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to GetBucketInventoryConfiguration
:
For an updated version of this API, see GetBucketLifecycleConfiguration. If you configured a bucket lifecycle using the filter
element, you should see the updated version of this topic. This topic is provided for backward compatibility.
Returns the lifecycle configuration information set on the bucket. For information about lifecycle configuration, see Object Lifecycle Management.
To use this operation, you must have permission to perform the s3:GetLifecycleConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
GetBucketLifecycle
has the following special error:
Error code: NoSuchLifecycleConfiguration
Description: The lifecycle configuration does not exist.
HTTP Status Code: 404 Not Found
SOAP Fault Code Prefix: Client
The following operations are related to GetBucketLifecycle
:
Gets an object's current Legal Hold status. For more information, see Locking Objects.
This action is not supported by Amazon S3 on Outposts.
", "GetObjectLockConfiguration": "Gets the Object Lock configuration for a bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket. For more information, see Locking Objects.
", "GetObjectRetention": "Retrieves an object's retention settings. For more information, see Locking Objects.
This action is not supported by Amazon S3 on Outposts.
", - "GetObjectTagging": "Returns the tag-set of an object. You send the GET request against the tagging subresource associated with the object.
To use this operation, you must have permission to perform the s3:GetObjectTagging
action. By default, the GET operation returns information about the current version of an object. For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, use the versionId query parameter. You also need permission for the s3:GetObjectVersionTagging
action.
By default, the bucket owner has this permission and can grant this permission to others.
For information about the Amazon S3 object tagging feature, see Object Tagging.
The following operation is related to GetObjectTagging
:
", + "GetObjectTagging": "Returns the tag-set of an object. You send the GET request against the tagging subresource associated with the object.
To use this operation, you must have permission to perform the s3:GetObjectTagging
action. By default, the GET operation returns information about the current version of an object. For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, use the versionId query parameter. You also need permission for the s3:GetObjectVersionTagging
action.
By default, the bucket owner has this permission and can grant this permission to others.
For information about the Amazon S3 object tagging feature, see Object Tagging.
The following operation is related to GetObjectTagging
:
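The s3/api-2.json hunk earlier in this diff adds a RequestPayer member to GetObjectTaggingRequest. A minimal aws-sdk-go sketch of a requester-pays tagging read; the bucket, key, and region are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	out, err := svc.GetObjectTagging(&s3.GetObjectTaggingInput{
		Bucket:       aws.String("my-requester-pays-bucket"), // placeholder
		Key:          aws.String("path/to/object"),           // placeholder
		RequestPayer: aws.String(s3.RequestPayerRequester),   // sends x-amz-request-payer: requester
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, tag := range out.TagSet {
		fmt.Printf("%s=%s\n", aws.StringValue(tag.Key), aws.StringValue(tag.Value))
	}
}
```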
Returns torrent files from a bucket. BitTorrent can save you bandwidth when you're distributing large files. For more information about BitTorrent, see Using BitTorrent with Amazon S3.
You can get a torrent only for objects that are less than 5 GB in size and that are not encrypted using server-side encryption with a customer-provided encryption key.
To use GET, you must have READ access to the object.
This action is not supported by Amazon S3 on Outposts.
The following operation is related to GetObjectTorrent
:
Retrieves the PublicAccessBlock
configuration for an Amazon S3 bucket. To use this operation, you must have the s3:GetBucketPublicAccessBlock
permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.
When Amazon S3 evaluates the PublicAccessBlock
configuration for a bucket or an object, it checks the PublicAccessBlock
configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock
settings are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.
For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of \"Public\".
The following operations are related to GetPublicAccessBlock
:
This operation is useful to determine if a bucket exists and you have permission to access it. The operation returns a 200 OK
if the bucket exists and you have permission to access it. Otherwise, the operation might return responses such as 404 Not Found
and 403 Forbidden
.
To use this operation, you must have permissions to perform the s3:ListBucket
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD
request has the same options as a GET
operation on an object. The response is identical to the GET
response except that there is no response body.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Encryption request headers, like x-amz-server-side-encryption
, should not be sent for GET requests if your object uses server-side encryption with CMKs stored in AWS KMS (SSE-KMS) or server-side encryption with Amazon S3–managed encryption keys (SSE-S3). If your object does use these types of keys, you’ll get an HTTP 400 Bad Request error.
Request headers are limited to 8 KB in size. For more information, see Common Request Headers.
Consider the following when using request headers:
Consideration 1 – If both of the If-Match
and If-Unmodified-Since
headers are present in the request as follows:
If-Match
condition evaluates to true
, and;
If-Unmodified-Since
condition evaluates to false
;
Then Amazon S3 returns 200 OK
and the data requested.
Consideration 2 – If both of the If-None-Match
and If-Modified-Since
headers are present in the request as follows:
If-None-Match
condition evaluates to false
, and;
If-Modified-Since
condition evaluates to true
;
Then Amazon S3 returns the 304 Not Modified
response code.
For more information about conditional requests, see RFC 7232.
Permissions
You need the s3:GetObject
permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3 returns an HTTP status code 404 (\"no such key\") error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns an HTTP status code 403 (\"access denied\") error.
The following operation is related to HeadObject
:
This operation is useful to determine if a bucket exists and you have permission to access it. The operation returns a 200 OK
if the bucket exists and you have permission to access it.
If the bucket does not exist or you do not have permission to access it, the HEAD
request returns a generic 404 Not Found
or 403 Forbidden
code. A message body is not included, so you cannot determine the exception beyond these error codes.
To use this operation, you must have permissions to perform the s3:ListBucket
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object.
A HEAD
request has the same options as a GET
operation on an object. The response is identical to the GET
response except that there is no response body. Because of this, if the HEAD
request generates an error, it returns a generic 404 Not Found
or 403 Forbidden
code. It is not possible to retrieve the exact exception beyond these error codes.
If you encrypt an object by using server-side encryption with customer-provided encryption keys (SSE-C) when you store the object in Amazon S3, then when you retrieve the metadata from the object, you must use the following headers:
x-amz-server-side-encryption-customer-algorithm
x-amz-server-side-encryption-customer-key
x-amz-server-side-encryption-customer-key-MD5
For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys).
Encryption request headers, like x-amz-server-side-encryption
, should not be sent for GET requests if your object uses server-side encryption with CMKs stored in AWS KMS (SSE-KMS) or server-side encryption with Amazon S3–managed encryption keys (SSE-S3). If your object does use these types of keys, you’ll get an HTTP 400 Bad Request error.
The last modified property in this case is the creation date of the object.
Request headers are limited to 8 KB in size. For more information, see Common Request Headers.
Consider the following when using request headers:
Consideration 1 – If both of the If-Match
and If-Unmodified-Since
headers are present in the request as follows:
If-Match
condition evaluates to true
, and;
If-Unmodified-Since
condition evaluates to false
;
Then Amazon S3 returns 200 OK
and the data requested.
Consideration 2 – If both of the If-None-Match
and If-Modified-Since
headers are present in the request as follows:
If-None-Match
condition evaluates to false
, and;
If-Modified-Since
condition evaluates to true
;
Then Amazon S3 returns the 304 Not Modified
response code.
For more information about conditional requests, see RFC 7232.
Permissions
You need the s3:GetObject
permission for this operation. For more information, see Specifying Permissions in a Policy. If the object you request does not exist, the error Amazon S3 returns depends on whether you also have the s3:ListBucket permission.
If you have the s3:ListBucket
permission on the bucket, Amazon S3 returns an HTTP status code 404 (\"no such key\") error.
If you don’t have the s3:ListBucket
permission, Amazon S3 returns an HTTP status code 403 (\"access denied\") error.
The following operation is related to HeadObject
:
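Because HEAD responses carry no body, only the status code distinguishes these failures. A minimal aws-sdk-go sketch of the HeadBucket existence check described above; the bucket name and region are placeholders:

```go
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/awserr"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.HeadBucket(&s3.HeadBucketInput{Bucket: aws.String("my-bucket")}) // placeholder
	if err == nil {
		fmt.Println("bucket exists and is accessible")
		return
	}
	// Only the status code is available; HEAD error responses have no message body.
	if reqErr, ok := err.(awserr.RequestFailure); ok {
		switch reqErr.StatusCode() {
		case 404:
			fmt.Println("bucket does not exist")
		case 403:
			fmt.Println("access denied")
		default:
			fmt.Println("unexpected error:", reqErr)
		}
	}
}
```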
Lists the analytics configurations for the bucket. You can have up to 1,000 analytics configurations per bucket.
This operation supports list pagination and does not return more than 100 configurations at a time. You should always check the IsTruncated
element in the response. If there are no more configurations to list, IsTruncated
is set to false. If there are more configurations to list, IsTruncated
is set to true, and there will be a value in NextContinuationToken
. You use the NextContinuationToken
value to continue the pagination of the list by passing the value in continuation-token in the request to GET
the next page.
To use this operation, you must have permissions to perform the s3:GetAnalyticsConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 analytics feature, see Amazon S3 Analytics – Storage Class Analysis.
The following operations are related to ListBucketAnalyticsConfigurations
:
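A sketch of the IsTruncated/NextContinuationToken loop described above, using aws-sdk-go; the bucket name and region are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	input := &s3.ListBucketAnalyticsConfigurationsInput{Bucket: aws.String("my-bucket")} // placeholder
	for {
		out, err := svc.ListBucketAnalyticsConfigurations(input)
		if err != nil {
			log.Fatal(err)
		}
		for _, cfg := range out.AnalyticsConfigurationList {
			fmt.Println(aws.StringValue(cfg.Id))
		}
		// At most 100 configurations come back per page; follow the token for the rest.
		if !aws.BoolValue(out.IsTruncated) {
			break
		}
		input.ContinuationToken = out.NextContinuationToken
	}
}
```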
Lists the S3 Intelligent-Tiering configuration from the specified bucket.
The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without additional operational overhead. S3 Intelligent-Tiering delivers automatic cost savings by moving data between access tiers when access patterns change.
The S3 Intelligent-Tiering storage class is suitable for objects larger than 128 KB that you plan to store for at least 30 days. If the size of an object is less than 128 KB, it is not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the frequent access tier rates in the S3 Intelligent-Tiering storage class.
If you delete an object before the end of the 30-day minimum storage duration period, you are charged for 30 days. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to ListBucketIntelligentTieringConfigurations
include:
Returns a list of inventory configurations for the bucket. You can have up to 1,000 inventory configurations per bucket.
This operation supports list pagination and does not return more than 100 configurations at a time. Always check the IsTruncated
element in the response. If there are no more configurations to list, IsTruncated
is set to false. If there are more configurations to list, IsTruncated
is set to true, and there is a value in NextContinuationToken
. You use the NextContinuationToken
value to continue the pagination of the list by passing the value in continuation-token in the request to GET
the next page.
To use this operation, you must have permissions to perform the s3:GetInventoryConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
For information about the Amazon S3 inventory feature, see Amazon S3 Inventory.
The following operations are related to ListBucketInventoryConfigurations
:
This operation lists in-progress multipart uploads. An in-progress multipart upload is a multipart upload that has been initiated using the Initiate Multipart Upload request, but has not yet been completed or aborted.
This operation returns at most 1,000 multipart uploads in the response. 1,000 multipart uploads is the maximum number of uploads a response can include, which is also the default value. You can further limit the number of uploads in a response by specifying the max-uploads
parameter in the request. If additional multipart uploads satisfy the list criteria, the response will contain an IsTruncated
element with the value true. To list the additional multipart uploads, use the key-marker
and upload-id-marker
request parameters.
In the response, the uploads are sorted by key. If your application has initiated more than one multipart upload using the same object key, then uploads in the response are first sorted by key. Additionally, uploads are sorted in ascending order within each key by the upload initiation time.
For more information on multipart uploads, see Uploading Objects Using Multipart Upload.
For information on permissions required to use the multipart upload API, see Multipart Upload API and Permissions.
The following operations are related to ListMultipartUploads
:
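A sketch using aws-sdk-go's ListMultipartUploadsPages helper, which carries the key-marker and upload-id-marker between pages for you; the bucket name and region are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	err := svc.ListMultipartUploadsPages(
		&s3.ListMultipartUploadsInput{Bucket: aws.String("my-bucket")}, // placeholder
		func(page *s3.ListMultipartUploadsOutput, lastPage bool) bool {
			for _, u := range page.Uploads {
				fmt.Println(aws.StringValue(u.Key), aws.StringValue(u.UploadId))
			}
			return true // keep paging
		})
	if err != nil {
		log.Fatal(err)
	}
}
```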
Returns metadata about all versions of the objects in a bucket. You can also use request parameters as selection criteria to return metadata about a subset of all the object versions.
A 200 OK response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket.
This action is not supported by Amazon S3 on Outposts.
The following operations are related to ListObjectVersions
:
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK response can contain valid or invalid XML. Be sure to design your application to parse the contents of the response and handle it appropriately.
This API has been revised. We recommend that you use the newer version, ListObjectsV2, when developing applications. For backward compatibility, Amazon S3 continues to support ListObjects
.
The following operations are related to ListObjects
:
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK
response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately.
To use this operation, you must have READ access to the bucket.
To use this operation in an AWS Identity and Access Management (IAM) policy, you must have permissions to perform the s3:ListBucket
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
This section describes the latest revision of the API. We recommend that you use this revised API for application development. For backward compatibility, Amazon S3 continues to support the prior version of this API, ListObjects.
To get a list of your buckets, see ListBuckets.
The following operations are related to ListObjectsV2
:
Returns some or all (up to 1,000) of the objects in a bucket. You can use the request parameters as selection criteria to return a subset of the objects in a bucket. A 200 OK
response can contain valid or invalid XML. Make sure to design your application to parse the contents of the response and handle it appropriately. Objects are returned sorted in ascending order by key name.
To use this operation, you must have READ access to the bucket.
To use this operation in an AWS Identity and Access Management (IAM) policy, you must have permissions to perform the s3:ListBucket
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
This section describes the latest revision of the API. We recommend that you use this revised API for application development. For backward compatibility, Amazon S3 continues to support the prior version of this API, ListObjects.
To get a list of your buckets, see ListBuckets.
The following operations are related to ListObjectsV2
:
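A sketch using aws-sdk-go's ListObjectsV2Pages helper; the bucket, prefix, and region are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	err := svc.ListObjectsV2Pages(
		&s3.ListObjectsV2Input{
			Bucket: aws.String("my-bucket"), // placeholder
			Prefix: aws.String("logs/"),     // placeholder
		},
		func(page *s3.ListObjectsV2Output, lastPage bool) bool {
			// Keys arrive in ascending order, at most 1,000 per page.
			for _, obj := range page.Contents {
				fmt.Println(aws.StringValue(obj.Key))
			}
			return true
		})
	if err != nil {
		log.Fatal(err)
	}
}
```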
Lists the parts that have been uploaded for a specific multipart upload. This operation must include the upload ID, which you obtain by sending the initiate multipart upload request (see CreateMultipartUpload). This request returns a maximum of 1,000 uploaded parts. The default number of parts returned is 1,000 parts. You can restrict the number of parts returned by specifying the max-parts
request parameter. If your multipart upload consists of more than 1,000 parts, the response returns an IsTruncated
field with the value of true, and a NextPartNumberMarker
element. In subsequent ListParts
requests you can include the part-number-marker query string parameter and set its value to the NextPartNumberMarker
field value from the previous response.
For more information on multipart uploads, see Uploading Objects Using Multipart Upload.
For information on permissions required to use the multipart upload API, see Multipart Upload API and Permissions.
The following operations are related to ListParts
:
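A sketch with aws-sdk-go's ListPartsPages helper, which forwards the part-number-marker for you; the bucket, key, upload ID, and region are placeholders:

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	err := svc.ListPartsPages(
		&s3.ListPartsInput{
			Bucket:   aws.String("my-bucket"),         // placeholder
			Key:      aws.String("large-object.bin"),  // placeholder
			UploadId: aws.String("EXAMPLE-UPLOAD-ID"), // placeholder from CreateMultipartUpload
		},
		func(page *s3.ListPartsOutput, lastPage bool) bool {
			for _, p := range page.Parts {
				fmt.Printf("part %d: %d bytes\n", aws.Int64Value(p.PartNumber), aws.Int64Value(p.Size))
			}
			return true
		})
	if err != nil {
		log.Fatal(err)
	}
}
```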
Sets the accelerate configuration of an existing bucket. Amazon S3 Transfer Acceleration is a bucket-level feature that enables you to perform faster data transfers to Amazon S3.
To use this operation, you must have permission to perform the s3:PutAccelerateConfiguration action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
The Transfer Acceleration state of a bucket can be set to one of the following two values:
Enabled – Enables accelerated data transfers to the bucket.
Suspended – Disables accelerated data transfers to the bucket.
The GetBucketAccelerateConfiguration operation returns the transfer acceleration state of a bucket.
After setting the Transfer Acceleration state of a bucket to Enabled, it might take up to thirty minutes before the data transfer rates to the bucket increase.
The name of the bucket used for Transfer Acceleration must be DNS-compliant and must not contain periods (\".\").
For more information about transfer acceleration, see Transfer Acceleration.
The following operations are related to PutBucketAccelerateConfiguration
:
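A sketch of enabling Transfer Acceleration with aws-sdk-go; the bucket name and region are placeholders (the bucket name must be DNS-compliant with no periods):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// Status is either "Enabled" or "Suspended".
	_, err := svc.PutBucketAccelerateConfiguration(&s3.PutBucketAccelerateConfigurationInput{
		Bucket: aws.String("my-accelerated-bucket"), // placeholder
		AccelerateConfiguration: &s3.AccelerateConfiguration{
			Status: aws.String(s3.BucketAccelerateStatusEnabled),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```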
Sets the permissions on an existing bucket using access control lists (ACL). For more information, see Using ACLs. To set the ACL of a bucket, you must have WRITE_ACP
permission.
You can use one of the following two ways to set a bucket's permissions:
Specify the ACL in the request body
Specify permissions using request headers
You cannot specify access permission using both the body and the request headers.
Depending on your application needs, you may choose to set the ACL on a bucket using either the request body or the headers. For example, if you have an existing application that updates a bucket ACL using the request body, then you can continue to use that approach.
Access Permissions
You can set access permissions using one of the following methods:
Specify a canned ACL with the x-amz-acl
request header. Amazon S3 supports a set of predefined ACLs, known as canned ACLs. Each canned ACL has a predefined set of grantees and permissions. Specify the canned ACL name as the value of x-amz-acl
. If you use this header, you cannot use other access control-specific headers in your request. For more information, see Canned ACL.
Specify access permissions explicitly with the x-amz-grant-read
, x-amz-grant-read-acp
, x-amz-grant-write-acp
, and x-amz-grant-full-control
headers. When using these headers, you specify explicit access permissions and grantees (AWS accounts or Amazon S3 groups) who will receive the permission. If you use these ACL-specific headers, you cannot use the x-amz-acl
header to set a canned ACL. These parameters map to the set of permissions that Amazon S3 supports in an ACL. For more information, see Access Control List (ACL) Overview.
You specify each grantee as a type=value pair, where the type is one of the following:
id
– if the value specified is the canonical user ID of an AWS account
uri
– if you are granting permissions to a predefined group
emailAddress
– if the value specified is the email address of an AWS account
Using email addresses to specify a grantee is only supported in the following AWS Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Europe (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the AWS General Reference.
For example, the following x-amz-grant-write
header grants create, overwrite, and delete objects permission to the LogDelivery group predefined by Amazon S3 and to two AWS accounts identified by their canonical user IDs.
x-amz-grant-write: uri=\"http://acs.amazonaws.com/groups/s3/LogDelivery\", id=\"111122223333\", id=\"555566667777\"
You can use either a canned ACL or specify access permissions explicitly. You cannot do both.
Grantee Values
You can specify the person (grantee) to whom you're assigning access rights (using request elements) in the following ways:
By the person's ID:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"CanonicalUser\"><ID><>ID<></ID><DisplayName><>GranteesEmail<></DisplayName> </Grantee>
DisplayName is optional and ignored in the request
By URI:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"Group\"><URI><>http://acs.amazonaws.com/groups/global/AuthenticatedUsers<></URI></Grantee>
By Email address:
<Grantee xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:type=\"AmazonCustomerByEmail\"><EmailAddress><>Grantees@email.com<></EmailAddress></Grantee>
The grantee is resolved to the CanonicalUser and, in a response to a GET Object acl request, appears as the CanonicalUser.
Using email addresses to specify a grantee is only supported in the following AWS Regions:
US East (N. Virginia)
US West (N. California)
US West (Oregon)
Asia Pacific (Singapore)
Asia Pacific (Sydney)
Asia Pacific (Tokyo)
Europe (Ireland)
South America (São Paulo)
For a list of all the Amazon S3 supported Regions and endpoints, see Regions and Endpoints in the AWS General Reference.
Related Resources
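A sketch of the canned-ACL path (the x-amz-acl header) in aws-sdk-go; explicit grants would go in the Grant* input fields instead. The bucket name and region are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	// A canned ACL and explicit grants are mutually exclusive; set one or the other.
	_, err := svc.PutBucketAcl(&s3.PutBucketAclInput{
		Bucket: aws.String("my-bucket"), // placeholder
		ACL:    aws.String(s3.BucketCannedACLPrivate),
	})
	if err != nil {
		log.Fatal(err)
	}
}
```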
", "PutBucketAnalyticsConfiguration": "Sets an analytics configuration for the bucket (specified by the analytics configuration ID). You can have up to 1,000 analytics configurations per bucket.
You can choose to have storage class analysis export analysis reports sent to a comma-separated values (CSV) flat file. See the DataExport
request element. Reports are updated daily and are based on the object filters that you configure. When selecting data export, you specify a destination bucket and an optional destination prefix where the file is written. You can export the data to a destination bucket in a different account. However, the destination bucket must be in the same Region as the bucket that you are making the PUT analytics configuration to. For more information, see Amazon S3 Analytics – Storage Class Analysis.
You must create a bucket policy on the destination bucket where the exported file is written to grant permissions to Amazon S3 to write objects to the bucket. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
To use this operation, you must have permissions to perform the s3:PutAnalyticsConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources.
Special Errors
HTTP Error: HTTP 400 Bad Request
Code: InvalidArgument
Cause: Invalid argument.
HTTP Error: HTTP 400 Bad Request
Code: TooManyConfigurations
Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.
HTTP Error: HTTP 403 Forbidden
Code: AccessDenied
Cause: You are not the owner of the specified bucket, or you do not have the s3:PutAnalyticsConfiguration bucket permission to set the configuration on the bucket.
Related Resources
Sets the cors
configuration for your bucket. If the configuration exists, Amazon S3 replaces it.
To use this operation, you must be allowed to perform the s3:PutBucketCORS
action. By default, the bucket owner has this permission and can grant it to others.
You set this configuration on a bucket so that the bucket can service cross-origin requests. For example, you might want to enable a request whose origin is http://www.example.com
to access your Amazon S3 bucket at my.example.bucket.com
by using the browser's XMLHttpRequest
capability.
To enable cross-origin resource sharing (CORS) on a bucket, you add the cors
subresource to the bucket. The cors
subresource is an XML document in which you configure rules that identify origins and the HTTP methods that can be executed on your bucket. The document is limited to 64 KB in size.
When Amazon S3 receives a cross-origin request (or a pre-flight OPTIONS request) against a bucket, it evaluates the cors
configuration on the bucket and uses the first CORSRule
rule that matches the incoming browser request to enable a cross-origin request. For a rule to match, the following conditions must be met:
The request's Origin
header must match AllowedOrigin
elements.
The request method (for example, GET, PUT, HEAD, and so on) or the Access-Control-Request-Method
header in case of a pre-flight OPTIONS
request must be one of the AllowedMethod
elements.
Every header specified in the Access-Control-Request-Headers
request header of a pre-flight request must match an AllowedHeader
element.
For more information about CORS, go to Enabling Cross-Origin Resource Sharing in the Amazon Simple Storage Service Developer Guide.
Related Resources
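A sketch of one CORSRule covering the three matching conditions above, using aws-sdk-go; the bucket name, origin, and region are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.PutBucketCors(&s3.PutBucketCorsInput{
		Bucket: aws.String("my-bucket"), // placeholder
		CORSConfiguration: &s3.CORSConfiguration{
			CORSRules: []*s3.CORSRule{{
				AllowedOrigins: []*string{aws.String("http://www.example.com")}, // must match the Origin header
				AllowedMethods: []*string{aws.String("GET"), aws.String("PUT")}, // matched against the request method
				AllowedHeaders: []*string{aws.String("*")},                      // matched against Access-Control-Request-Headers
				MaxAgeSeconds:  aws.Int64(3000),
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```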
", "PutBucketEncryption": "This operation uses the encryption
subresource to configure default encryption and Amazon S3 Bucket Key for an existing bucket.
Default encryption for a bucket can use server-side encryption with Amazon S3-managed keys (SSE-S3) or AWS KMS customer master keys (SSE-KMS). If you specify default encryption using SSE-KMS, you can also configure Amazon S3 Bucket Key. For information about default encryption, see Amazon S3 default bucket encryption in the Amazon Simple Storage Service Developer Guide. For more information about S3 Bucket Keys, see Amazon S3 Bucket Keys in the Amazon Simple Storage Service Developer Guide.
This operation requires AWS Signature Version 4. For more information, see Authenticating Requests (AWS Signature Version 4).
To use this operation, you must have permissions to perform the s3:PutEncryptionConfiguration
action. The bucket owner has this permission by default. The bucket owner can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
Related Resources
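A sketch of SSE-KMS default encryption with an S3 Bucket Key, using aws-sdk-go; the bucket name and region are placeholders, and omitting KMSMasterKeyID falls back to the AWS managed CMK:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.PutBucketEncryption(&s3.PutBucketEncryptionInput{
		Bucket: aws.String("my-bucket"), // placeholder
		ServerSideEncryptionConfiguration: &s3.ServerSideEncryptionConfiguration{
			Rules: []*s3.ServerSideEncryptionRule{{
				ApplyServerSideEncryptionByDefault: &s3.ServerSideEncryptionByDefault{
					SSEAlgorithm: aws.String(s3.ServerSideEncryptionAwsKms),
				},
				BucketKeyEnabled: aws.Bool(true), // reduce KMS request costs with an S3 Bucket Key
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```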
", - "PutBucketIntelligentTieringConfiguration": "Puts a S3 Intelligent-Tiering configuration to the specified bucket.
The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without additional operational overhead. S3 Intelligent-Tiering delivers automatic cost savings by moving data between access tiers when access patterns change.
The S3 Intelligent-Tiering storage class is suitable for objects larger than 128 KB that you plan to store for at least 30 days. If the size of an object is less than 128 KB, it is not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the frequent access tier rates in the S3 Intelligent-Tiering storage class.
If you delete an object before the end of the 30-day minimum storage duration period, you are charged for 30 days. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to PutBucketIntelligentTieringConfiguration
include:
", + "PutBucketIntelligentTieringConfiguration": "Puts an S3 Intelligent-Tiering configuration to the specified bucket. You can have up to 1,000 S3 Intelligent-Tiering configurations per bucket.
The S3 Intelligent-Tiering storage class is designed to optimize storage costs by automatically moving data to the most cost-effective storage access tier, without additional operational overhead. S3 Intelligent-Tiering delivers automatic cost savings by moving data between access tiers when access patterns change.
The S3 Intelligent-Tiering storage class is suitable for objects larger than 128 KB that you plan to store for at least 30 days. If the size of an object is less than 128 KB, it is not eligible for auto-tiering. Smaller objects can be stored, but they are always charged at the frequent access tier rates in the S3 Intelligent-Tiering storage class.
If you delete an object before the end of the 30-day minimum storage duration period, you are charged for 30 days. For more information, see Storage class for automatically optimizing frequently and infrequently accessed objects.
Operations related to PutBucketIntelligentTieringConfiguration
include:
You only need S3 Intelligent-Tiering enabled on a bucket if you want to automatically move objects stored in the S3 Intelligent-Tiering storage class to the Archive Access or Deep Archive Access tier.
Special Errors
HTTP 400 Bad Request Error
Code: InvalidArgument
Cause: Invalid Argument
HTTP 400 Bad Request Error
Code: TooManyConfigurations
Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.
HTTP 403 Forbidden Error
Code: AccessDenied
Cause: You are not the owner of the specified bucket, or you do not have the s3:PutIntelligentTieringConfiguration
bucket permission to set the configuration on the bucket.
This implementation of the PUT
operation adds an inventory configuration (identified by the inventory ID) to the bucket. You can have up to 1,000 inventory configurations per bucket.
Amazon S3 inventory generates inventories of the objects in the bucket on a daily or weekly basis, and the results are published to a flat file. The bucket that is inventoried is called the source bucket, and the bucket where the inventory flat file is stored is called the destination bucket. The destination bucket must be in the same AWS Region as the source bucket.
When you configure an inventory for a source bucket, you specify the destination bucket where you want the inventory to be stored, and whether to generate the inventory daily or weekly. You can also configure what object metadata to include and whether to inventory all object versions or only current versions. For more information, see Amazon S3 Inventory in the Amazon Simple Storage Service Developer Guide.
You must create a bucket policy on the destination bucket to grant permissions to Amazon S3 to write objects to the bucket in the defined location. For an example policy, see Granting Permissions for Amazon S3 Inventory and Storage Class Analysis.
To use this operation, you must have permissions to perform the s3:PutInventoryConfiguration
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
Special Errors
HTTP 400 Bad Request Error
Code: InvalidArgument
Cause: Invalid Argument
HTTP 400 Bad Request Error
Code: TooManyConfigurations
Cause: You are attempting to create a new configuration but have already reached the 1,000-configuration limit.
HTTP 403 Forbidden Error
Code: AccessDenied
Cause: You are not the owner of the specified bucket, or you do not have the s3:PutInventoryConfiguration
bucket permission to set the configuration on the bucket.
Related Resources
For an updated version of this API, see PutBucketLifecycleConfiguration. This version has been deprecated. Existing lifecycle configurations will work. For new lifecycle configurations, use the updated API.
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. For information about lifecycle configuration, see Object Lifecycle Management in the Amazon Simple Storage Service Developer Guide.
By default, all Amazon S3 resources, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration) are private. Only the resource owner, the AWS account that created the resource, can access it. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, users must get the s3:PutLifecycleConfiguration
permission.
You can also explicitly deny permissions. Explicit denial also supersedes any other permissions. If you want to prevent users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:
s3:DeleteObject
s3:DeleteObjectVersion
s3:PutLifecycleConfiguration
For more information about permissions, see Managing Access Permissions to your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
For more examples of transitioning objects to storage classes such as STANDARD_IA or ONEZONE_IA, see Examples of Lifecycle Configuration.
Related Resources
GetBucketLifecycle (Deprecated)
By default, a resource owner—in this case, a bucket owner, which is the AWS account that created the bucket—can perform any of the operations. A resource owner can also grant others permission to perform the operation. For more information, see the following topics in the Amazon Simple Storage Service Developer Guide:
Creates a new lifecycle configuration for the bucket or replaces an existing lifecycle configuration. For information about lifecycle configuration, see Object Lifecycle Management.
Bucket lifecycle configuration now supports specifying a lifecycle rule using an object key name prefix, one or more object tags, or a combination of both. Accordingly, this section describes the latest API. The previous version of the API supported filtering based only on an object key name prefix, which is supported for backward compatibility. For the related API description, see PutBucketLifecycle.
Rules
You specify the lifecycle configuration in your request body. The lifecycle configuration is specified as XML consisting of one or more rules. Each rule consists of the following:
A filter identifying a subset of objects to which the rule applies. The filter can be based on a key name prefix, object tags, or a combination of both.
A status indicating whether the rule is in effect.
One or more lifecycle transition and expiration actions that you want Amazon S3 to perform on the objects identified by the filter. If the state of your bucket is versioning-enabled or versioning-suspended, you can have many versions of the same object (one current version and zero or more noncurrent versions). Amazon S3 provides predefined actions that you can specify for current and noncurrent object versions.
For more information, see Object Lifecycle Management and Lifecycle Configuration Elements.
Permissions
By default, all Amazon S3 resources are private, including buckets, objects, and related subresources (for example, lifecycle configuration and website configuration). Only the resource owner (that is, the AWS account that created it) can access the resource. The resource owner can optionally grant access permissions to others by writing an access policy. For this operation, a user must get the s3:PutLifecycleConfiguration permission.
You can also explicitly deny permissions. Explicit deny also supersedes any other permissions. If you want to block users or accounts from removing or deleting objects from your bucket, you must deny them permissions for the following actions:
s3:DeleteObject
s3:DeleteObjectVersion
s3:PutLifecycleConfiguration
For more information about permissions, see Managing Access Permissions to Your Amazon S3 Resources.
The following operations are related to PutBucketLifecycleConfiguration
:
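A sketch of a single rule with the filter, status, and actions described above, using aws-sdk-go; the bucket name, prefix, and day counts are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("my-bucket"), // placeholder
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []*s3.LifecycleRule{{
				ID:     aws.String("expire-old-logs"),
				Status: aws.String("Enabled"),                                   // whether the rule is in effect
				Filter: &s3.LifecycleRuleFilter{Prefix: aws.String("logs/")},    // key name prefix filter
				Transitions: []*s3.Transition{{
					Days:         aws.Int64(30),
					StorageClass: aws.String(s3.TransitionStorageClassStandardIa),
				}},
				Expiration: &s3.LifecycleExpiration{Days: aws.Int64(365)},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```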
Applies a Legal Hold configuration to the specified object.
This action is not supported by Amazon S3 on Outposts.
Related Resources
", "PutObjectLockConfiguration": "Places an Object Lock configuration on the specified bucket. The rule specified in the Object Lock configuration will be applied by default to every new object placed in the specified bucket.
DefaultRetention
requires either Days or Years. You can't specify both at the same time.
Related Resources
", "PutObjectRetention": "Places an Object Retention configuration on an object.
This action is not supported by Amazon S3 on Outposts.
Related Resources
", - "PutObjectTagging": "Sets the supplied tag-set to an object that already exists in a bucket.
A tag is a key-value pair. You can associate tags with an object by sending a PUT request against the tagging subresource that is associated with the object. You can retrieve tags by sending a GET request. For more information, see GetObjectTagging.
For restrictions related to tag characters and encodings, see Tag Restrictions. Note that Amazon S3 limits the maximum number of tags to 10 tags per object.
To use this operation, you must have permission to perform the s3:PutObjectTagging
action. By default, the bucket owner has this permission and can grant this permission to others.
To put tags of any other version, use the versionId
query parameter. You also need permission for the s3:PutObjectVersionTagging
action.
For information about the Amazon S3 object tagging feature, see Object Tagging.
Special Errors
Code: InvalidTagError
Cause: The tag provided was not a valid tag. This error can occur if the tag did not pass input validation. For more information, see Object Tagging.
Code: MalformedXMLError
Cause: The XML provided does not match the schema.
Code: OperationAbortedError
Cause: A conflicting conditional operation is currently in progress against this resource. Please try again.
Code: InternalError
Cause: The service was unable to apply the provided tag to the object.
Related Resources
", + "PutObjectTagging": "Sets the supplied tag-set to an object that already exists in a bucket.
A tag is a key-value pair. You can associate tags with an object by sending a PUT request against the tagging subresource that is associated with the object. You can retrieve tags by sending a GET request. For more information, see GetObjectTagging.
For restrictions related to tag characters and encodings, see Tag Restrictions. Note that Amazon S3 limits the maximum number of tags to 10 tags per object.
To use this operation, you must have permission to perform the s3:PutObjectTagging
action. By default, the bucket owner has this permission and can grant this permission to others.
To put tags of any other version, use the versionId
query parameter. You also need permission for the s3:PutObjectVersionTagging
action.
For information about the Amazon S3 object tagging feature, see Object Tagging.
Special Errors
Code: InvalidTagError
Cause: The tag provided was not a valid tag. This error can occur if the tag did not pass input validation. For more information, see Object Tagging.
Code: MalformedXMLError
Cause: The XML provided does not match the schema.
Code: OperationAbortedError
Cause: A conflicting conditional operation is currently in progress against this resource. Please try again.
Code: InternalError
Cause: The service was unable to apply the provided tag to the object.
Related Resources
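The same api-2.json hunk adds RequestPayer to PutObjectTaggingRequest as well. A minimal aws-sdk-go sketch; the bucket, key, tag values, and region are placeholders:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.PutObjectTagging(&s3.PutObjectTaggingInput{
		Bucket: aws.String("my-requester-pays-bucket"), // placeholder
		Key:    aws.String("path/to/object"),           // placeholder
		Tagging: &s3.Tagging{TagSet: []*s3.Tag{
			{Key: aws.String("project"), Value: aws.String("blue")}, // up to 10 tags per object
		}},
		RequestPayer: aws.String(s3.RequestPayerRequester), // sends x-amz-request-payer: requester
	})
	if err != nil {
		log.Fatal(err)
	}
}
```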
", "PutPublicAccessBlock": "Creates or modifies the PublicAccessBlock
configuration for an Amazon S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock
permission. For more information about Amazon S3 permissions, see Specifying Permissions in a Policy.
When Amazon S3 evaluates the PublicAccessBlock
configuration for a bucket or an object, it checks the PublicAccessBlock
configuration for both the bucket (or the bucket that contains the object) and the bucket owner's account. If the PublicAccessBlock
configurations are different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings.
For more information about when Amazon S3 considers a bucket or an object public, see The Meaning of \"Public\".
Related Resources
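A sketch that enables all four public-access-block settings with aws-sdk-go; the bucket name and region are placeholders, and the effective policy is the most restrictive combination of these and the account-level settings:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.PutPublicAccessBlock(&s3.PutPublicAccessBlockInput{
		Bucket: aws.String("my-bucket"), // placeholder
		PublicAccessBlockConfiguration: &s3.PublicAccessBlockConfiguration{
			BlockPublicAcls:       aws.Bool(true),
			IgnorePublicAcls:      aws.Bool(true),
			BlockPublicPolicy:     aws.Bool(true),
			RestrictPublicBuckets: aws.Bool(true),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```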
Restores an archived copy of an object back into Amazon S3
This action is not supported by Amazon S3 on Outposts.
This action performs the following types of requests:
select
- Perform a select query on an archived object
restore an archive
- Restore an archived object
To use this operation, you must have permissions to perform the s3:RestoreObject
action. The bucket owner has this permission by default and can grant this permission to others. For more information about permissions, see Permissions Related to Bucket Subresource Operations and Managing Access Permissions to Your Amazon S3 Resources in the Amazon Simple Storage Service Developer Guide.
Querying Archives with Select Requests
You use a select type of request to perform SQL queries on archived objects. The archived objects that are being queried by the select request must be formatted as uncompressed comma-separated values (CSV) files. You can run queries and custom analytics on your archived data without having to restore your data to a hotter Amazon S3 tier. For an overview about select requests, see Querying Archived Objects in the Amazon Simple Storage Service Developer Guide.
When making a select request, do the following:
Define an output location for the select query's output. This must be an Amazon S3 bucket in the same AWS Region as the bucket that contains the archive object that is being queried. The AWS account that initiates the job must have permissions to write to the S3 bucket. You can specify the storage class and encryption for the output objects stored in the bucket. For more information about output, see Querying Archived Objects in the Amazon Simple Storage Service Developer Guide.
For more information about the S3
structure in the request body, see the following:
Managing Access with ACLs in the Amazon Simple Storage Service Developer Guide
Protecting Data Using Server-Side Encryption in the Amazon Simple Storage Service Developer Guide
Define the SQL expression for the SELECT
type of restoration for your query in the request body's SelectParameters
structure. You can use expressions like the following examples.
The following expression returns all records from the specified object.
SELECT * FROM Object
Assuming that you are not using any headers for data stored in the object, you can specify columns with positional headers.
SELECT s._1, s._2 FROM Object s WHERE s._3 > 100
If you have headers and you set the fileHeaderInfo
in the CSV
structure in the request body to USE
, you can specify headers in the query. (If you set the fileHeaderInfo
field to IGNORE
, the first row is skipped for the query.) You cannot mix ordinal positions with header column names.
SELECT s.Id, s.FirstName, s.SSN FROM S3Object s
For more information about using SQL with S3 Glacier Select restore, see SQL Reference for Amazon S3 Select and S3 Glacier Select in the Amazon Simple Storage Service Developer Guide.
When making a select request, you can also do the following:
To expedite your queries, specify the Expedited
tier. For more information about tiers, see \"Restoring Archives,\" later in this topic.
Specify details about the data serialization format of both the input object that is being queried and the serialization of the CSV-encoded query results.
The following are additional important facts about the select feature:
The output results are new Amazon S3 objects. Unlike archive retrievals, they are stored until explicitly deleted, either manually or through a lifecycle policy.
You can issue more than one select request on the same Amazon S3 object. Amazon S3 doesn't deduplicate requests, so avoid issuing duplicate requests.
Amazon S3 accepts a select request even if the object has already been restored. A select request doesn’t return error response 409
.
Restoring objects
Objects that you archive to the S3 Glacier or S3 Glacier Deep Archive storage class, or to the S3 Intelligent-Tiering Archive or S3 Intelligent-Tiering Deep Archive tiers, are not accessible in real time. For objects in the Archive Access or Deep Archive Access tiers, you must first initiate a restore request and then wait until the object is moved into the Frequent Access tier. For objects in the S3 Glacier or S3 Glacier Deep Archive storage classes, you must first initiate a restore request and then wait until a temporary copy of the object is available. To access an archived object, you must restore the object for the duration (number of days) that you specify.
To restore a specific object version, you can provide a version ID. If you don't provide a version ID, Amazon S3 restores the current version.
When restoring an archived object (or using a select request), you can specify one of the following data access tier options in the Tier
element of the request body:
Expedited
- Expedited retrievals allow you to quickly access your data stored in the S3 Glacier storage class or S3 Intelligent-Tiering Archive tier when occasional urgent requests for a subset of archives are required. For all but the largest archived objects (250 MB+), data accessed using Expedited retrievals is typically made available within 1–5 minutes. Provisioned capacity ensures that retrieval capacity for Expedited retrievals is available when you need it. Expedited retrievals and provisioned capacity are not available for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier.
Standard
- Standard retrievals allow you to access any of your archived objects within several hours. This is the default option for retrieval requests that do not specify the retrieval option. Standard retrievals typically finish within 3–5 hours for objects stored in the S3 Glacier storage class or S3 Intelligent-Tiering Archive tier. They typically finish within 12 hours for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier. Standard retrievals are free for objects stored in S3 Intelligent-Tiering.
Bulk
- Bulk retrievals are the lowest-cost retrieval option in S3 Glacier, enabling you to retrieve large amounts, even petabytes, of data inexpensively. Bulk retrievals typically finish within 5–12 hours for objects stored in the S3 Glacier storage class or S3 Intelligent-Tiering Archive tier. They typically finish within 48 hours for objects stored in the S3 Glacier Deep Archive storage class or S3 Intelligent-Tiering Deep Archive tier. Bulk retrievals are free for objects stored in S3 Intelligent-Tiering.
For more information about archive retrieval options and provisioned capacity for Expedited
data access, see Restoring Archived Objects in the Amazon Simple Storage Service Developer Guide.
You can use Amazon S3 restore speed upgrade to switch an in-progress restore to a faster speed. For more information, see Upgrading the speed of an in-progress restore in the Amazon Simple Storage Service Developer Guide.
To get the status of object restoration, you can send a HEAD
request. Operations return the x-amz-restore
header, which provides information about the restoration status, in the response. You can use Amazon S3 event notifications to notify you when a restore is initiated or completed. For more information, see Configuring Amazon S3 Event Notifications in the Amazon Simple Storage Service Developer Guide.
After restoring an archived object, you can update the restoration period by reissuing the request with a new period. Amazon S3 updates the restoration period relative to the current time and charges only for the request; there are no data transfer charges. You cannot update the restoration period when Amazon S3 is actively processing your current restore request for the object.
If your bucket has a lifecycle configuration with a rule that includes an expiration action, the object expiration overrides the life span that you specify in a restore request. For example, if you restore an object copy for 10 days, but the object is scheduled to expire in 3 days, Amazon S3 deletes the object in 3 days. For more information about lifecycle configuration, see PutBucketLifecycleConfiguration and Object Lifecycle Management in Amazon Simple Storage Service Developer Guide.
Responses
A successful operation returns either the 200 OK
or 202 Accepted
status code.
If the object is not previously restored, then Amazon S3 returns 202 Accepted
in the response.
If the object is previously restored, Amazon S3 returns 200 OK
in the response.
Special Errors
Code: RestoreAlreadyInProgress
Cause: Object restore is already in progress. (This error does not apply to SELECT type requests.)
HTTP Status Code: 409 Conflict
SOAP Fault Code Prefix: Client
Code: GlacierExpeditedRetrievalNotAvailable
Cause: Expedited retrievals are currently not available. Try again later. (Returned if there is insufficient capacity to process the Expedited request. This error applies only to Expedited retrievals and not to S3 Standard or Bulk retrievals.)
HTTP Status Code: 503
SOAP Fault Code Prefix: N/A
Related Resources
SQL Reference for Amazon S3 Select and S3 Glacier Select in the Amazon Simple Storage Service Developer Guide
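A sketch of a Standard-tier restore for 10 days with aws-sdk-go; the bucket, key, and region are placeholders (a 202 response means the restore was initiated, a 200 that a restored copy already exists):

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := s3.New(sess)

	_, err := svc.RestoreObject(&s3.RestoreObjectInput{
		Bucket: aws.String("my-bucket"),          // placeholder
		Key:    aws.String("archive/object.dat"), // placeholder
		RestoreRequest: &s3.RestoreRequest{
			Days: aws.Int64(10), // how long the temporary copy stays available
			GlacierJobParameters: &s3.GlacierJobParameters{
				Tier: aws.String(s3.TierStandard), // or TierExpedited / TierBulk
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	// Poll with HeadObject and inspect the x-amz-restore header for completion.
}
```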
This operation filters the contents of an Amazon S3 object based on a simple structured query language (SQL) statement. In the request, along with the SQL expression, you must also specify a data serialization format (JSON, CSV, or Apache Parquet) of the object. Amazon S3 uses this format to parse object data into records, and returns only records that match the specified SQL expression. You must also specify the data serialization format for the response.
This action is not supported by Amazon S3 on Outposts.
For more information about Amazon S3 Select, see Selecting Content from Objects in the Amazon Simple Storage Service Developer Guide.
For more information about using SQL with Amazon S3 Select, see SQL Reference for Amazon S3 Select and S3 Glacier Select in the Amazon Simple Storage Service Developer Guide.
Permissions
You must have s3:GetObject
permission for this operation. Amazon S3 Select does not support anonymous access. For more information about permissions, see Specifying Permissions in a Policy in the Amazon Simple Storage Service Developer Guide.
Object Data Formats
You can use Amazon S3 Select to query objects that have the following format properties:
CSV, JSON, and Parquet - Objects must be in CSV, JSON, or Parquet format.
UTF-8 - UTF-8 is the only encoding type Amazon S3 Select supports.
GZIP or BZIP2 - CSV and JSON files can be compressed using GZIP or BZIP2. GZIP and BZIP2 are the only compression formats that Amazon S3 Select supports for CSV and JSON files. Amazon S3 Select supports columnar compression for Parquet using GZIP or Snappy. Amazon S3 Select does not support whole-object compression for Parquet objects.
Server-side encryption - Amazon S3 Select supports querying objects that are protected with server-side encryption.
For objects that are encrypted with customer-provided encryption keys (SSE-C), you must use HTTPS, and you must use the headers that are documented in GetObject. For more information about SSE-C, see Server-Side Encryption (Using Customer-Provided Encryption Keys) in the Amazon Simple Storage Service Developer Guide.
For objects that are encrypted with Amazon S3 managed encryption keys (SSE-S3) and customer master keys (CMKs) stored in AWS Key Management Service (SSE-KMS), server-side encryption is handled transparently, so you don't need to specify anything. For more information about server-side encryption, including SSE-S3 and SSE-KMS, see Protecting Data Using Server-Side Encryption in the Amazon Simple Storage Service Developer Guide.
Working with the Response Body
Because the response size is unknown, Amazon S3 Select streams the response as a series of messages and includes a Transfer-Encoding header with chunked as its value in the response. For more information, see Appendix: SelectObjectContent Response.
GetObject Support
The SelectObjectContent
operation does not support the following GetObject
functionality. For more information, see GetObject.
Range
: Although you can specify a scan range for an Amazon S3 Select request (see SelectObjectContentRequest - ScanRange in the request parameters), you cannot specify the range of bytes of an object to return.
GLACIER, DEEP_ARCHIVE, and REDUCED_REDUNDANCY storage classes: You cannot specify the GLACIER, DEEP_ARCHIVE, or REDUCED_REDUNDANCY storage classes. For more information about storage classes, see Storage Classes in the Amazon Simple Storage Service Developer Guide.
Special Errors
For a list of special errors for this operation, see List of SELECT Object Content Error Codes.
Related Resources
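A minimal sketch of the SelectObjectContent flow described above, assuming a hypothetical CSV object with a header row; the bucket, key, and SQL expression are placeholders. The chunked response arrives as an event stream whose Records events carry the matching rows:

```go
// Sketch: filter a CSV object with S3 Select and stream the results.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	out, err := svc.SelectObjectContent(&s3.SelectObjectContentInput{
		Bucket:         aws.String("my-bucket"),
		Key:            aws.String("data/records.csv"),
		ExpressionType: aws.String(s3.ExpressionTypeSql),
		Expression:     aws.String("SELECT s.name FROM S3Object s WHERE s.city = 'Seattle'"),
		InputSerialization: &s3.InputSerialization{
			CSV: &s3.CSVInput{FileHeaderInfo: aws.String(s3.FileHeaderInfoUse)},
		},
		OutputSerialization: &s3.OutputSerialization{
			CSV: &s3.CSVOutput{},
		},
	})
	if err != nil {
		fmt.Println("select failed:", err)
		return
	}
	defer out.EventStream.Close()

	// The response is streamed as messages; Records events hold the rows.
	for ev := range out.EventStream.Events() {
		if records, ok := ev.(*s3.RecordsEvent); ok {
			fmt.Print(string(records.Payload))
		}
	}
	if err := out.EventStream.Err(); err != nil {
		fmt.Println("stream error:", err)
	}
}
```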
", @@ -671,8 +671,8 @@ "refs": { "ListMultipartUploadsOutput$CommonPrefixes": "If you specify a delimiter in the request, then the result returns each distinct key prefix containing the delimiter in a CommonPrefixes
element. The distinct key prefixes are returned in the Prefix
child element.
All of the keys rolled up into a common prefix count as a single return when calculating the number of returns.
", - "ListObjectsOutput$CommonPrefixes": "All of the keys rolled up in a common prefix count as a single return when calculating the number of returns.
A response can contain CommonPrefixes only if you specify a delimiter.
CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by the delimiter.
CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix.
For example, if the prefix is notes/ and the delimiter is a slash (/) as in notes/summer/july, the common prefix is notes/summer/. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.
", - "ListObjectsV2Output$CommonPrefixes": "All of the keys rolled up into a common prefix count as a single return when calculating the number of returns.
A response can contain CommonPrefixes
only if you specify a delimiter.
CommonPrefixes
contains all (if there are any) keys between Prefix
and the next occurrence of the string specified by a delimiter.
CommonPrefixes
lists keys that act like subdirectories in the directory specified by Prefix
.
For example, if the prefix is notes/
and the delimiter is a slash (/
) as in notes/summer/july
, the common prefix is notes/summer/
. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.
All of the keys (up to 1,000) rolled up in a common prefix count as a single return when calculating the number of returns.
A response can contain CommonPrefixes only if you specify a delimiter.
CommonPrefixes contains all (if there are any) keys between Prefix and the next occurrence of the string specified by the delimiter.
CommonPrefixes lists keys that act like subdirectories in the directory specified by Prefix.
For example, if the prefix is notes/ and the delimiter is a slash (/) as in notes/summer/july, the common prefix is notes/summer/. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.
", + "ListObjectsV2Output$CommonPrefixes": "All of the keys (up to 1,000) rolled up into a common prefix count as a single return when calculating the number of returns.
A response can contain CommonPrefixes
only if you specify a delimiter.
CommonPrefixes
contains all (if there are any) keys between Prefix
and the next occurrence of the string specified by a delimiter.
CommonPrefixes
lists keys that act like subdirectories in the directory specified by Prefix
.
For example, if the prefix is notes/
and the delimiter is a slash (/
) as in notes/summer/july
, the common prefix is notes/summer/
. All of the keys that roll up into a common prefix count as a single return when calculating the number of returns.
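The notes/ example above maps to a ListObjectsV2 call like the following minimal sketch; the bucket name is a placeholder:

```go
// Sketch: list keys under the prefix "notes/" with "/" as the delimiter,
// so keys such as notes/summer/july roll up into the common prefix
// notes/summer/.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	out, err := svc.ListObjectsV2(&s3.ListObjectsV2Input{
		Bucket:    aws.String("my-bucket"),
		Prefix:    aws.String("notes/"),
		Delimiter: aws.String("/"),
	})
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}

	// A common prefix counts as a single return when calculating the
	// number of returns.
	for _, cp := range out.CommonPrefixes {
		fmt.Println("common prefix:", aws.StringValue(cp.Prefix))
	}
	for _, obj := range out.Contents {
		fmt.Println("key:", aws.StringValue(obj.Key))
	}
	fmt.Println("key count:", aws.Int64Value(out.KeyCount))
}
```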
The MD5 hash of the OwnershipControls
request body.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", "PutBucketPolicyRequest$ContentMD5": "The MD5 hash of the request body.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", "PutBucketReplicationRequest$ContentMD5": "The base64-encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", - "PutBucketRequestPaymentRequest$ContentMD5": ">The base64-encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", + "PutBucketRequestPaymentRequest$ContentMD5": "The base64-encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", "PutBucketTaggingRequest$ContentMD5": "The base64-encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", "PutBucketVersioningRequest$ContentMD5": ">The base64-encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", "PutBucketWebsiteRequest$ContentMD5": "The base64-encoded 128-bit MD5 digest of the data. You must use this header as a message integrity check to verify that the request body was not corrupted in transit. For more information, see RFC 1864.
For requests made using the AWS Command Line Interface (CLI) or AWS SDKs, this field is calculated automatically.
", @@ -1157,7 +1157,7 @@ "refs": { "CompleteMultipartUploadOutput$ETag": "Entity tag that identifies the newly created object's data. Objects with different object data will have different entity tags. The entity tag is an opaque string. The entity tag may or may not be an MD5 digest of the object data. If the entity tag is not an MD5 digest of the object data, it will contain one or more nonhexadecimal characters and/or will consist of less than 32 or more than 32 hexadecimal digits.
", "CompletedPart$ETag": "Entity tag returned when the part was uploaded.
", - "CopyObjectResult$ETag": "Returns the ETag of the new object. The ETag reflects only changes to the contents of an object, not its metadata. The source and destination ETag is identical for a successfully copied object.
", + "CopyObjectResult$ETag": "Returns the ETag of the new object. The ETag reflects only changes to the contents of an object, not its metadata. The source and destination ETag is identical for a successfully copied non-multipart object.
", "CopyPartResult$ETag": "Entity tag of the object.
", "GetObjectOutput$ETag": "An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
", "HeadObjectOutput$ETag": "An ETag is an opaque identifier assigned by a web server to a specific version of a resource found at a URL.
", @@ -2037,7 +2037,7 @@ "KeyCount": { "base": null, "refs": { - "ListObjectsV2Output$KeyCount": "KeyCount is the number of keys returned with this request. KeyCount will always be less than equals to MaxKeys field. Say you ask for 50 keys, your result will include less than equals 50 keys
" + "ListObjectsV2Output$KeyCount": "KeyCount is the number of keys returned with this request. KeyCount will always be less than or equals to MaxKeys field. Say you ask for 50 keys, your result will include less than equals 50 keys
" } }, "KeyMarker": { @@ -2052,7 +2052,7 @@ "KeyPrefixEquals": { "base": null, "refs": { - "Condition$KeyPrefixEquals": "The object key name prefix when the redirect is applied. For example, to redirect requests for ExamplePage.html
, the key prefix will be ExamplePage.html
. To redirect request for all pages with the prefix docs/
, the key prefix will be /docs
, which identifies all objects in the docs/
folder. Required when the parent element Condition
is specified and sibling HttpErrorCodeReturnedEquals
is not specified. If both conditions are specified, both must be true for the redirect to be applied.
The object key name prefix when the redirect is applied. For example, to redirect requests for ExamplePage.html
, the key prefix will be ExamplePage.html
. To redirect requests for all pages with the prefix docs/
, the key prefix will be docs/
, which identifies all objects in the docs/
folder. Required when the parent element Condition
is specified and sibling HttpErrorCodeReturnedEquals
is not specified. If both conditions are specified, both must be true for the redirect to be applied.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
Returns the date that the object was last modified.
", + "CopyObjectResult$LastModified": "Creation date of the object.
", "CopyPartResult$LastModified": "Date and time at which the object was uploaded.
", "DeleteMarkerEntry$LastModified": "Date and time the object was last modified.
", - "GetObjectOutput$LastModified": "Last modified date of the object
", - "HeadObjectOutput$LastModified": "Last modified date of the object
", - "Object$LastModified": "The date the Object was Last Modified
", + "GetObjectOutput$LastModified": "Creation date of the object.
", + "HeadObjectOutput$LastModified": "Creation date of the object.
", + "Object$LastModified": "Creation date of the object.
", "ObjectVersion$LastModified": "Date and time the object was last modified.
", "Part$LastModified": "Date and time at which the part was uploaded.
" } @@ -2575,10 +2575,10 @@ "CreateMultipartUploadRequest$Key": "Object key for which the multipart upload is to be initiated.
", "DeleteMarkerEntry$Key": "The object key.
", "DeleteObjectRequest$Key": "Key name of the object to delete.
", - "DeleteObjectTaggingRequest$Key": "Name of the object key.
", + "DeleteObjectTaggingRequest$Key": "The key that identifies the object in the bucket from which to remove all tags.
", "DeletedObject$Key": "The name of the deleted object.
", "Error$Key": "The error key.
", - "ErrorDocument$Key": "The object key name to use when a 4XX class error occurs.
", + "ErrorDocument$Key": "The object key name to use when a 4XX class error occurs.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
The key of the object for which to get the ACL information.
", "GetObjectLegalHoldRequest$Key": "The key name for the object whose Legal Hold status you want to retrieve.
", "GetObjectRequest$Key": "Key of the object to get.
", @@ -2590,7 +2590,7 @@ "ListPartsRequest$Key": "Object key for which the multipart upload was initiated.
", "MultipartUpload$Key": "Key of the object for which the multipart upload was initiated.
", "Object$Key": "The name that you assign to an object. You use the object key to retrieve the object.
", - "ObjectIdentifier$Key": "Key name of the object to delete.
", + "ObjectIdentifier$Key": "Key name of the object.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
The object key.
", "PutObjectAclRequest$Key": "Key for which the PUT operation was initiated.
When using this API with an access point, you must direct requests to the access point hostname. The access point hostname takes the form AccessPointName-AccountId.s3-accesspoint.Region.amazonaws.com. When using this operation with an access point through the AWS SDKs, you provide the access point ARN in place of the bucket name. For more information about access point ARNs, see Using Access Points in the Amazon Simple Storage Service Developer Guide.
When using this API with Amazon S3 on Outposts, you must direct requests to the S3 on Outposts hostname. The S3 on Outposts hostname takes the form AccessPointName-AccountId.outpostID.s3-outposts.Region.amazonaws.com. When using this operation using S3 on Outposts through the AWS SDKs, you provide the Outposts bucket ARN in place of the bucket name. For more information about S3 on Outposts ARNs, see Using S3 on Outposts in the Amazon Simple Storage Service Developer Guide.
", "PutObjectLegalHoldRequest$Key": "The key name for the object that you want to place a Legal Hold on.
", @@ -2891,12 +2891,12 @@ "AnalyticsS3BucketDestination$Prefix": "The prefix to use when exporting data. The prefix is prepended to all results.
", "CommonPrefix$Prefix": "Container for the specified common prefix.
", "IntelligentTieringAndOperator$Prefix": "An object key name prefix that identifies the subset of objects to which the configuration applies.
", - "IntelligentTieringFilter$Prefix": "An object key name prefix that identifies the subset of objects to which the rule applies.
", + "IntelligentTieringFilter$Prefix": "An object key name prefix that identifies the subset of objects to which the rule applies.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
The prefix that an object must have to be included in the inventory results.
", "InventoryS3BucketDestination$Prefix": "The prefix that is prepended to all inventory results.
", - "LifecycleRule$Prefix": "Prefix identifying one or more objects to which the rule applies. This is No longer used; use Filter
instead.
Prefix identifying one or more objects to which the rule applies. This is No longer used; use Filter
instead.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
Prefix identifying one or more objects to which the rule applies.
", - "LifecycleRuleFilter$Prefix": "Prefix identifying one or more objects to which the rule applies.
", + "LifecycleRuleFilter$Prefix": "Prefix identifying one or more objects to which the rule applies.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
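Since the top-level Prefix is deprecated in favor of Filter, here is a minimal sketch of a lifecycle rule scoped by Filter.Prefix; the bucket name, rule ID, prefix, and expiration are placeholder assumptions:

```go
// Sketch: scope a lifecycle rule with Filter.Prefix instead of the
// deprecated top-level Prefix field.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
		Bucket: aws.String("my-bucket"),
		LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
			Rules: []*s3.LifecycleRule{{
				ID:     aws.String("expire-old-logs"),
				Status: aws.String(s3.ExpirationStatusEnabled),
				Filter: &s3.LifecycleRuleFilter{
					Prefix: aws.String("logs/"),
				},
				Expiration: &s3.LifecycleExpiration{Days: aws.Int64(30)},
			}},
		},
	})
	if err != nil {
		fmt.Println("put lifecycle failed:", err)
	}
}
```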
When a prefix is provided in the request, this field contains the specified prefix. The result contains only keys starting with the specified prefix.
", "ListMultipartUploadsRequest$Prefix": "Lists in-progress uploads only for those keys that begin with the specified prefix. You can use prefixes to separate a bucket into different grouping of keys. (You can think of using prefix to make groups in the same way you'd use a folder in a file system.)
", "ListObjectVersionsOutput$Prefix": "Selects objects that start with the value supplied by this parameter.
", @@ -2907,10 +2907,10 @@ "ListObjectsV2Request$Prefix": "Limits the response to keys that begin with the specified prefix.
", "MetricsAndOperator$Prefix": "The prefix used when evaluating an AND predicate.
", "MetricsFilter$Prefix": "The prefix used when evaluating a metrics filter.
", - "ReplicationRule$Prefix": "An object key name prefix that identifies the object or objects to which the rule applies. The maximum prefix length is 1,024 characters. To include all objects in a bucket, specify an empty string.
", + "ReplicationRule$Prefix": "An object key name prefix that identifies the object or objects to which the rule applies. The maximum prefix length is 1,024 characters. To include all objects in a bucket, specify an empty string.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
An object key name prefix that identifies the subset of objects to which the rule applies.
", - "ReplicationRuleFilter$Prefix": "An object key name prefix that identifies the subset of objects to which the rule applies.
", - "Rule$Prefix": "Object key prefix that identifies one or more objects to which this rule applies.
" + "ReplicationRuleFilter$Prefix": "An object key name prefix that identifies the subset of objects to which the rule applies.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
Object key prefix that identifies one or more objects to which this rule applies.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
The object key prefix to use in the redirect request. For example, to redirect requests for all pages with prefix docs/
(objects in the docs/
folder) to documents/
, you can set a condition block with KeyPrefixEquals
set to docs/
and in the Redirect set ReplaceKeyPrefixWith
to /documents
. Not required if one of the siblings is present. Can be present only if ReplaceKeyWith
is not provided.
The object key prefix to use in the redirect request. For example, to redirect requests for all pages with prefix docs/
(objects in the docs/
folder) to documents/
, you can set a condition block with KeyPrefixEquals
set to docs/
and in the Redirect set ReplaceKeyPrefixWith
to /documents
. Not required if one of the siblings is present. Can be present only if ReplaceKeyWith
is not provided.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
The specific object key to use in the redirect request. For example, redirect request to error.html
. Not required if one of the siblings is present. Can be present only if ReplaceKeyPrefixWith
is not provided.
The specific object key to use in the redirect request. For example, redirect request to error.html
. Not required if one of the siblings is present. Can be present only if ReplaceKeyPrefixWith
is not provided.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
Confirms that the requester knows that she or he will be charged for the list objects request. Bucket owners need not specify this parameter in their requests.
", @@ -3335,6 +3336,7 @@ "PutObjectLockConfigurationRequest$RequestPayer": null, "PutObjectRequest$RequestPayer": null, "PutObjectRetentionRequest$RequestPayer": null, + "PutObjectTaggingRequest$RequestPayer": null, "RestoreObjectRequest$RequestPayer": null, "UploadPartCopyRequest$RequestPayer": null, "UploadPartRequest$RequestPayer": null @@ -3475,7 +3477,7 @@ "CreateMultipartUploadOutput$SSECustomerAlgorithm": "If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used.
", "CreateMultipartUploadRequest$SSECustomerAlgorithm": "Specifies the algorithm to use to when encrypting the object (for example, AES256).
", "GetObjectOutput$SSECustomerAlgorithm": "If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used.
", - "GetObjectRequest$SSECustomerAlgorithm": "Specifies the algorithm to use to when encrypting the object (for example, AES256).
", + "GetObjectRequest$SSECustomerAlgorithm": "Specifies the algorithm to use to when decrypting the object (for example, AES256).
", "HeadObjectOutput$SSECustomerAlgorithm": "If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used.
", "HeadObjectRequest$SSECustomerAlgorithm": "Specifies the algorithm to use to when encrypting the object (for example, AES256).
", "PutObjectOutput$SSECustomerAlgorithm": "If server-side encryption with a customer-provided encryption key was requested, the response will include this header confirming the encryption algorithm used.
", @@ -3492,7 +3494,7 @@ "refs": { "CopyObjectRequest$SSECustomerKey": "Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
Specifies the customer-provided encryption key for Amazon S3 used to encrypt the data. This value is used to decrypt the object when recovering it and must match the one used when storing the data. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
Specifies the customer-provided encryption key for Amazon S3 to use in encrypting data. This value is used to store the object and then it is discarded; Amazon S3 does not store the encryption key. The key must be appropriate for use with the algorithm specified in the x-amz-server-side-encryption-customer-algorithm
header.
The SSE Customer Key. For more information, see Server-Side Encryption (Using Customer-Provided Encryption Keys.
", @@ -3729,7 +3731,7 @@ "Suffix": { "base": null, "refs": { - "IndexDocument$Suffix": "A suffix that is appended to a request that is for a directory on the website endpoint (for example,if the suffix is index.html and you make a request to samplebucket/images/ the data that is returned will be for the object with the key name images/index.html) The suffix must not be empty and must not include a slash character.
" + "IndexDocument$Suffix": "A suffix that is appended to a request that is for a directory on the website endpoint (for example,if the suffix is index.html and you make a request to samplebucket/images/ the data that is returned will be for the object with the key name images/index.html) The suffix must not be empty and must not include a slash character.
Replacement must be made for object keys containing special characters (such as carriage returns) when using XML requests. For more information, see XML related object key constraints.
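Pulling the website pieces together (the IndexDocument suffix plus the KeyPrefixEquals / ReplaceKeyPrefixWith routing rule described earlier), a minimal sketch with placeholder names:

```go
// Sketch: website configuration with an index/error document and a
// routing rule that rewrites the docs/ prefix to documents/.
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))

	_, err := svc.PutBucketWebsite(&s3.PutBucketWebsiteInput{
		Bucket: aws.String("my-website-bucket"),
		WebsiteConfiguration: &s3.WebsiteConfiguration{
			IndexDocument: &s3.IndexDocument{Suffix: aws.String("index.html")},
			ErrorDocument: &s3.ErrorDocument{Key: aws.String("error.html")},
			RoutingRules: []*s3.RoutingRule{{
				Condition: &s3.Condition{KeyPrefixEquals: aws.String("docs/")},
				Redirect:  &s3.Redirect{ReplaceKeyPrefixWith: aws.String("documents/")},
			}},
		},
	})
	if err != nil {
		fmt.Println("put website failed:", err)
	}
}
```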