Release v1.35.24 (2020-11-09) (#3631)
Release v1.35.24 (2020-11-09)
===

### Service Client Updates
* `service/datasync`: Updates service API and documentation
* `service/dynamodb`: Updates service API, documentation, and paginators
  * This release adds support for exporting Amazon DynamoDB table data to Amazon S3 to perform analytics at any scale (see the Go sketch after this list).
* `service/ecs`: Updates service API and documentation
  * This release provides native support for specifying Amazon FSx for Windows File Server file systems as volumes in your Amazon ECS task definitions.
* `service/es`: Updates service API, documentation, and paginators
  * Adding support for package versioning in Amazon Elasticsearch Service
* `service/fsx`: Updates service API, documentation, paginators, and examples
* `service/iotanalytics`: Updates service API and documentation
* `service/macie2`: Updates service API and documentation
* `service/s3`: Updates service API, documentation, and examples
  * S3 Intelligent-Tiering adds support for Archive and Deep Archive Access tiers; S3 Replication adds replication metrics and failure notifications, and brings feature parity for delete marker replication
* `service/ssm`: Updates service API and documentation
  * Adds a new filter that allows customers to filter automation executions by the resource group used to execute the automation
* `service/storagegateway`: Updates service API, documentation, and paginators
  * Added bandwidth rate limit schedule for Tape and Volume Gateways
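
As a quick illustration of the DynamoDB item above, here is a minimal sketch using this SDK release. It assumes point-in-time recovery is already enabled on the table; the region, table ARN, bucket, and prefix are placeholders, and error handling is reduced to a single check.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	// Placeholder region; use whatever region hosts the table.
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := dynamodb.New(sess)

	// Kick off an export of the table's point-in-time data to S3 in DynamoDB JSON format.
	// TableArn, S3Bucket, and S3Prefix are placeholders.
	out, err := svc.ExportTableToPointInTime(&dynamodb.ExportTableToPointInTimeInput{
		TableArn:     aws.String("arn:aws:dynamodb:us-east-1:111122223333:table/MyTable"),
		S3Bucket:     aws.String("my-export-bucket"),
		S3Prefix:     aws.String("exports/mytable/"),
		ExportFormat: aws.String("DYNAMODB_JSON"),
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("export ARN:", aws.StringValue(out.ExportDescription.ExportArn))
}
```

The call is asynchronous: the returned export description carries the export ARN and status, which can be polled with `DescribeExport`.
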
aws-sdk-go-automation authored Nov 9, 2020
1 parent ed936ed commit ecaab03
Showing 48 changed files with 7,735 additions and 1,196 deletions.
21 changes: 21 additions & 0 deletions CHANGELOG.md
@@ -1,3 +1,24 @@
Release v1.35.24 (2020-11-09)
===

### Service Client Updates
* `service/datasync`: Updates service API and documentation
* `service/dynamodb`: Updates service API, documentation, and paginators
  * This release adds support for exporting Amazon DynamoDB table data to Amazon S3 to perform analytics at any scale.
* `service/ecs`: Updates service API and documentation
  * This release provides native support for specifying Amazon FSx for Windows File Server file systems as volumes in your Amazon ECS task definitions.
* `service/es`: Updates service API, documentation, and paginators
  * Adding support for package versioning in Amazon Elasticsearch Service
* `service/fsx`: Updates service API, documentation, paginators, and examples
* `service/iotanalytics`: Updates service API and documentation
* `service/macie2`: Updates service API and documentation
* `service/s3`: Updates service API, documentation, and examples
  * S3 Intelligent-Tiering adds support for Archive and Deep Archive Access tiers; S3 Replication adds replication metrics and failure notifications, and brings feature parity for delete marker replication
* `service/ssm`: Updates service API and documentation
  * Adds a new filter that allows customers to filter automation executions by the resource group used to execute the automation
* `service/storagegateway`: Updates service API, documentation, and paginators
  * Added bandwidth rate limit schedule for Tape and Volume Gateways

Release v1.35.23 (2020-11-06)
===

2 changes: 1 addition & 1 deletion aws/version.go
@@ -5,4 +5,4 @@ package aws
const SDKName = "aws-sdk-go"

// SDKVersion is the version of this SDK
const SDKVersion = "1.35.23"
const SDKVersion = "1.35.24"
29 changes: 29 additions & 0 deletions models/apis/datasync/2018-11-09/api-2.json
@@ -416,6 +416,19 @@
{"shape":"InvalidRequestException"},
{"shape":"InternalException"}
]
},
"UpdateTaskExecution":{
"name":"UpdateTaskExecution",
"http":{
"method":"POST",
"requestUri":"/"
},
"input":{"shape":"UpdateTaskExecutionRequest"},
"output":{"shape":"UpdateTaskExecutionResponse"},
"errors":[
{"shape":"InvalidRequestException"},
{"shape":"InternalException"}
]
}
},
"shapes":{
@@ -1611,6 +1624,22 @@
"members":{
}
},
"UpdateTaskExecutionRequest":{
"type":"structure",
"required":[
"TaskExecutionArn",
"Options"
],
"members":{
"TaskExecutionArn":{"shape":"TaskExecutionArn"},
"Options":{"shape":"Options"}
}
},
"UpdateTaskExecutionResponse":{
"type":"structure",
"members":{
}
},
"UpdateTaskRequest":{
"type":"structure",
"required":["TaskArn"],
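
The new `UpdateTaskExecution` operation above takes a task execution ARN plus an `Options` block; per the documentation updates in the next file, only `BytesPerSecond` can be modified this way. A minimal sketch of the call through this SDK release, reusing the example execution ARN from the docs and a placeholder rate:

```go
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datasync"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := datasync.New(sess)

	// Throttle a running (or queued) task execution to roughly 10 MB/s.
	// Only Options.BytesPerSecond is honored by UpdateTaskExecution.
	_, err := svc.UpdateTaskExecution(&datasync.UpdateTaskExecutionInput{
		TaskExecutionArn: aws.String("arn:aws:datasync:us-east-1:111222333444:task/task-0208075f79cedf4a2/execution/exec-08ef1e88ec491019b"),
		Options: &datasync.Options{
			BytesPerSecond: aws.Int64(10 * 1024 * 1024),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```
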
37 changes: 25 additions & 12 deletions models/apis/datasync/2018-11-09/docs-2.json
@@ -32,7 +32,8 @@
"TagResource": "<p>Applies a key-value pair to an AWS resource.</p>",
"UntagResource": "<p>Removes a tag from an AWS resource.</p>",
"UpdateAgent": "<p>Updates the name of an agent.</p>",
"UpdateTask": "<p>Updates the metadata associated with a task.</p>"
"UpdateTask": "<p>Updates the metadata associated with a task.</p>",
"UpdateTaskExecution": "<p>Updates execution of a task.</p> <p>You can modify bandwidth throttling for a task execution that is running or queued. For more information, see <a href=\"https://docs.aws.amazon.com/datasync/latest/working-with-task-executions.html#adjust-bandwidth-throttling\">Adjusting Bandwidth Throttling for a Task Execution</a>.</p> <note> <p>The only <code>Option</code> that can be modified by <code>UpdateTaskExecution</code> is <code> <a href=\"https://docs.aws.amazon.com/datasync/latest/userguide/API_Options.html#DataSync-Type-Options-BytesPerSecond\">BytesPerSecond</a> </code>.</p> </note>"
},
"shapes": {
"ActivationKey": {
@@ -57,10 +58,10 @@
"base": null,
"refs": {
"CreateLocationObjectStorageRequest$AgentArns": "<p>The Amazon Resource Name (ARN) of the agents associated with the self-managed object storage server location.</p>",
"CreateLocationS3Request$AgentArns": "<p>If you are using DataSync on an AWS Outpost, specify the Amazon Resource Names (ARNs) of the DataSync agents deployed on your AWS Outpost. For more information about launching a DataSync agent on an Amazon Outpost, see <a>outposts-agent</a>.</p>",
"CreateLocationS3Request$AgentArns": "<p>If you are using DataSync on an AWS Outpost, specify the Amazon Resource Names (ARNs) of the DataSync agents deployed on your Outpost. For more information about launching a DataSync agent on an AWS Outpost, see <a>outposts-agent</a>.</p>",
"CreateLocationSmbRequest$AgentArns": "<p>The Amazon Resource Names (ARNs) of agents to use for a Simple Message Block (SMB) location. </p>",
"DescribeLocationObjectStorageResponse$AgentArns": "<p>The Amazon Resource Name (ARN) of the agents associated with the self-managed object storage server location.</p>",
"DescribeLocationS3Response$AgentArns": "<p>If you are using DataSync on an Amazon Outpost, the Amazon Resource Name (ARNs) of the EC2 agents deployed on your AWS Outpost. For more information about launching a DataSync agent on an Amazon Outpost, see <a>outposts-agent</a>.</p>",
"DescribeLocationS3Response$AgentArns": "<p>If you are using DataSync on an AWS Outpost, the Amazon Resource Name (ARNs) of the EC2 agents deployed on your Outpost. For more information about launching a DataSync agent on an AWS Outpost, see <a>outposts-agent</a>.</p>",
"DescribeLocationSmbResponse$AgentArns": "<p>The Amazon Resource Name (ARN) of the source SMB file system location that is created.</p>",
"OnPremConfig$AgentArns": "<p>ARNs of the agents to use for an NFS location.</p>"
}
@@ -383,7 +384,7 @@
"FilterList": {
"base": null,
"refs": {
"CreateTaskRequest$Excludes": "<p>A list of filter rules that determines which files to exclude from a task. The list should contain a single filter string that consists of the patterns to exclude. The patterns are delimited by \"|\" (that is, a pipe), for example, <code>\"/folder1|/folder2\"</code> </p> <p> </p>",
"CreateTaskRequest$Excludes": "<p>A list of filter rules that determines which files to exclude from a task. The list should contain a single filter string that consists of the patterns to exclude. The patterns are delimited by \"|\" (that is, a pipe), for example, <code>\"/folder1|/folder2\"</code>. </p> <p> </p>",
"DescribeTaskExecutionResponse$Excludes": "<p>A list of filter rules that determines which files to exclude from a task. The list should contain a single filter string that consists of the patterns to exclude. The patterns are delimited by \"|\" (that is, a pipe), for example: <code>\"/folder1|/folder2\"</code> </p> <p> </p>",
"DescribeTaskExecutionResponse$Includes": "<p>A list of filter rules that determines which files to include when running a task. The list should contain a single filter string that consists of the patterns to include. The patterns are delimited by \"|\" (that is, a pipe), for example: <code>\"/folder1|/folder2\"</code> </p> <p> </p>",
"DescribeTaskResponse$Excludes": "<p>A list of filter rules that determines which files to exclude from a task. The list should contain a single filter string that consists of the patterns to exclude. The patterns are delimited by \"|\" (that is, a pipe), for example: <code>\"/folder1|/folder2\"</code> </p> <p> </p>",
@@ -647,7 +648,7 @@
"NfsSubdirectory": {
"base": null,
"refs": {
"CreateLocationNfsRequest$Subdirectory": "<p>The subdirectory in the NFS file system that is used to read data from the NFS source location or write data to the NFS destination. The NFS path should be a path that's exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network. </p> <p>To see all the paths exported by your NFS server. run \"<code>showmount -e nfs-server-name</code>\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication. </p> <p>To transfer all the data in the folder you specified, DataSync needs to have permissions to read all the data. To ensure this, either configure the NFS export with <code>no_root_squash,</code> or ensure that the permissions for all of the files that you want DataSync allow read access for all users. Doing either enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.</p> <p>If you are copying data to or from your AWS Snowcone device, see <a href=\"https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone\">NFS Server on AWS Snowcone</a> for more information.</p> <p>For information about NFS export configuration, see 18.7. The /etc/exports Configuration File in the Red Hat Enterprise Linux documentation.</p>"
"CreateLocationNfsRequest$Subdirectory": "<p>The subdirectory in the NFS file system that is used to read data from the NFS source location or write data to the NFS destination. The NFS path should be a path that's exported by the NFS server, or a subdirectory of that path. The path should be such that it can be mounted by other NFS clients in your network. </p> <p>To see all the paths exported by your NFS server, run \"<code>showmount -e nfs-server-name</code>\" from an NFS client that has access to your server. You can specify any directory that appears in the results, and any subdirectory of that directory. Ensure that the NFS export is accessible without Kerberos authentication. </p> <p>To transfer all the data in the folder you specified, DataSync needs to have permissions to read all the data. To ensure this, either configure the NFS export with <code>no_root_squash,</code> or ensure that the permissions for all of the files that you want DataSync allow read access for all users. Doing either enables the agent to read the files. For the agent to access directories, you must additionally enable all execute access.</p> <p>If you are copying data to or from your AWS Snowcone device, see <a href=\"https://docs.aws.amazon.com/datasync/latest/userguide/create-nfs-location.html#nfs-on-snowcone\">NFS Server on AWS Snowcone</a> for more information.</p> <p>For information about NFS export configuration, see 18.7. The /etc/exports Configuration File in the Red Hat Enterprise Linux documentation.</p>"
}
},
"NfsVersion": {
@@ -710,6 +711,7 @@
"DescribeTaskExecutionResponse$Options": null,
"DescribeTaskResponse$Options": "<p>The set of configuration options that control the behavior of a single execution of the task that occurs when you call <code>StartTaskExecution</code>. You can configure these options to preserve metadata such as user ID (UID) and group (GID), file permissions, data integrity verification, and so on.</p> <p>For each individual task execution, you can override these options by specifying the overriding <code>OverrideOptions</code> value to operation. </p>",
"StartTaskExecutionRequest$OverrideOptions": null,
"UpdateTaskExecutionRequest$Options": null,
"UpdateTaskRequest$Options": null
}
},
@@ -743,8 +745,8 @@
"base": null,
"refs": {
"TaskExecutionResultDetail$PrepareStatus": "<p>The status of the PREPARING phase.</p>",
"TaskExecutionResultDetail$TransferStatus": "<p>The status of the TRANSFERRING Phase.</p>",
"TaskExecutionResultDetail$VerifyStatus": "<p>The status of the VERIFYING Phase.</p>"
"TaskExecutionResultDetail$TransferStatus": "<p>The status of the TRANSFERRING phase.</p>",
"TaskExecutionResultDetail$VerifyStatus": "<p>The status of the VERIFYING phase.</p>"
}
},
"PosixPermissions": {
@@ -774,7 +776,7 @@
"S3BucketArn": {
"base": null,
"refs": {
"CreateLocationS3Request$S3BucketArn": "<p>The Amazon Resource Name (ARN) of the Amazon S3 bucket. If the bucket is on an AWS Outpost, this must be an access point ARN.</p>"
"CreateLocationS3Request$S3BucketArn": "<p>The ARN of the Amazon S3 bucket. If the bucket is on an AWS Outpost, this must be an access point ARN.</p>"
}
},
"S3Config": {
@@ -787,8 +789,8 @@
"S3StorageClass": {
"base": null,
"refs": {
"CreateLocationS3Request$S3StorageClass": "<p>The Amazon S3 storage class that you want to store your files in when this location is used as a task destination. For buckets in AWS Regions, the storage class defaults to Standard. For buckets on AWS Outposts, the storage class defaults to AWS S3 Outposts.</p> <p>For more information about S3 storage classes, see <a href=\"https://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a> in the <i>Amazon Simple Storage Service Developer Guide</i>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>",
"DescribeLocationS3Response$S3StorageClass": "<p>The Amazon S3 storage class that you chose to store your files in when this location is used as a task destination. For more information about S3 storage classes, see <a href=\"https://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a> in the <i>Amazon Simple Storage Service Developer Guide</i>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>"
"CreateLocationS3Request$S3StorageClass": "<p>The Amazon S3 storage class that you want to store your files in when this location is used as a task destination. For buckets in AWS Regions, the storage class defaults to Standard. For buckets on AWS Outposts, the storage class defaults to AWS S3 Outposts.</p> <p>For more information about S3 storage classes, see <a href=\"http://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>",
"DescribeLocationS3Response$S3StorageClass": "<p>The Amazon S3 storage class that you chose to store your files in when this location is used as a task destination. For more information about S3 storage classes, see <a href=\"http://aws.amazon.com/s3/storage-classes/\">Amazon S3 Storage Classes</a>. Some storage classes have behaviors that can affect your S3 storage cost. For detailed information, see <a>using-storage-classes</a>.</p>"
}
},
"S3Subdirectory": {
@@ -945,7 +947,8 @@
"DescribeTaskExecutionResponse$TaskExecutionArn": "<p>The Amazon Resource Name (ARN) of the task execution that was described. <code>TaskExecutionArn</code> is hierarchical and includes <code>TaskArn</code> for the task that was executed. </p> <p>For example, a <code>TaskExecution</code> value with the ARN <code>arn:aws:datasync:us-east-1:111222333444:task/task-0208075f79cedf4a2/execution/exec-08ef1e88ec491019b</code> executed the task with the ARN <code>arn:aws:datasync:us-east-1:111222333444:task/task-0208075f79cedf4a2</code>. </p>",
"DescribeTaskResponse$CurrentTaskExecutionArn": "<p>The Amazon Resource Name (ARN) of the task execution that is syncing files.</p>",
"StartTaskExecutionResponse$TaskExecutionArn": "<p>The Amazon Resource Name (ARN) of the specific task execution that was started.</p>",
"TaskExecutionListEntry$TaskExecutionArn": "<p>The Amazon Resource Name (ARN) of the task that was executed.</p>"
"TaskExecutionListEntry$TaskExecutionArn": "<p>The Amazon Resource Name (ARN) of the task that was executed.</p>",
"UpdateTaskExecutionRequest$TaskExecutionArn": "<p>The Amazon Resource Name (ARN) of the specific task execution that is being updated. </p>"
}
},
"TaskExecutionList": {
@@ -1020,7 +1023,7 @@
"TaskStatus": {
"base": null,
"refs": {
"DescribeTaskResponse$Status": "<p>The status of the task that was described.</p> <p>For detailed information about task execution statuses, see Understanding Task Statuses in the <i>AWS DataSync User Guide.</i> </p>",
"DescribeTaskResponse$Status": "<p>The status of the task that was described.</p> <p>For detailed information about task execution statuses, see Understanding Task Statuses in the <i>AWS DataSync User Guide</i>.</p>",
"TaskListEntry$Status": "<p>The status of the task.</p>"
}
},
@@ -1071,6 +1074,16 @@
"refs": {
}
},
"UpdateTaskExecutionRequest": {
"base": null,
"refs": {
}
},
"UpdateTaskExecutionResponse": {
"base": null,
"refs": {
}
},
"UpdateTaskRequest": {
"base": "<p>UpdateTaskResponse</p>",
"refs": {
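
Beyond the new operation, the documentation edits above mostly refine the AWS Outposts wording for `CreateLocationS3`. As a hedged sketch of that path with this SDK release: all ARNs below are placeholders, and `S3Config`/`BucketAccessRoleArn` belong to the existing request shape rather than anything introduced in this commit.

```go
package main

import (
	"fmt"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/datasync"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	svc := datasync.New(sess)

	// For a bucket on an AWS Outpost, S3BucketArn must be an access point ARN,
	// and AgentArns must list the DataSync agents deployed on the Outpost.
	out, err := svc.CreateLocationS3(&datasync.CreateLocationS3Input{
		S3BucketArn: aws.String("arn:aws:s3-outposts:us-east-1:111122223333:outpost/op-0123456789abcdef0/accesspoint/my-access-point"),
		S3Config: &datasync.S3Config{
			BucketAccessRoleArn: aws.String("arn:aws:iam::111122223333:role/datasync-s3-access"),
		},
		Subdirectory: aws.String("/exports"),
		AgentArns: []*string{
			aws.String("arn:aws:datasync:us-east-1:111122223333:agent/agent-0123456789abcdef0"),
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("location ARN:", aws.StringValue(out.LocationArn))
}
```

Per the updated docs, the storage class for an Outpost bucket defaults to the S3 Outposts class, so `S3StorageClass` is omitted here.
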